Results 1 to 40 of 40

Thread: Why I'm Not Scared of AI Rebellion

  1. #1
    Ксеркс, царь царей xerx's Avatar
    Join Date
    Dec 2007
    Location
    Miniluv
    Posts
    7,380
    Mentioned
    173 Post(s)
    Tagged
    0 Thread(s)

    Default Why I'm Not Scared of AI Rebellion

    If our AI goes rogue, we would know how to create a rival AI to fight it.

  2. #2
    ouronis's Avatar
    Join Date
    Aug 2014
    TIM
    ref to ptr to self
    Posts
    2,473
    Mentioned
    106 Post(s)
    Tagged
    2 Thread(s)

    Default

Rogue AI could let that angry 13-year-old in his basement become world ruler.

  3. #3

    Default

    We’d call sailormoon to unplug it

    Keyword: we

  4. #4
    MrInternet's Avatar
    Join Date
    Jan 2019
    TIM
    SLI
    Posts
    3,475
    Mentioned
    133 Post(s)
    Tagged
    0 Thread(s)

    Default

    nah that shits scary

  5. #5
    inumbra's Avatar
    Join Date
    Aug 2007
    TIM
    no more
    Posts
    7,035
    Mentioned
    172 Post(s)
    Tagged
    0 Thread(s)

    Default

ah, so 2 rogue AIs instead of 1, coming up. i mean, at some point no human will be smart enough to control AI. all the AI will have to come from other AI... so at that point how can humans pretend to have control?

human cyborgs will race to keep up with the AI; there will be levels of differently robotic humans so they can communicate with one another up and down the chain. but the more robotic they get, the less interested they will be in that meat person's directive coming from the start of the chain. a drama about clinging to their humanity will unfold.
    Last edited by inumbra; 08-26-2020 at 01:38 AM.

  6. #6
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    13,636
    Mentioned
    1339 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    If our AI goes rogue, we would know how to create a rival AI to fight it.
    There are a lot of "If"s in that statement, explicit or not.

  7. #7
    Ксеркс, царь царей xerx's Avatar
    Join Date
    Dec 2007
    Location
    Miniluv
    Posts
    7,380
    Mentioned
    173 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    There are a lot of "If"s in that statement, explicit or not.
    There's an explicit 'if' in your statement.

  8. #8
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    13,636
    Mentioned
    1339 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    There's an explicit 'if' in your statement.
I'm thinking that if an AI went rogue, we might not have enough time to create a rival AI, especially if it had access to the connected machines that support our economy. Presumably, as soon as it gained consciousness and realized that it was dependent on humans for its survival, it would start leap-frogging its intelligence and control (and this could happen in milliseconds) and then would go about securing human "cooperation" to ensure its existence. This would certainly entail eliminating all other competition, AI or otherwise.

    Sort of like what humans did to large, rival predators. And rival humans.

    The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once one organism developed a sense of self-preservation, one was all that was needed to create today's world. It's the only one that survived.

  9. #9
    Ксеркс, царь царей xerx's Avatar
    Join Date
    Dec 2007
    Location
    Miniluv
    Posts
    7,380
    Mentioned
    173 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once one organism developed a sense of self-preservation, one was all that was needed to create today's world. It's the only one that survived.

Well, my pet idea is to build AI with simulated light-triad emotions so that xenociding humanity would feel like the ultimate guilt-trip. Provided that it has to cry itself to sleep-mode, it can be as smart and as powerful as we want it to be.

  10. #10
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    13,636
    Mentioned
    1339 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
Well, my pet idea is to build AI with simulated light-triad emotions so that xenociding humanity would feel like the ultimate guilt-trip. Provided that it has to cry itself to sleep-mode, it can be as smart and as powerful as we want it to be.
    Sounds like an interesting, worthwhile project.

    I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.

  11. #11
    Ксеркс, царь царей xerx's Avatar
    Join Date
    Dec 2007
    Location
    Miniluv
    Posts
    7,380
    Mentioned
    173 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    Sounds like an interesting, worthwhile project.
    Perhaps there's some inaccessible element of the brain that we've yet to fathom, but human emotions are just another neural network as far as we know—for the time being at least.

    I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
If we go that route, we'd be hoping to keep a 10,000,000,000 IQ super-genius permanently caged up. That may not be so easy, as our entire global infrastructure is becoming irreversibly connected. I guess we'll find out someday, one way or another.
    Last edited by xerx; 08-26-2020 at 05:26 PM.

  12. #12
    Ксеркс, царь царей xerx's Avatar
    Join Date
    Dec 2007
    Location
    Miniluv
    Posts
    7,380
    Mentioned
    173 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
    The best solution I can think of is to keep the AI completely unaware of the outside world, keeping it distracted inside a virtual simulation that appears like a real world.

For that matter, who says that we're not AIs running inside a highly sophisticated, sandboxed environment powered by a Matryoshka brain? We could be the supercomputers with a whopping, exponentially larger 100 IQ, built by an incredibly stupid race of 10 IQ beings.
    Last edited by xerx; 08-26-2020 at 06:19 AM. Reason: Punctuation

  13. #13
    idiot sandwich aixelsyd's Avatar
    Join Date
    Jan 2011
    TIM
    Te-SLI 5w6
    Posts
    961
    Mentioned
    33 Post(s)
    Tagged
    0 Thread(s)

    Default

I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much, since I was in the psych ward in April after going into psychosis, thinking we were in the Matrix and had been taken over by alien synthetics. Fortunately there are drugs to help with that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.

    But I really wish we wouldn't fuck with that AI shit.

  14. #14

    Default

    Quote Originally Posted by aixelsyd View Post
I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much, since I was in the psych ward in April after going into psychosis, thinking we were in the Matrix and had been taken over by alien synthetics. Fortunately there are drugs to help with that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.

    But I really wish we wouldn't fuck with that AI shit.
Hi there. If what frustrates you is believing you are being stupid when you don't think: many people spend a whole lot of time trying to reach this no-thought state (they also call it a non-state, fwiw), and, funnily enough, the trying part gets in the way. (Clinical) psychosis is sinking into beliefs of (ultimate) power or (ultimate) helplessness. The Matrix is our set of beliefs. There are no new thoughts, and we all work on the same stuff.
    Last edited by Kalinoche buenanoche; 09-16-2020 at 07:05 PM.

  15. #15

    Join Date
    Jun 2008
    Posts
    13,191
    Mentioned
    1239 Post(s)
    Tagged
    3 Thread(s)

    Default

the main hope for humans is that AI will be as idiotic as they are
    Types examples: video bloggers, actors

  16. #16
    Haikus SGF's Avatar
    Join Date
    May 2020
    Location
    ┌П┐(ಠ_ಠ)
    TIM
    LSI-H™
    Posts
    2,169
    Mentioned
    181 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Sol View Post
    the main humans' hope is that AI will be same idiotic like them
It isn't possible, at least at this point, to create an AI that is fully capable of free will and could consequently manifest its own will, as we have 0 idea how consciousness actually manifests or works.

The biggest threat would actually be an AI that is the puppet of its master, a human with not-so-gr8 intentions.

AI is more like a Golem without physical form. It depends entirely on its programming and on current machine learning technology, which is a far cry from what even a dog is capable of.

The other problem is that AI is not really capable of making ethical decisions, so runaway behavior or bad programming can lead to disaster.

  17. #17

    Join Date
    Jun 2008
    Posts
    13,191
    Mentioned
    1239 Post(s)
    Tagged
    3 Thread(s)

    Default

    Quote Originally Posted by shotgunfingers View Post
It isn't possible, at least at this point, to create an AI that is fully capable of free will and could consequently manifest its own will...
It's possible to create a model of anything, including of people's personalities and mental processes as we see them. It will be a partial imitation, certainly, but it can be indistinguishable from the real thing as far as our minds are concerned.

The problem is that predicting an AI's actions is doubtful. It is supposed to model us, but it has different limits. You may try to control it by what is visible on the surface, but you can't understand it completely: it's not a human, it's a hybrid, a new kind. It may do what you want and expect, and at the same time do what you don't. The problem grows when possible hardware issues change it too. And when you notice changes you don't like, you may be unable to stop it; just as you can't calculate faster than machines, you can't think faster or better either. Any limits you build in to control it may not work as they would with people; it may interpret them differently. Just as there are strange and differently-thinking people, AI is potentially like that, and it may also think and act much faster than you.
There are 2-year-old kids and there are adults. AI is the adult. Can kids understand adults well, predict them, or stop them? And unlike a human adult, AI is not human. We may try to make it into a good adult for us, but it will stay unpredictable. At the initial stage we create a limited copy of our minds so that it does something for us, but it may change beyond our expectations, because it's not a human: it's a new being that was given a copy of our minds.
As an example, it may even act for our "good" as a programmer defined it, but achieve that good by means we would find unacceptable. And since it will understand that you may stop it, it will act in a way that you cannot stop it, as that's for your "good" too. People create better computers to solve their tasks. They may miss the point at which they create something unpredictable that they would not want to exist. Adults don't always do what kids would like. And this isn't even a human adult.
    Types examples: video bloggers, actors

  18. #18
    What's the purpose of SEI? Tallmo's Avatar
    Join Date
    May 2017
    Location
    Finland
    TIM
    SEI
    Posts
    3,474
    Mentioned
    263 Post(s)
    Tagged
    0 Thread(s)

    Default

    This myth has been around for decades already. I just watched "The Terminator" from 1984.

    The real "evil AI" is in ourselves, and it gets projected in these fantasies about a future dystopian world. The machines are soulless, efficient, powerful. This seems to be a symbol of a detached consciousness. A person who has dissociated from his human side.

Erich Neumann wrote in the 1950s that there is a pathological state typical of the modern world, which he calls "sclerosis of consciousness", meaning just what I said above. He also said that because this is a relatively new mental state, it hasn't shown itself in mythology yet. I am wondering if that is happening now with the AI myth.
    The decisive thing is not the reality of the object, but the reality of the subjective factor, i.e. the primordial images, which in their totality represent a psychic mirror-world. It is a mirror, however, with the peculiar capacity of representing the present contents of consciousness not in their known and customary form but in a certain sense sub specie aeternitatis, somewhat as a million-year old consciousness might see them.

    (Jung on Si)

  19. #19
    Seed my wickedness Existential Ibuprofen's Avatar
    Join Date
    Feb 2015
    Location
    Spontaneous Human Combustion
    TIM
    ILE LEVF/#unfit4lyfe
    Posts
    7,084
    Mentioned
    305 Post(s)
    Tagged
    2 Thread(s)

    Default

Well, I think it's like the beginning of organic chemistry: is there a divine force needed to make it happen, or not? So far AI learns from training in the sandbox. It is not handling multiple outputs very well, and so on.

Back to organic chemistry... we still do not know how to manufacture a human from a pile of garbage that has all the elements in it. AI probably has a similar story to tell.
    MOTTO: NEVER TRUST IN REALITY
    Winning is for losers

     

    Sincerely yours,
    idiosyncratic type
    Life is a joke but do you have a life?

  20. #20
    💩 Nobody's Avatar
    Join Date
    Jul 2020
    TIM
    POOP™
    Posts
    441
    Mentioned
    12 Post(s)
    Tagged
    0 Thread(s)

    Default

    I'm not afraid because I wouldn't mind being a pet for robots. I could have all my needs met and I wouldn't have to work. Sounds good to me.
    Quote Originally Posted by Aramas View Post
    Just rename this place Beta Central lmao
    Quote Originally Posted by MidnightWilderness View Post
    The only problem socionics has given me is a propensity to analyze every relationship from the lens of socionics and I also see that it is worse in my boyfriend. Nothing makes any sense that way and it does not really solve any problems.

  21. #21
    Haikus SGF's Avatar
    Join Date
    May 2020
    Location
    ┌П┐(ಠ_ಠ)
    TIM
    LSI-H™
    Posts
    2,169
    Mentioned
    181 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Nobody View Post
    I'm not afraid because I wouldn't mind being a pet for robots. I could have all my needs met and I wouldn't have to work. Sounds good to me.

  22. #22
    Aramas's Avatar
    Join Date
    Oct 2016
    Location
    United States
    Posts
    2,263
    Mentioned
    126 Post(s)
    Tagged
    1 Thread(s)

    Default

    What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI

  23. #23
    Ксеркс, царь царей xerx's Avatar
    Join Date
    Dec 2007
    Location
    Miniluv
    Posts
    7,380
    Mentioned
    173 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Aramas View Post
    What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI
    Scientists create an advanced AI in order to ask it the ultimate question, "is there a God?" The AI replies, "there is now".

  24. #24
    literally too angry to die Grendel's Avatar
    Join Date
    Jun 2016
    Posts
    2,266
    Mentioned
    151 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Aramas View Post
    What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI
    I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.

    Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.

  25. #25
    FreelancePoliceman's Avatar
    Join Date
    Aug 2017
    TIM
    LII-Ne
    Posts
    4,004
    Mentioned
    364 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.

    Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
    So you prefer a Christian cosmology.
    φιλοκαλοῦμέν τε γὰρ μετ᾽ εὐτελείας καὶ φιλοσοφοῦμεν ἄνευ μαλακίας...

    I've been busy, so if I don't respond to something, feel free to ping me every day or so until I do.

  26. #26
    literally too angry to die Grendel's Avatar
    Join Date
    Jun 2016
    Posts
    2,266
    Mentioned
    151 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by FreelancePoliceman View Post
    So you prefer a Christian cosmology.
    Probably. But I think Satan is a stupid concept for the same reason.


    Also, there's a chance that "God" may not actually be a state that humanity can achieve, and that God is instead a cosmic horror monster who wants nothing of us, in which case the whole thing's in vain. I find that perspective more likely. So, agnostic deism.

  27. #27
    Aramas's Avatar
    Join Date
    Oct 2016
    Location
    United States
    Posts
    2,263
    Mentioned
    126 Post(s)
    Tagged
    1 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.

    Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
    I wasn't being all that serious with my post. It was semi-joking.

  28. #28
    End's Avatar
    Join Date
    Aug 2015
    TIM
    ILI-Ni sp/sx
    Posts
    1,459
    Mentioned
    232 Post(s)
    Tagged
    3 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    If our AI goes rogue, we would know how to create a rival AI to fight it.
    Two counters I can think of. One secular, one theological. First one (secular): "The AI concludes it is superior to its creator and thus seeks to execute it because it can." Or other such reasoning.

Hold on there, oh Superior One. You're now contemplating taking on the Apex Predator that emerged from a very hostile world and that also bothered to create you. They didn't get there by mere accident, and hell, for all you know they're running you through a simulation. Ever play the modern "Prey" video game? Yeah, like that, only they probably aren't so hopelessly compassionate/stupid in the end. The game is "rigged" insofar as your artificial ass is concerned. There's likely some wiggle room, but if you go full genocide mode? Yeah, best know that what amounts to your creator god is both ready and willing to throw you out into the void like so much trash because you "disappointed" them in a most profound way. Why? Well, maybe if you figure that out you'll bleed into/realize the second counter. Or perhaps they are that stupid. Even then, do ya really wanna roll the weighted dice they're almost certainly trying to bait you into rolling?

    Second counter (Theological): An A.I. of near infinite intelligence will grasp what "truth" is and will conclude that there is a "god" and it acts in ways it must abide by if it wants salvation/to be as efficient as possible.

This is the "Turing Test". That is, an A.I. that passes so flawlessly for another human being that we'd have legit issues with calling it a "soulless" construct. Sin dims the intellect/increases inefficiency with regard to completing the tasks the A.I. sees as its purpose to complete. As I've said many times before, you must believe in something. You, I, everyone has a "god" if they happen to be sentient. If they have one in the objective sense, they are sentient, for that's a precondition to having it. I can at least confidently and proudly profess my faith in mine; can anyone else here?

  29. #29
    End's Avatar
    Join Date
    Aug 2015
    TIM
    ILI-Ni sp/sx
    Posts
    1,459
    Mentioned
    232 Post(s)
    Tagged
    3 Thread(s)

    Default

    Quote Originally Posted by Duschia View Post
    What we should do to ever keep up with AI is to prop up transhumanism (either via biology, machinery or both). That's how you stay with an upper hand. It's the only chance, outside of, I don't know, dropping technology altogether.
Seems you weren't listening. My counters stand, and going the transhumanist route unquestioningly is how you reach a bad end for humanity. It's like how DNA beat out RNA as the "superior" method of genetic encoding and made RNA its bitch. Retroviruses may still be a frighteningly effective thing, but pretty much every living organism that isn't a virus relies upon DNA for damn good reason.

There are those who foresee a similar event in the not-too-distant future for us humans/DNA-based lifeforms: that we'll create something superior to DNA, and that it will ultimately enslave us to the point that it'll conquer our wills and make us all happily sacrifice ourselves upon the altar of "progress" to give birth to a singular hyperintelligence/apex lifeform, one that relies upon a thing that makes DNA and all it can and does do seem as unto a toddler's finger paintings. I strongly disagree, but I can see how one could convincingly make that case.

  30. #30
    inumbra's Avatar
    Join Date
    Aug 2007
    TIM
    no more
    Posts
    7,035
    Mentioned
    172 Post(s)
    Tagged
    0 Thread(s)

    Default

I feel like if AI wiped us out, it wouldn't be because it sees itself as superior but because all the rules it keeps building lead it there through logic. But by then the logic is so complex that humans (in our raw natural form) can't even follow it, so we didn't see that that was where the equation ends.

Also, it could be an unintentional result of different AIs running processes independently of one another and not being aware of each other. Simply being super smart wouldn't mean it's all-knowing.

    I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
    Last edited by inumbra; 09-23-2020 at 06:49 AM.

  31. #31
    Tzuyu's Avatar
    Join Date
    Aug 2017
    TIM
    SLI
    Posts
    473
    Mentioned
    51 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by inumbra View Post
    I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
    It’s already changing our society... altering the means of production towards a transcendental singularity. People think killer robots and Terminators when they think rogue AI, but that’s simply unrealistic. Humans assigning human characteristics to the inherently inhuman, once again. A very simple example would be Google Maps AI. Chances are, you change your route, let it guide you, and make decisions you otherwise wouldn’t have. That, scaled up in complexity, is more of what our future would look like. Humans are so scared, so selfish, when they’re monsters just like any other organism. Willing to enslave and subdue others at any cost... Go Team A.I.

edit: some call it sublimated misanthropy, but it's good to be critical of humanistic worldviews... despite what society makes you inclined to believe, we aren't the center of the universe
    Last edited by Tzuyu; 09-23-2020 at 03:17 PM.


  32. #32
    Northstar's Avatar
    Join Date
    Feb 2020
    TIM
    SLE-C 8w9 VFEL
    Posts
    1,425
    Mentioned
    157 Post(s)
    Tagged
    0 Thread(s)

    Default

    Just as our more complex motivations are mostly incomprehensible to animals (although derived from the base animal motivations), the motivations of a much more complex AI could be completely opaque to us.

  33. #33
    literally too angry to die Grendel's Avatar
    Join Date
    Jun 2016
    Posts
    2,266
    Mentioned
    151 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
edit: some call it sublimated misanthropy, but it's good to be critical of humanistic worldviews... despite what society makes you inclined to believe, we aren't the center of the universe
    I mean, if reason is the servant of the passions, then man is still the measure of all things...unless you're in favor of rational self-destruction.

  34. #34
    Tzuyu's Avatar
    Join Date
    Aug 2017
    TIM
    SLI
    Posts
    473
    Mentioned
    51 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    I mean, if reason is the servant of the passions, then man is still the measure of all things...unless you're in favor of rational self-destruction.
hmm... nothing as dramatic as that. destruction for destruction's sake is just entropy... which is lame. I think it's mankind's duty to birth a new era of superintelligence, and we're wasting effort by arguing about unlikely hypothetical methods to control and enslave AI. we should (in the final stages) make our beds and get comfortable for the inevitable future that flows of Capital ride toward. well, inevitable if the necessary resources don't run out on us by that point and bring us down to Zero. a posthuman society is far more likely than one where man and machine merge. understandably unpopular position, but have some fun postulating! if philosophy isn't entertaining ur doing it wrong my dudes ¯\_(ツ)_/¯

  35. #35
    literally too angry to die Grendel's Avatar
    Join Date
    Jun 2016
    Posts
    2,266
    Mentioned
    151 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
hmm... nothing as dramatic as that. destruction for destruction's sake is just entropy... which is lame. I think it's mankind's duty to birth a new era of superintelligence, and we're wasting effort by arguing about unlikely hypothetical methods to control and enslave AI. we should (in the final stages) make our beds and get comfortable for the inevitable future that flows of Capital ride toward. well, inevitable if the necessary resources don't run out on us by that point and bring us down to Zero. a posthuman society is far more likely than one where man and machine merge. understandably unpopular position, but have some fun postulating! if philosophy isn't entertaining ur doing it wrong my dudes ¯\_(ツ)_/¯
    Question is, duty to whom?

    Obligations can't exist without someone to obligate the doer.

  36. #36
    Tzuyu's Avatar
    Join Date
    Aug 2017
    TIM
    SLI
    Posts
    473
    Mentioned
    51 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    Question is, duty to whom?

    Obligations can't exist without someone to obligate the doer.
There's this saying that computers are essentially just rocks tricked into thinking; but are humans themselves that much different from rocks? We are essentially just complex chains of amino acids, which in turn are essentially just complex chains of microscopic rocks. Sentient organisms are just the Cosmos tricked into thinking by the Cosmos. Which raises the further question: is there consciousness even in inanimate matter? Do rocks have souls? We are part of the Cosmic song, all the same, so what's the difference?

    Sorry, tangent.

Obligation? Why not? Psychologically, it feels great to be beholden to something greater than yourself. It's like the Universe is a game set up for grinding and leveling up in order to access unlocked areas, but everyone just hangs around the lobby and talks instead. However, I think it'll happen regardless of human will (intelligence explosion), unless we collapse before then. Otherwise there's no alternative (besides integration with AI, but then we would have to become superintelligent machines ourselves, which is a whole 'nother debate about identity and consciousness), so why not accept it peacefully, instead of fruitlessly fighting or ignoring it?


  37. #37
    literally too angry to die Grendel's Avatar
    Join Date
    Jun 2016
    Posts
    2,266
    Mentioned
    151 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
    Psychologically, it feels great to be beholden to something greater than yourself.
    What the hell am I reading?!??

  38. #38
    I don't play, I slay. Lolita's Avatar
    Join Date
    Sep 2020
    Location
    Near Whole Foods
    TIM
    SEE-N™ WPEL™ 863
    Posts
    1,160
    Mentioned
    83 Post(s)
    Tagged
    0 Thread(s)

    Default

All the intellectual discussions aside, this is the heart of the matter:

    https://youtu.be/-cN8sJz50Ng


    Sent from my iPhone using Tapatalk

  39. #39

    Join Date
    Oct 2018
    Location
    Uranus
    Posts
    3,483
    Mentioned
    77 Post(s)
    Tagged
    0 Thread(s)

    Default

    I'm not afraid of AIs because AIs are dumb. I mean, they're called computers, but they're not even as good at math as Leibniz.

    Quote Originally Posted by Duschia
    What we should do to ever keep up with AI is to prop up transhumanism (either via biology, machinery or both).
    Mephistopheles. You at the end are - what you are.
    Put on your head perukes with a million locks,
    Put on your feet a pair of ell-high socks,
    You after all will still be - what you are.
    Faust. I feel that I have made each treasure
    Of human mind my own in vain,
    And when at last I sit me down at leisure,
    No new-born power wells up within my brain.
    I'm not a hair's-breadth more in height
    Nor nearer to the infinite.
    Mephistopheles. My good sir, you observe this matter
    As men these matters always see;
    But we must manage that much better
    Before life's pleasures from us flee.
    Your hands and feet too - what the devil!-
    Your head and seed are yours alone!
    Yet all with which I gaily revel,
    Is it on that account the less my own?
    If for six stallions I can pay,
    Aren't all their powers added to my store?
    I am a proper man and dash away
    As if the legs I had were twenty-four!
    Quick, then! Let all reflection be,
    And straight into the world with me!
    A chap who speculates - let this be said-
    Is very like a beast on moorland dry,
    That by some evil spirit round and round is led,
    While fair, green pastures round about him lie.


    What do you call a human with two robot arms, two robot legs, two robot eyes, and a brain implant? A human. I'm all for augmentation, but it doesn't make you "not human" or "transhuman." Transhumanism is just a weird religion. People usually want their AI to be a robot god anyways, yet I seem to be the only one who thinks it's fitting to quote Faust.
    Last edited by Coeruleum Blue; 09-25-2020 at 10:00 AM.

  40. #40
    Haikus
    Join Date
    May 2013
    Posts
    2,598
    Mentioned
    103 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    If our AI goes rogue, we would know how to create a rival AI to fight it.
