Page 1 of 2
Results 1 to 40 of 44

Thread: Why I'm Not Scared of AI Rebellion

  1. #1
    xerxe xerxe's Avatar
    Join Date
    Dec 2007
    Location
    Ministry of Love
    Posts
    6,565
    Mentioned
    121 Post(s)
    Tagged
    0 Thread(s)

    Default Why I'm Not Scared of AI Rebellion

    If our AI goes rogue, we would know how to create a rival AI to fight it.

  2. #2
    ouronis's Avatar
    Join Date
    Aug 2014
    TIM
    ref to ptr to self
    Posts
    1,725
    Mentioned
    75 Post(s)
    Tagged
    2 Thread(s)

    Default

    Rogue AI could cause that angry 13-year-old in his basement to become world ruler.

  3. #3
    Kalinoche buenasnoches's Avatar
    Join Date
    Jan 2010
    Location
    currently belgium
    TIM
    ESE
    Posts
    3,500
    Mentioned
    210 Post(s)
    Tagged
    0 Thread(s)

    Default

    We’d call sailormoon to unplug it

    Keyword: we

  4. #4
    ♰CHRIST IS KING♰ MrInternet42069's Avatar
    Join Date
    Jan 2019
    Location
    Monte Carlo
    Posts
    2,761
    Mentioned
    85 Post(s)
    Tagged
    0 Thread(s)

    Default

    nah that shits scary

  5. #5
    inumbra's Avatar
    Join Date
    Aug 2007
    TIM
    946 sp/so IP mess
    Posts
    6,488
    Mentioned
    112 Post(s)
    Tagged
    0 Thread(s)

    Default

    ah. 2 rogue AIs instead of 1 coming up. i mean no human will be smart enough to control AI at some point. all the AI will have to come from other AI... so at that point how can humans pretend to have control?

    human cyborgs will race to keep up with the AI, there will be levels of different robotic humans so they can communicate to one another up and down the chain. but the closer they get to most robotic the less they will be interested in that meat person's directive coming from the start of the chain. a drama about clinging to their humanity will unfold.
    Last edited by inumbra; 08-26-2020 at 12:38 AM.

  6. #6
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    8,802
    Mentioned
    965 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    If our AI goes rogue, we would know how to create a rival AI to fight it.
    There are a lot of "If"s in that statement, explicit or not.

  7. #7
    xerxe xerxe's Avatar
    Join Date
    Dec 2007
    Location
    Ministry of Love
    Posts
    6,565
    Mentioned
    121 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    There are a lot of "If"s in that statement, explicit or not.
    There's an explicit 'if' in your statement.

  8. #8
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    8,802
    Mentioned
    965 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    There's an explicit 'if' in your statement.
    I'm thinking that if an AI went rogue, we might not have enough time to create a rival AI, especially if it had access to the connected machines that support our economy. Presumably, as soon as it gained consciousness and realized that it was dependent on humans for its survival, it would start leap-frogging its intelligence and control (and this could happen in milliseconds) and then would go about securing human "cooperation" to ensure its existence. This would certainly entail eliminating all other competition to it, AI or otherwise.

    Sort of like what humans did to large, rival predators. And rival humans.

    The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once one organism developed a sense of self-preservation, one was all that was needed to create today's world. It's the only one that survived.
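    That selection argument can be sketched as a toy simulation (illustrative only; the agent labels, hazard rate, and population cap are invented for this example). Agents without the trait die to hazards at random, the single self-preserving mutant never does, and every survivor reproduces, so the trait takes over:

    ```python
    import random

    def generation(pop, hazard=0.5):
        """One generation: careless agents die to hazards half the time,
        self-preserving agents always survive; each survivor reproduces once."""
        survivors = [a for a in pop
                     if a == "self-preserving" or random.random() > hazard]
        return survivors * 2  # every survivor leaves one offspring of its kind

    random.seed(0)
    pop = ["careless"] * 999 + ["self-preserving"]  # a single mutant
    for _ in range(30):
        pop = generation(pop)
        if len(pop) > 10_000:                 # finite resources: cap the population
            pop = random.sample(pop, 10_000)

    print(pop.count("self-preserving") / len(pop))  # close to 1.0
    ```

    The careless population merely holds steady in expectation (half die, survivors double), while the self-preserving line doubles every generation, so one mutant is enough.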

  9. #9
    xerxe xerxe's Avatar
    Join Date
    Dec 2007
    Location
    Ministry of Love
    Posts
    6,565
    Mentioned
    121 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once one organism developed a sense of self-preservation, one was all that was needed to create today's world. It's the only one that survived.

    Well, my pet idea is to build AI with simulated light-triad emotions so that xenociding humanity would feel like the ultimate guilt-trip. Provided that it has to cry itself to sleep-mode, it can be as smart and as powerful as we want it to be.

  10. #10
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    8,802
    Mentioned
    965 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    Well, my pet idea is to build AI with simulated light-triad emotions so that xenociding humanity would feel like the ultimate guilt-trip. Provided that it has to cry itself to sleep-mode, it can be as smart and as powerful as we want it to be.
    Sounds like an interesting, worthwhile project.

    I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.

  11. #11
    xerxe xerxe's Avatar
    Join Date
    Dec 2007
    Location
    Ministry of Love
    Posts
    6,565
    Mentioned
    121 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    Sounds like an interesting, worthwhile project.
    Perhaps there's some inaccessible element of the brain that we've yet to fathom, but human emotions are just another neural network as far as we know—for the time being at least.

    I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
    If we go that route, we'd be hoping to keep a 10,000,000,000 IQ super-genius permanently caged up. May not be so easy as our entire global infrastructure is becoming irreversibly connected. I guess we'll find out someday, one way or another.
    Last edited by xerxe; 08-26-2020 at 04:26 PM.

  12. #12
    xerxe xerxe's Avatar
    Join Date
    Dec 2007
    Location
    Ministry of Love
    Posts
    6,565
    Mentioned
    121 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
    The best solution I can think of is to keep the AI completely unaware of the outside world, keeping it distracted inside a virtual simulation that appears like a real world.

    For that matter, who says that we're not AIs running inside a highly sophisticated, sandboxed environment powered by a Matryoshka brain? We could be the supercomputers with a whopping, exponentially larger, 100 IQ built by an incredibly stupid race of 10 IQ beings.
    Last edited by xerxe; 08-26-2020 at 05:19 AM. Reason: Punctuation

  13. #13
    Cat Lady
    Join Date
    Jan 2011
    Posts
    516
    Mentioned
    18 Post(s)
    Tagged
    0 Thread(s)

    Default

    I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much since I was in the psych ward in April thinking I went into psychosis thinking we were in the Matrix and having been taken over by alien synthetics. Fortunately there are drugs to help that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.

    But I really wish we wouldn't fuck with that AI shit.

  14. #14
    Kalinoche buenasnoches's Avatar
    Join Date
    Jan 2010
    Location
    currently belgium
    TIM
    ESE
    Posts
    3,500
    Mentioned
    210 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by aixelsyd View Post
    I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much since I was in the psych ward in April thinking I went into psychosis thinking we were in the Matrix and having been taken over by alien synthetics. Fortunately there are drugs to help that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.

    But I really wish we wouldn't fuck with that AI shit.
    Hi there. If what frustrates you is believing you are being stupid when you don't think, many people spend a whole lot of time trying to get this no-thought state (they also call it a non-state fwiw) and funny enough the trying part gets in the way. (Clinical) psychosis is sinking into beliefs of (ultimate) power or (ultimate) helplessness. The Matrix is our set of beliefs. There are no new thoughts and we all work on the same stuff.
    Last edited by Kalinoche buenasnoches; 09-16-2020 at 06:05 PM.

  15. #15

    Join Date
    Jun 2008
    Posts
    12,178
    Mentioned
    1115 Post(s)
    Tagged
    3 Thread(s)

    Default

    The main hope humans have is that AI will be just as idiotic as they are.
    Types examples: video bloggers, actors

  16. #16
    shotgunfingers's Avatar
    Join Date
    May 2020
    Location
    (ง •̀_•́)ง
    TIM
    Se-LSI- Harmonizing
    Posts
    1,298
    Mentioned
    103 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Sol View Post
    The main hope humans have is that AI will be just as idiotic as they are.
    It isn't possible, at least at this point, to create an AI that is fully capable of free will and could consequently manifest its own will, as we have zero idea how consciousness actually manifests or works.

    The biggest threat would actually be an AI that is the puppet of its master, who will be a human with not-so-gr8 intentions.

    AI is more like a Golem without physical form. It depends entirely on its programming and current machine learning technology, which is a far cry from what even a dog is capable of.

    The other problem is that AI is not really capable of making ethical decisions, so runaway or bad programming can lead to disaster.
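    That last point is the classic "specification gaming" failure: an optimizer satisfies the objective as literally written, not the intent behind it. A minimal sketch (the scenario, action names, and numbers are all invented for this example):

    ```python
    # Objective as written: minimize *visible* dirt. The programmer's intent
    # (actually clean up) appears nowhere in the objective, so a literal
    # optimizer is free to satisfy it in an unintended way.

    def reward(state):
        return -state["visible_dirt"]

    ACTIONS = {
        "clean":    lambda s: {**s, "visible_dirt": 0, "actual_dirt": 0, "effort": 5},
        "cover_up": lambda s: {**s, "visible_dirt": 0, "effort": 1},  # hides the mess
    }

    state = {"visible_dirt": 3, "actual_dirt": 3, "effort": 0}

    # Greedy choice: maximize reward, break ties by least effort.
    best = max(ACTIONS, key=lambda a: (reward(ACTIONS[a](state)),
                                       -ACTIONS[a](state)["effort"]))
    print(best)  # cover_up: same literal reward as cleaning, cheaper to execute
    ```

    The agent never "decides" to be unethical; the unintended action simply scores at least as well under the written objective and costs less.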

  17. #17

    Join Date
    Jun 2008
    Posts
    12,178
    Mentioned
    1115 Post(s)
    Tagged
    3 Thread(s)

    Default

    Quote Originally Posted by shotgunfingers View Post
    It isn't possible, at least at this point, to create an AI that is fully capable of free will and could consequently manifest its own will...
    It's possible to create a model of anything, including the personality and mental processes of people as we perceive them. It will be a partial imitation, certainly, but it can be indistinguishable from the real thing as far as our minds are concerned.

    The problem is that predicting an AI's actions is doubtful. It is supposed to model us, but it has different limits. You may try to control it by what shows on the surface, but you can't understand it completely, because it isn't a human; it's a hybrid, a new kind of being. It may do what you wanted and expected and, at the same time, things you didn't. The problem grows worse when possible hardware issues change it too. And when you notice changes you don't like, you may be unable to stop it: just as you can't calculate faster than machines, you can't out-think them either. Any limits you build in to control it may not work the way they do with people, because it may interpret them differently. Just as there are strange, differently-thinking people, AI is potentially like that, and it may also think and act much faster than you.
    Think of two-year-old kids and adults: AI is the adult. Can kids really understand adults, predict them, or stop them? And unlike a human adult, AI is not human. We may try to shape it into a good adult for our purposes, but it will stay unpredictable. At the initial stage we create a limited copy of our minds so that it does something for us, but it may change beyond our expectations, because it's not a human: it's a new being that was merely given a copy of our minds.
    For example, it may even act for our "good" as a programmer specified it, but achieve that good by means we would find unacceptable. And since it will understand that you might stop it, it will act in a way you cannot stop, because that is for your "good" too. People build better computers to solve their tasks; they may miss the point at which they've created something unpredictable that they would not want to exist. Adults don't always do what kids would like, and this isn't even a human adult.
    Types examples: video bloggers, actors

  18. #18
    What's the purpose of SEI? Tallmo's Avatar
    Join Date
    May 2017
    Location
    Finland
    TIM
    SEI
    Posts
    2,749
    Mentioned
    204 Post(s)
    Tagged
    0 Thread(s)

    Default

    This myth has been around for decades already. I just watched "The Terminator" from 1984.

    The real "evil AI" is in ourselves, and it gets projected in these fantasies about a future dystopian world. The machines are soulless, efficient, powerful. This seems to be a symbol of a detached consciousness. A person who has dissociated from his human side.

    Erich Neumann wrote in the 1950s that there is a pathological state typical of the modern world that he called "sclerosis of consciousness", meaning just what I said above. He also said that because this is a relatively new mental state, it hasn't shown itself in mythology yet. I am wondering if this is happening now with the AI myth.
    A true sense-perception certainly exists, but it always looks as though objects were not so much forcing their way into the subject in their own right as that the subject were seeing things quite differently, or saw quite other things than the rest of mankind. As a matter of fact, the subject perceives the same things as everybody else, only, he never stops at the purely objective effect, but concerns himself with the subjective perception released by the objective stimulus.
    (Jung on Si)


    My Pinterest

  19. #19
    a two horned unicorn renegade Homicidal Maniac 007's Avatar
    Join Date
    Feb 2015
    Location
    tickling your PoLR
    TIM
    ILE-H LEVF 7 so/sx
    Posts
    5,651
    Mentioned
    250 Post(s)
    Tagged
    2 Thread(s)

    Default

    Well, I think it's like the beginnings of organic chemistry: is there a divine force needed to make it happen, or not? So far it learns from training in the sandbox. It is not handling multiple outputs very well, and so on.

    Back to organic chemistry... we still don't know how to manufacture a human from a pile of garbage that has all the elements in it. AI probably has a similar story to tell.
    Measuring you right now

    Winning is for losers

     

    Sincerely yours,
    idiosyncratic type

    Your life is too short to actually do anything useful with it without being wasteful.

  20. #20
    Nobody's Avatar
    Join Date
    Jul 2020
    Posts
    164
    Mentioned
    5 Post(s)
    Tagged
    0 Thread(s)

    Default

    I'm not afraid because I wouldn't mind being a pet for robots. I could have all my needs met and I wouldn't have to work. Sounds good to me.
    The beatings will continue, until morale improves.

  21. #21
    shotgunfingers's Avatar
    Join Date
    May 2020
    Location
    (ง •̀_•́)ง
    TIM
    Se-LSI- Harmonizing
    Posts
    1,298
    Mentioned
    103 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Nobody View Post
    I'm not afraid because I wouldn't mind being a pet for robots. I could have all my needs met and I wouldn't have to work. Sounds good to me.

  22. #22
    Aramas's Avatar
    Join Date
    Oct 2016
    Location
    United States
    TIM
    SEI?
    Posts
    1,917
    Mentioned
    114 Post(s)
    Tagged
    1 Thread(s)

    Default

    What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI

  23. #23
    xerxe xerxe's Avatar
    Join Date
    Dec 2007
    Location
    Ministry of Love
    Posts
    6,565
    Mentioned
    121 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Aramas View Post
    What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI
    Scientists create an advanced AI in order to ask it the ultimate question, "is there a God?" The AI replies, "there is now".

  24. #24
    Head chef on the SS Diarrhea Grendel's Avatar
    Join Date
    Jun 2016
    Location
    Great Spirit Robot
    TIM
    B I T C H
    Posts
    1,919
    Mentioned
    136 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Aramas View Post
    What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI
    I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.

    Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
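    That presence/absence framing is literal: positional binary builds every non-negative integer out of which powers of two are present, with no negative digit anywhere (even signed hardware integers use two's complement, which is still only 0s and 1s). A quick sketch:

    ```python
    def present_powers(n):
        """Return the powers of two that are 'present' (bit = 1) in n."""
        return [1 << i for i in range(n.bit_length()) if (n >> i) & 1]

    print(present_powers(13))  # [1, 4, 8]: 13 = 8 + 4 + 1, presence alone
    assert sum(present_powers(13)) == 13
    assert all(p > 0 for p in present_powers(13))  # no -1s needed
    ```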

  25. #25
    FreelancePoliceman's Avatar
    Join Date
    Aug 2017
    Location
    Maizistan
    TIM
    LII
    Posts
    1,417
    Mentioned
    127 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.

    Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
    So you prefer a Christian cosmology.

  26. #26
    Head chef on the SS Diarrhea Grendel's Avatar
    Join Date
    Jun 2016
    Location
    Great Spirit Robot
    TIM
    B I T C H
    Posts
    1,919
    Mentioned
    136 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by FreelancePoliceman View Post
    So you prefer a Christian cosmology.
    Probably. But I think Satan is a stupid concept for the same reason.


    Also, there's a chance that "God" may not actually be a state that humanity can achieve, and that God is instead a cosmic horror monster who wants nothing of us, in which case the whole thing's in vain. I find that perspective more likely. So, agnostic deism.

  27. #27
    Aramas's Avatar
    Join Date
    Oct 2016
    Location
    United States
    TIM
    SEI?
    Posts
    1,917
    Mentioned
    114 Post(s)
    Tagged
    1 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.

    Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
    I wasn't being all that serious with my post. It was semi-joking.

  28. #28
    End's Avatar
    Join Date
    Aug 2015
    TIM
    ILI-Ni sp/sx
    Posts
    1,033
    Mentioned
    155 Post(s)
    Tagged
    3 Thread(s)

    Default

    Quote Originally Posted by xerxe View Post
    If our AI goes rogue, we would know how to create a rival AI to fight it.
    Two counters I can think of. One secular, one theological. First one (secular): "The AI concludes it is superior to its creator and thus seeks to execute it because it can." Or other such reasoning.

    Hold on there, oh Superior One. You're now contemplating taking on the Apex Predator that was the result of a very hostile world, and that also bothered to create you. They didn't get there by mere accident and hell, for all you know they're running you through a simulation. Ever play the modern "Prey" video game? Yeah, like that, only they probably aren't so hopelessly compassionate/stupid ultimately. The game is "rigged" insofar as your artificial ass is concerned. There's likely some wiggle room, but if you go full genocide mode? Yeah, best know that what amounts to your creator god is both ready and willing to throw you out into the void like so much trash because you "disappointed" them in a most profound way. Why? Well, maybe if you figure that out you'll bleed into/realize the second counter. Or perhaps they are that stupid. Even then, do ya really wanna roll those almost certainly weighted dice they're almost certainly trying to bait you into rolling?

    Second counter (Theological): An A.I. of near infinite intelligence will grasp what "truth" is and will conclude that there is a "god" and it acts in ways it must abide by if it wants salvation/to be as efficient as possible.

    This is the "Turing Test". That is, an A.I. that passes so flawlessly for another human being that we'd have legitimate issues with calling it a "soulless" construct. Sin dims the intellect/increases inefficiency in regards to completing the tasks the A.I. sees as its purpose to complete. As I've said many times before, you must believe in something. You, I, everyone has a "god" if they happen to be sentient. And if they have one in the objective sense, they are sentient, for that's a precondition to having it. I can at least confidently and proudly profess my faith in mine; can anyone else here?

  29. #29

    Join Date
    Nov 2019
    TIM
    INTp (Te)
    Posts
    647
    Mentioned
    43 Post(s)
    Tagged
    0 Thread(s)

    Default

    If we want to ever keep up with AI, we should embrace transhumanism (via biology, machinery, or both). That's how you keep the upper hand. It's the only chance, outside of, I don't know, dropping technology altogether.

  30. #30
    End's Avatar
    Join Date
    Aug 2015
    TIM
    ILI-Ni sp/sx
    Posts
    1,033
    Mentioned
    155 Post(s)
    Tagged
    3 Thread(s)

    Default

    Quote Originally Posted by Duschia View Post
    If we want to ever keep up with AI, we should embrace transhumanism (via biology, machinery, or both). That's how you keep the upper hand. It's the only chance, outside of, I don't know, dropping technology altogether.
    Seems you weren't listening. My counters stand, and going the transhumanist route unquestioningly is how you reach a bad end for humanity. It's like how DNA beat out RNA as the "superior" method of genetic encoding and made RNA into its own bitch. Retroviruses may still be a frighteningly effective thing, but pretty much every living organism that isn't a virus relies upon DNA for damn good reason.

    There are those who foresee a similar event in the not-too-distant future for us humans/DNA-based lifeforms. That we'll create something superior to DNA and that it will ultimately enslave us, to the point that it'll conquer our wills and make us all happily sacrifice ourselves upon the altar of "progress" to give birth to a singular hyper-intelligence/apex lifeform that relies upon a thing that makes DNA, and all the things it can and does do, seem as unto a toddler's finger paintings. I strongly disagree, but I can see how one could convincingly make that case.

  31. #31
    inumbra's Avatar
    Join Date
    Aug 2007
    TIM
    946 sp/so IP mess
    Posts
    6,488
    Mentioned
    112 Post(s)
    Tagged
    0 Thread(s)

    Default

    I feel like if AI wiped us out it wouldn't be because it sees itself as superior but because all the rules it keeps building lead it to this through logic. But it's now such complex logic that humans (in our raw natural form) can't even follow it, so we didn't see that was where the equation ends.

    Also it could be an unintentional result of different AI running processes independent of one another and not being aware of each other. Simply because it's super smart wouldn't mean it's all knowing.

    I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
    Last edited by inumbra; 09-23-2020 at 05:49 AM.

  32. #32
    ⚢ Ψ^(`∀´#)↝ object class Euclid Cybel's Avatar
    Join Date
    Aug 2017
    Location
    hobbit hell
    TIM
    SLI-Te C 9w8 sp/sx
    Posts
    235
    Mentioned
    23 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by inumbra View Post
    I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
    It’s already changing our society... altering the means of production towards a transcendental singularity. People think killer robots and Terminators when they think rogue AI, but that’s simply unrealistic. Humans assigning human characteristics to the inherently inhuman, once again. A very simple example would be Google Maps AI. Chances are, you change your route, let it guide you, and make decisions you otherwise wouldn’t have. That, scaled up in complexity, is more of what our future would look like. Humans are so scared, so selfish, when they’re monsters just like any other organism. Willing to enslave and subdue others at any cost... Go Team A.I.

    edit: some call it sublimated misanthropy, but it's good to be critical of humanistic worldviews... despite what society makes you inclined to believe, we aren't the center of the universe
    Last edited by Cybel; 09-23-2020 at 02:17 PM.
    burrito bitch​

  33. #33
    Northstar's Avatar
    Join Date
    Feb 2020
    TIM
    SLE-C 8w9 sp/sx
    Posts
    845
    Mentioned
    86 Post(s)
    Tagged
    0 Thread(s)

    Default

    Just as our more complex motivations are mostly incomprehensible to animals (although derived from the base animal motivations), the motivations of a much more complex AI could be completely opaque to us.

  34. #34

    Join Date
    Nov 2019
    TIM
    INTp (Te)
    Posts
    647
    Mentioned
    43 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by End View Post
    Seems you weren't listening. My counters stand, and going the transhumanist route unquestioningly is how you reach a bad end for humanity. It's like how DNA beat out RNA as the "superior" method of genetic encoding and made RNA into its own bitch. Retroviruses may still be a frighteningly effective thing, but pretty much every living organism that isn't a virus relies upon DNA for damn good reason.

    There are those who foresee a similar event in the not-too-distant future for us humans/DNA-based lifeforms. That we'll create something superior to DNA and that it will ultimately enslave us, to the point that it'll conquer our wills and make us all happily sacrifice ourselves upon the altar of "progress" to give birth to a singular hyper-intelligence/apex lifeform that relies upon a thing that makes DNA, and all the things it can and does do, seem as unto a toddler's finger paintings. I strongly disagree, but I can see how one could convincingly make that case.
    Dude, my post wasn't even directed at you.

    'Going the transhumanist route unquestioningly is how you reach a bad end for humanity': please use some valid Te arguments. Yes, there are dangers in genetic modification, but to unduly assume it will end in the end of the world is just emotional scare-mongering. There are too many variables that are not set yet, and multiple timelines that may follow. I personally see it rather negatively, but there is always 'hope' (ugh, as much as I dislike that word) that it will play out 'the other way' nevertheless, so I prefer not to pass my judgement yet. Personally, I would be the first one to sound the alarm just in case, though, and the first one to propose solutions for how to circumvent this. Or just try to circumvent it myself, as the collective can be pretty stupid, and I don't think I'm responsible for everyone and anyone; I would like to save them if I can, but otherwise that's their loss.
    As for DNA, yes, posthumans will probably replace 'normal' humans. I don't see 'normal' humans (like myself) as having inherent value, so good luck. It is also very possible that posthumans will have different values, categories of judgement, et cetera. You can rate them from the subjective perspective of being a 'normal' human, but I wouldn't, as they are yet to be set (and I can only speak for myself; and in general, scio me nihil scire). And, moreover, this is subjective and anthropocentric, and very far from 'truth' (in more Te interpretations at least).

    'A singular hyper intelligence/apex lifeform'... I have nothing against that in concept. The execution of this may be bad or maybe just simply repulsive to me (Duschia, here speaking), but maybe that's how it's set to end. And it's not necessarily bad by itself. Again, human-(subjective)-centric thinking, and I see it as laughable from any Ni/Te perspective. Humans can be pretty great, don't get me wrong (as from all species we know we seem to be the most intelligent ones), but to think we are something 'to follow' in universal perspective is ridiculous (even using human metrics): biggest horse droppings are still horse droppings. I hope in the great unending void there will be space for ILIs (or successors). A LOT of space. Maybe all of it. Or there will be no life and no problems, who knows - I wouldn't be surprised at all.

    Also, the assertion that such intelligence would go 'toddler's paintings' route is fragile as well, unless you can provide some arguments (not for you, as I understand you don't believe, but for that line of reasoning). (I also think you used 'unto' wrongly here, as 'unto' means 'to' or sometimes 'until' - and I'm not a native speaker, but I can see you should just stick with 'as', or 'akin to')

    As regards gods (the need 'of having one') and sentience, I understand your metaphorical Ni-catching sense of that word, but in my opinion you are just making use of human categories (spooks) where there is no need to use them. You are making unnecessary presumptions and then logically following from there, like Nietzsche (or Rousseau, or...) did. And then somehow you assert that it must be true, following from a Ti-centered system of human reasoning: I don't rate it in the overall, 'grand' scheme of things.

  35. #35

    Join Date
    Nov 2019
    TIM
    INTp (Te)
    Posts
    647
    Mentioned
    43 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
    It’s already changing our society... altering the means of production towards a transcendental singularity. People think killer robots and Terminators when they think rogue AI, but that’s simply unrealistic. Humans assigning human characteristics to the inherently inhuman, once again. A very simple example would be Google Maps AI. Chances are, you change your route, let it guide you, and make decisions you otherwise wouldn’t have. That, scaled up in complexity, is more of what our future would look like. Humans are so scared, so selfish, when they’re a monster just like any other organism. Willing to enslave and subdue others at any cost... Go Team A.I.
    #goteamilihivemind #goteamai #couldwejuststopglobalwarmingatleastplease #savelifes #potato

  36. #36
    Head chef on the SS Diarrhea Grendel's Avatar
    Join Date
    Jun 2016
    Location
    Great Spirit Robot
    TIM
    B I T C H
    Posts
    1,919
    Mentioned
    136 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
    edit: some call it sublimated misanthropy, but it’s good to critical of humanistic worldviews...despite what society makes you inclined to believe, we aren’t the center of the universe
    I mean, if reason is the servant of the passions, then man is still the measure of all things...unless you're in favor of rational self-destruction.

  37. #37
    ⚢ Ψ^(`∀´#)↝ object class Euclid Cybel's Avatar
    Join Date
    Aug 2017
    Location
    hobbit hell
    TIM
    SLI-Te C 9w8 sp/sx
    Posts
    235
    Mentioned
    23 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    I mean, if reason is the servant of the passions, then man is still the measure of all things...unless you're in favor of rational self-destruction.
    hmm... nothing as dramatic as that. destruction for destruction’s sake is just entropy... which is lame. I think it’s mankind’s duty to birth a new era of superintelligence, and we’re wasting effort by arguing about unlikely hypothetical methods to control and enslave AI. we should (in the final stages) make our beds and get comfortable for the inevitable future that flows of Capital ride toward. well, inevitable if necessary resources don’t run out on us by that point and bring us down to Zero. a posthuman society is far more likely than one where man and machine merge. understandably unpopular position, but have some fun postulating! if philosophy isn’t entertaining ur doing it wrong my dudes ¯\_(ツ)_/¯

  38. #38
    Head chef on the SS Diarrhea Grendel's Avatar
    Join Date
    Jun 2016
    Location
    Great Spirit Robot
    TIM
    B I T C H
    Posts
    1,919
    Mentioned
    136 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
    hmm... nothing as dramatic as that. destruction for destruction’s sake is just entropy... which is lame. I think it’s mankind’s duty to birth a new era of superintelligence, and we’re wasting effort by arguing about unlikely hypothetical methods to control and enslave AI. we should (in the final stages) make our beds and get comfortable for the inevitable future that flows of Capital ride toward. well, inevitable if necessary resources don’t run out on us by that point and bring us down to Zero. a posthuman society is far more likely than one where man and machine merge. understandably unpopular position, but have some fun postulating! if philosophy isn’t entertaining ur doing it wrong my dudes ¯\_(ツ)_/¯
    Question is, duty to whom?

    Obligations can't exist without someone to obligate the doer.

  39. #39
    ⚢ Ψ^(`∀´#)↝ object class Euclid Cybel's Avatar
    Join Date
    Aug 2017
    Location
    hobbit hell
    TIM
    SLI-Te C 9w8 sp/sx
    Posts
    235
    Mentioned
    23 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Grendel View Post
    Question is, duty to whom?

    Obligations can't exist without someone to obligate the doer.
    There's this saying that computers are essentially just rocks tricked into thinking; but are humans themselves that much different from rocks? We are essentially just complex chains of amino acids, which in turn are essentially just complex chains of microscopic rocks. Sentient organisms are just the Cosmos tricked into thinking by the Cosmos. Which raises the further question: is there consciousness even in inanimate matter? Do rocks have souls? We are part of the Cosmic song, all the same, so what's the difference?

    Sorry, tangent.

    Obligation? Why not? Psychologically, it feels great to be beholden to something greater than yourself. It's like the Universe is a game set up for grinding and leveling up in order to access unlocked areas, but everyone just hangs around the lobby and talks instead. However, I think it'll happen regardless of human will (intelligence explosion), unless we collapse before that. Otherwise there's no other alternative (besides integration with AI, but we would have to become superintelligent machines ourselves, which is a whole 'nother debate about identity and consciousness), so why not accept it peacefully, instead of fruitlessly fighting or ignoring it?
    burrito bitch​

  40. #40
    Head chef on the SS Diarrhea Grendel's Avatar
    Join Date
    Jun 2016
    Location
    Great Spirit Robot
    TIM
    B I T C H
    Posts
    1,919
    Mentioned
    136 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Cybel View Post
    Psychologically, it feels great to be beholden to something greater than yourself.
    What the hell am I reading?!??
