
Thread: Do you believe socionics is as valid as astrology?

  1. #201

    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    I think the whole point is not just to "learn from experience", but to actually come up with something new in the way of understanding and explaining things, as people do with scientific theories. Theories are not "derived" from any experiences.

    So we should be impressed when AIs start writing new and original scientific papers that genuinely tell us new and surprising things about the world that have never been written before.

  2. #202
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    More:

    Vicarious, a robotics software company in San Francisco, California, is taking the idea further with what it calls schema networks. Those systems, too, assume the existence of objects and interactions, but they also try to infer the causality that connects them. By learning over time, the company's software can plan backward from desired outcomes, as people do. (I want my nose to stop itching; scratching it will probably help.) The researchers compared their method with a state-of-the-art neural network on the Atari game Breakout, in which the player slides a paddle to deflect a ball and knock out bricks. Because the schema network could learn about causal relationships—such as the fact that the ball knocks out bricks on contact no matter its velocity—it didn't need extra training when the game was altered. You could move the target bricks or make the player juggle three balls, and the schema network still aced the game. The other network flailed.

    “I N F E R . T H E . C A U S A L I T Y”.
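
    For what it's worth, here is a toy sketch in Python of what "planning backward from a desired outcome" over causal rules might look like. The rules, state names, and code are invented purely for illustration; this is not Vicarious's actual schema-network implementation, just the general flavor of the idea.

    # Each rule is a tiny causal schema: taking `action` while the
    # preconditions hold causes `effect` to become true.
    RULES = [
        {"action": "move_paddle_under_ball",  "pre": {"ball_falling"},    "effect": "ball_deflected"},
        {"action": "aim_deflection_at_brick", "pre": {"ball_deflected"},  "effect": "ball_hits_brick"},
        {"action": "wait_for_contact",        "pre": {"ball_hits_brick"}, "effect": "brick_removed"},
    ]

    def plan_backward(goal, facts, rules, depth=10):
        """Work backward from a desired outcome to a sequence of actions."""
        if goal in facts:
            return []                      # already true, nothing to do
        if depth == 0:
            return None                    # give up on overly deep chains
        for rule in rules:
            if rule["effect"] != goal:
                continue
            plan = []
            for pre in rule["pre"]:        # first make each precondition true
                sub = plan_backward(pre, facts, rules, depth - 1)
                if sub is None:
                    break
                plan.extend(sub)
            else:
                return plan + [rule["action"]]
        return None

    print(plan_backward("brick_removed", {"ball_falling"}, RULES))
    # -> ['move_paddle_under_ball', 'aim_deflection_at_brick', 'wait_for_contact']

    Because the plan is read off the causal rules rather than memorized from pixels, moving the bricks around wouldn't invalidate it, which is roughly the point the article is making.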

  3. #203
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    16,419
    Mentioned
    1574 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    I think the whole point is not just to "learn from experience", but to actually come up with something new in the way of understanding and explaining things, as people do with scientific theories. Theories are not "derived" from any experiences.

    So we should be impressed when AIs start writing new and original scientific papers that genuinely tell us new and surprising things about the world that have never been written before.
    What if they come up with solutions and we can't understand them?

    https://www.technologyreview.com/s/6...e-heart-of-ai/

  4. #204

    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    What if they come up with solutions and we can't understand them?

    https://www.technologyreview.com/s/6...e-heart-of-ai/
    Well, that seems like the opposite of intelligence or creativity, since it takes more effort to simplify complex things than to make things so ridiculously complex and convoluted that nobody can understand them. That's the work of an uncreative mind.

    So if an AI can act like a human, then, just as humans do with scientific theories, it can make its ideas simpler and simpler so that just about anyone can understand them. That takes real creativity, and probably self-awareness, and/or social awareness, or just about any kind of awareness. And it implies that the AI could actually hold a conversation with us and have a genuine debate with us.

    Still, the premise of "deep learning" is the same: it's "learning from experience". It can't come up with anything new, other than doing certain things more quickly than human beings can.

  5. #205
    Adam Strange's Avatar
    Join Date
    Apr 2015
    Location
    Midwest, USA
    TIM
    ENTJ-1Te 8w7 sx/so
    Posts
    16,419
    Mentioned
    1574 Post(s)
    Tagged
    2 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Well, that seems like the opposite of intelligence or creativity, since it takes more effort to simplify complex things than to make things so ridiculously complex and convoluted that nobody can understand them. That's the work of an uncreative mind.

    So if an AI can act like a human, then, just as humans do with scientific theories, it can make its ideas simpler and simpler so that just about anyone can understand them. That takes real creativity, and probably self-awareness, and/or social awareness, or just about any kind of awareness. And it implies that the AI could actually hold a conversation with us and have a genuine debate with us.

    Still, the premise of "deep learning" is the same: it's "learning from experience". It can't come up with anything new, other than doing certain things more quickly than human beings can.
    I regularly use programs that generate truly new designs through the use of Genetic Algorithms. These programs can start with several sheets of window glass and turn them into a Canon 800mm f5.6 L IS USM lens, and they need exactly zero experience to do this. The Wynne corrector for parabolic telescope primary mirrors was "found"/"invented" by one of these programs. The man who ran the program said that the Wynne solution that the program came up with would never have been discovered by humans, because it was so far from the productive areas of search.

    Technically, these programs aren't AI, but rather use the tricks of genetic evolution to generate their results. But the output of these programs is definitely new and original. Humans can understand genetic algorithms; we just can't do them as well as machines can.
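
    For illustration only, here is a bare-bones genetic algorithm in Python in the spirit of what's described above. The "design" is just a vector of numbers, and the fitness function is a made-up stand-in for a real merit function such as ray-traced spot size in a lens-design program; nothing here comes from an actual optical-design package.

    import random

    TARGET = [3.2, -1.5, 0.8, 2.4]      # pretend these are ideal surface curvatures
    POP_SIZE, GENERATIONS = 60, 200

    def fitness(candidate):
        # Lower is better: squared error against the target "design".
        return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def random_candidate():
        return [random.uniform(-5.0, 5.0) for _ in TARGET]

    def crossover(a, b):
        # Splice two parent designs at a random cut point.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(candidate):
        # Occasionally nudge a gene; this is the only source of "new ideas".
        return [g + random.gauss(0, 0.1) if random.random() < 0.2 else g
                for g in candidate]

    population = [random_candidate() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness)
        parents = population[:POP_SIZE // 4]          # keep the fittest quarter
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    best = min(population, key=fitness)
    print("best design:", [round(g, 3) for g in best], "error:", round(fitness(best), 6))

    The program starts from random candidates and "knows" nothing about good designs in advance; it only needs a way to score each one, which is why the designs it converges on can land far outside the regions a human would have thought to search.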

    So machines can invent new things without knowing anything about existing experience. I see no reason why AI programs couldn't do this, too. The fact that the solution sets they come up with are being truncated by human experience (they are presently being "trained" to act like humans and non-human behavior is being rejected) does not mean that they are inherently limited to human-understandable solutions.

    But, because of the inherently complex manner in which AI machines process information, it seems unlikely that we humans will be able to "understand" how they reached a decision. These things are not James Watt's steam engine governors.

    *EDIT*
    And now, for a little entertainment, this: https://www.wired.com/story/future-o...-martha-wells/
    Last edited by Adam Strange; 12-25-2018 at 09:52 PM.

  6. #206
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Well, that seems like the opposite of intelligence or creativity, since it takes more effort to simplify complex things than to make things so ridiculously complex and convoluted that nobody can understand them. That's the work of an uncreative mind.
    .

  7. #207

    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Adam Strange View Post
    I regularly use programs that generate truly new designs through the use of Genetic Algorithms. These programs can start with several sheets of window glass and turn them into a Canon 800mm f5.6 L IS USM lens, and they need exactly zero experience to do this. The Wynne corrector for parabolic telescope primary mirrors was "found"/"invented" by one of these programs. The man who ran the program said that the Wynne solution that the program came up with would never have been discovered by humans, because it was so far from the productive areas of search.

    Technically, these programs aren't AI, but rather use the tricks of genetic evolution to generate their results. But the output of these programs is definitely new and original. Humans can understand genetic algorithms; we just can't do them as well as machines can.

    So machines can invent new things without knowing anything about existing experience. I see no reason why AI programs couldn't do this, too. The fact that the solution sets they come up with are being truncated by human experience (they are presently being "trained" to act like humans and non-human behavior is being rejected) does not mean that they are inherently limited to human-understandable solutions.

    But, because of the inherently complex manner in which AI machines process information, it seems unlikely that we humans will be able to "understand" how they reached a decision. These things are not James Watt's steam engine governors.

    *EDIT*
    And now, for a little entertainment, this: https://www.wired.com/story/future-o...-martha-wells/
    Well, human knowledge is created in the same way DNA has evolved to encode knowledge: through trial and error. But humans don't just blindly try out every single thing they can think of and see which one works (even though that is still the only way to create new knowledge: by guessing and seeing which guesses hold up). Indeed, most ideas are rejected before we even test them, because we can see that they have nothing to do with the very thing they're supposed to be explaining. In short, they wouldn't help us understand it any better.

    In fact, Albert Bandura makes a related point in describing his idea of "vicarious capability":

    Quote Originally Posted by Vicarious Capability
    Psychological theories have traditionally assumed that learning can occur only by performing responses and experiencing their effects. Learning through action has thus been given major, if not exclusive, priority. In actuality, virtually all learning phenomena, resulting from direct experience, can occur vicariously by observing other people's behavior and its consequences for them. The capacity to learn by observation enables people to acquire rules for generating and regulating behavioral patterns without having to form them gradually by tedious trial and error.

    The abbreviation of the acquisition process through observational learning is vital for both development and survival. Because mistakes can produce costly, or even fatal consequences, the prospects for survival would be slim indeed if one could learn only from the consequences of trial and error. For this reason, one does not teach children to swim, adolescents to drive automobiles, and novice medical students to perform surgery by having them discover the requisite behavior from the consequences of their successes and failures. The more costly and hazardous the possible mistakes, the heavier must be the reliance on observational learning from competent exemplars. The less the behavior patterns draw on inborn properties, the greater is the dependence on observational learning for the functional organization of behavior.
    Of course, humans don't just blindly copy the behaviors of others by observing them, as chimpanzees do; rather, they copy the meaning behind the behavior. It's also true that humans don't have to die when their ideas fail, the way DNA does.

    So current AI may create something new and original that we hadn't considered before, just as DNA contains much knowledge that we don't yet know, but only because it has the advantage of sheer computing power on one hand and vast stretches of time on the other. That doesn't mean the AI or the DNA will suddenly start coming up with scientific theories and writing scientific papers published in science journals. Those things are created by human brains, which actually contain more information than DNA does, which is pretty amazing to think about.

    So we can marvel at how DNA has managed to create something spectacularly complex like the human eye, and think that we could never understand it or ever come up with something like it. But we did in fact understand it, and we came up with something like it in the form of cameras. And we understand why something like that works, in terms of the laws of optics and how light behaves, whereas for DNA, the eye only gradually moved toward the shape it has now because that's what made the animal see better. The animal that couldn't see as well simply died, and the knowledge died with it. But humans can simply let bad ideas die; we ourselves don't have to die just because our brains temporarily contain bad or wrong ideas. And maybe AI has the advantage of "resetting" whenever something doesn't work. But it still has the same problem: it doesn't understand why something works.

    Another disadvantage of this lack of "understanding" is that it can't create any new knowledge without directly "experiencing" something or being affected by it. It seems doubtful, for example, that there will ever be an animal that could fly to the Sun, get burned, survive it, and come back to Earth retaining knowledge about the Sun. But humans have knowledge about the Sun without ever having been remotely near it. And that's because we can apply our understanding of things that don't necessarily have anything to do with the Sun itself to the Sun, the way we applied our understanding of nuclear physics to it. And those things have to be understood, not just blindly arrived at as a result that "works".

  8. #208
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    I love how you’re quoting something right after I quoted it for you, not taking responsibility for obviously not having read it earlier, and not giving me credit for it, @Singu.

    I’m kind of weirdly impressed/amazed tbh if you’re doing this on purpose.

    Re: your post, you are like a pair of tangled-up earphones personified. Frustrating and hopelessly convoluted, but once they’ve been untangled, on occasion we can hear beautiful music.

    *slow clap* lol
