
Thread: Do you believe socionics is as valid as astrology?

  1. #161
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Er, what? They're all based on scientific theories, which none are actually based on any observations.

    You are free to not believe in any theories that are not based on observations, but you will find that none actually exist.
    In your first statement you said no scientific theories are based on any observations. (Which is, holy shit. But ignoring that..)

    In your next statement you’re telling me to not believe in theories based on observations.

    Which one? It’s like you’re always drunk.

  2. #162
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    @Raver I get your point. It’s a pathetic waste of time to act like a freedom fighter against this kind of thing.

    The only case in which it makes sense is Dingu’s, because he likes to practice his English and needs to hone his logic skills.

  3. #163
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    In your first statement you said no scientific theories are based on any observations. (Which is, holy shit. But ignoring that..)

    In your next statement you’re telling me to not believe in theories based on observations.

    Which one? It’s like you’re always drunk.
    Either this is an issue of miscommunication, or you're not understanding what I'm talking about. Probably your Socionics assumptions exacerbate this issue.

    What I mean is that theories are not "derived" from observations; they're the explanations of observations. Because what would happen if you just "copy" a current observation? Well, think about it.

    So what are those theories based on, then? Previous theories, or other theories. And where do explanations come from? From our human creativity and imagination. As Richard Feynman said, they're "guessed". I know this is hard to "buy", but it's true.

    I said you're free to not believe in theories NOT based on observations. Double negative. But "deriving" from observations is not how theories are actually made. That's the entire problem with Logical Positivism and Induction.

  4. #164
    Farewell, comrades Not A Communist Shill's Avatar
    Join Date
    Nov 2005
    Location
    Beijing
    TIM
    TMI
    Posts
    19,136
    Mentioned
    506 Post(s)
    Tagged
    4 Thread(s)

    Default

    conjecture: (formal) A statement or an idea which is unproven, but is thought to be true; a guess.

    hypothesis: (sciences) Used loosely, a tentative conjecture explaining an observation, phenomenon or scientific problem that can be tested by further observation, investigation and/or experimentation.

    theory: (sciences) A coherent statement or set of ideas that explains observed facts or phenomena and correctly predicts new facts or phenomena not previously observed, or which sets out the laws and principles of something known or observed; a hypothesis confirmed by observation, experiment etc. [from 17th c.]

  5. #165
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    I said you're free to not believe in theories NOT based on observations. Double negative. But "deriving" from observations is not how theories are actually made. That's the entire problem with Logical Positivism and Induction.
    The way you say this points towards your closed circular view on the world which I’m trying to highlight.

    In the final part of your second statement, you basically told me that “theories not based on observation”, the very thing you’re advocating exists and should be believed in, DON’T EXIST:

    Quote Originally Posted by Singu View Post
    You are free to not believe in any theories that are not based on observations, but you will find that none actually exist.
    Bitch I caught you. Now you can look extra assholey and stupid since you chose to pin it on my miscommunication too, even after I gave you a chance to redeem yourself.

    My obvious point throughout this, which you’re choosing to ignore for no apparent reason, is that we live in the world, and the human thoughts and theories that result from it reflect and represent the human experience of nature/the universe. It doesn’t matter if you come up with something that is based off of non-immediate things, because the inspiration for it comes from this same source.

    Maybe if you incorporated a bit more logical positivism your grammar would improve too. Two “not”s don’t convert the subject of a sentence into its opposite in that case lmao.

  6. #166
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Sigh... Well I will admit that that sentence might've been confusing.

    "You are free to not believe in any theories that are not based on observations" = You are free to believe in theories that are "derived" from observation.

    "but you will find that none actually exist." = But since "deriving" theories from observation is actually impossible in practice, it doesn't actually exist.

    --

    If theories are explanations, then where do you think explanations come from? They literally come from nothing other than human imagination (note that I'm saying they're the explanations OF observations, not that we don't ever need any observations). We don't yet know exactly how this works, but that's just what it is. If we did, then we'd have "real" AI and humanlike robots by now. Because that is precisely what makes us human: the ability to come up with explanations.

  7. #167
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    So what you’re trying to say is, you’re not actually human?

    What is an explanation to you @Singu ?

  8. #168
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    An explanation is basically an answer to the questions "how?" and "why?". How does it work? What does it do? What is actually there?

  9. #169
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    An explanation is basically an answer to the questions "how?" and "why?". How does it work? What does it do? What is actually there?
    AI can identify and trace this kind of information. As in, they can be programmed to spontaneously create it sometimes.

  10. #170
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    And has anything actually ever been improved?
    Quote Originally Posted by Karatos View Post
    So you don't really have any definite proof of improvement.
    Just because something has been shown to happen in the past doesn’t mean it will happen again in the future.

    That’s from Dingularity’s laws of the universe.

    Just because people competent enough to solve problems haven’t shown up in the past doesn’t mean they won’t show up in the future (or that the same people now won’t eventually figure it out). Source: human civilization. Including the institution of science.

    Believing the same thing forever in spite of reasonable evidence towards the contrary is also one colloquial definition of insanity.

    Other people believing in your guys’ ability to improve is also the reason they stick around and give you the benefit of the doubt and the time of day, in spite of your extended periods of dumb shitty behaviour. Since most people are reasonable and not insane, they won’t hold onto this hope forever though.

  11. #171
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    AI can identify and trace this kind of information. As in, they can be programmed to spontaneously create it sometimes.
    Well, that's kind of the entire problem. You think that gathering more and more data, and going through them, would somehow "spontaneously create" something new. Except that it kinda doesn't; all you end up with is just more and more data. It's basically "Inductivism", which is what most current AI research is based on.

    So in the same way, you think that by gathering more and more observations, you would suddenly somehow "spontaneously" come up with a new theory. Except that you need an explanation to somehow link those observations or ideas together in a creative way, to create something new, a new theory. Just as you would come up with a new piece of art. And I think this can be done by coming up with and answering the questions "why?" and "how?".
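
    To make that concrete, here is a toy sketch in Python (purely an illustration of the point above; the pendulum example and every number in it are mine, not anything from this thread). A curve fitted to observations reproduces those observations, but away from them it falls apart, because the data never contained the explanatory law:

    Code:
# Toy illustration: curve fitting ("induction") vs. an explanatory law.
# We sample a pendulum's period T = 2*pi*sqrt(L/g) at a few lengths,
# fit a polynomial to those samples, and then extrapolate both.
import numpy as np

g = 9.81
lengths = np.linspace(0.1, 1.0, 10)          # observed pendulum lengths (m)
periods = 2 * np.pi * np.sqrt(lengths / g)   # observed periods (s)

coeffs = np.polyfit(lengths, periods, 3)     # the "inductive" model: a cubic fit

L_new = 10.0                                 # far outside the observed range
print(f"cubic fit:       {np.polyval(coeffs, L_new):.2f} s")
print(f"explanatory law: {2 * np.pi * np.sqrt(L_new / g):.2f} s")
# The fit agrees with the data it was built from, but far from that range it
# diverges badly; sqrt(L/g) was never "in" the observations themselves.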

  12. #172
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Well, that's kind of the entire problem. You think that gathering more and more data, and going through them, would somehow "spontaneously create" something new. Except that it kinda doesn't; all you end up with is just more and more data. It's basically "Inductivism", which is what most current AI research is based on.

    So in the same way, you think that by gathering more and more observations, you would suddenly somehow "spontaneously" come up with a new theory. Except that you need an explanation to somehow link those observations or ideas together in a creative way, to create something new, a new theory. Just as you would come up with a new piece of art. And I think this can be done by coming up with and answering the questions "why?" and "how?".
    Sorry I don’t know why you’re rambling at me now.

    > You said that the ability to come up with explanations makes us human and not AI. Post #166.

    > I asked you what an explanation is. Post #167.

    > The answer you provided me included things that AI can do. Post #168/9. So you need to acknowledge that and not disregard my comments, and you need to come up with another definition.


    This answer you’re giving now in this quote is not how you respond to people in this situation. Not only is it rude, but your communication is unclear and convoluted. Even if you have useful things to say, nobody would want to read them at this point. Reframe your response to me, otherwise I’m not going to be able to read it.

  13. #173
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    This answer you’re giving now in this quote is not how you respond to people in this situation. Not only is it rude, but your communication is unclear and convoluted. Even if you have useful things to say, nobody would want to read them at this point. Reframe your response to me, otherwise I’m not going to be able to read it.
    ...That's because you were talking out of your ass. You said, "AI can already do this". And I basically answered, "No, it can't" and explained why. But you just didn't know what the hell I was talking about, because you don't understand what the problem is! Or even what the AI was supposed to be doing: How were they coming up with explanations?

  14. #174
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    ...That's because you were talking out of your ass. You said, "AI can already do this". And I basically answered, "No, it can't" and explained why. But you just didn't know what the hell I was talking about, because you don't understand what the problem is! Or even what the AI was supposed to be doing: How were they coming up with explanations?
    https://www.sciencemag.org/news/2018...ai-learn-child

    Another sordid reminder of our differences and why it can apparently never be.

    I work for a Japanese tech research company in the largest metropolis mankind has ever known, and you live in a hole.

  15. #175
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    ...That basically says exactly what I was saying:

    Researchers in machine learning argue that computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules. These experts "have a blind spot, in my opinion," Marcus says. "It's a sociological thing, a form of physics envy, where people think that simpler is better." He says computer scientists are ignoring decades of work in the cognitive sciences and developmental psychology showing that humans have innate abilities—programmed instincts that appear at birth or in early childhood—that help us think abstractly and flexibly, like Chloe. He believes AI researchers ought to include such instincts in their programs.

    But in the longer term, computer scientists expect AIs to take on much tougher tasks that require flexibility and common sense. They want to create chatbots that explain the news, autonomous taxis that can handle chaotic city traffic, and robots that nurse the elderly. "If we want to build robots that can actually interact in the full human world like C-3PO," Tenenbaum says, "we're going to need to solve all of these problems in much more general settings."

    Some computer scientists are already trying. In February, MIT launched Intelligence Quest, a research initiative now raising hundreds of millions of dollars to understand human intelligence in engineering terms. Such efforts, researchers hope, will result in AIs that sit somewhere between pure machine learning and pure instinct.
    It says that's what they're hoping to do, not that they have actually achieved it.

    If an AI can learn like a baby, then it essentially is a baby.

  16. #176
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    ...That basically says exactly what I was saying:



    It says that's what they're hoping to do, not that they have actually achieved it.
    Read the rest of the article lol.

    Google Deep Learning, fluid intelligence and AI. It’s essentially “artistic” thinking in the way you’re describing. If that doesn’t fulfill what you had in mind then you’ll have to come up with a new definition.

  17. #177
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    Read the rest of the article lol.

    Google Deep Learning, fluid intelligence and AI. It’s essentially “artistic” thinking in the way you’re describing. If that doesn’t fulfill what you had in mind then you’ll have to come up with a new definition.
    Ok, here's literally the rest of the article:

    Part of the quest will be to discover what babies know and when—lessons that can then be applied to machines. That will take time, says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. AI2 recently announced a $125 million effort to develop and test common sense in AI. "We would love to build on the representational structure innate in the human brain," Etzioni says, "but we don't understand how the brain processes language, reasoning, and knowledge."
    Yeah, very illuminating. Yet again, basically what I was saying.

  18. #178
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default



    Better update your coding @Singu or you’ll be getting outsourced soon.

    That’s a very, very long, very interesting and recent article btw for those who haven’t heard about this yet, which Singu only quoted a tiny fraction of in a failed attempt to support his non-arguments.

    Also in doing this you’re preventing people from having access to new accurate information, which is actually sort of malicious.

  19. #179
    Karatos
    Join Date
    Apr 2017
    TIM
    ILI - C
    Posts
    1,810
    Mentioned
    114 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    Just because something has been shown to happen in the past doesn’t mean it will happen again in the future.

    That’s from Dingularity’s laws of the universe.

    Just because people competent enough to solve problems haven’t shown up in the past doesn’t mean they won’t show up in the future (or that the same people now won’t eventually figure it out). Source: human civilization. Including the institution of science.

    Believing the same thing forever in spite of reasonable evidence towards the contrary is also one colloquial definition of insanity.

    Other people believing in your guys’ ability to improve is also the reason they stick around and give you the benefit of the doubt and the time of day, in spite of your extended periods of dumb shitty behaviour. Since most people are reasonable and not insane, they won’t hold onto this hope forever though.
    I mean, it's not hard to understand. The universe is governed by physical laws, and if we create theories that precisely reflect those physical laws, those theories have predictive worth. For example, the chemical equation (CH3COOH+NaHCO3=CH3COONa+CO2+H2O) implies that if we mix pure vinegar with pure baking soda in a controlled environment, then a chemical reaction will occur and create CO2. The chemical equation has predictive worth because no matter which sample of pure vinegar we use or sample of pure baking soda we use, so long as the conditions are similar, the combination will yield a chemical reaction with the same essential product (CO2).

    In contrast, Model A isn't a theory with predictive worth because it doesn't precisely reflect physical laws. For instance, if we set up an experiment to test Socionics, someone typed as an SLE isn't going to reliably respond to the same stimulus the same way twice because human beings have more moving, changing parts than what Model A implies. Our complexities are impermanent. Additionally, no 2 people typed as SLE are going to reliably respond the same way because even more complexity is involved at that point, meaning that Model A isn't universal, meaning that it doesn't precisely reflect physical mechanisms. If the DCNH theory has any validity at all, then it only affirms the notion that more variability exists among people typed the same way than what Model A implies, meaning that Model A has serious shortcomings.
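
    As a rough sketch of what "predictive worth" cashes out to here (the quantities and function name are mine, chosen only for illustration), the 1:1 stoichiometry lets you compute the expected CO2 yield before ever running the experiment, and any pure samples under similar conditions should come out close to it:

    Code:
# Sketch of the prediction implied by CH3COOH + NaHCO3 -> CH3COONa + H2O + CO2.
# Approximate molar masses in g/mol.
M_ACETIC_ACID = 60.05
M_BICARBONATE = 84.01
M_CO2 = 44.01

def predicted_co2_grams(acid_g: float, bicarb_g: float) -> float:
    """Predict grams of CO2 from pure acetic acid and sodium bicarbonate."""
    moles_acid = acid_g / M_ACETIC_ACID
    moles_bicarb = bicarb_g / M_BICARBONATE
    limiting = min(moles_acid, moles_bicarb)   # 1:1 reaction, limiting reagent decides
    return limiting * M_CO2

print(f"{predicted_co2_grams(6.0, 8.4):.2f} g of CO2")   # roughly 4.4 g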

  20. #180
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Karatos View Post
    I mean, it's not hard to understand. The universe is governed by physical laws, and if we create theories that precisely reflect those physical laws, those theories have predictive worth. For example, the chemical equation (CH3COOH+NaHCO3=CH3COONa+CO2+H2O) implies that if we mix pure vinegar with pure baking soda in a controlled environment, then a chemical reaction will occur and create CO2. The chemical equation has predictive worth because no matter which sample of pure vinegar we use or sample of pure baking soda we use, so long as the conditions are similar, the combination will yield a chemical reaction with the same essential product (CO2).

    In contrast, Model A isn't a theory with predictive worth because it doesn't precisely reflect physical laws. For instance, if we set up an experiment to test Socionics, someone typed as an SLE isn't going to reliably respond to the same stimulus the same way twice because human beings have more moving, changing parts than what Model A implies. Our complexities are impermanent. Additionally, no 2 people typed as SLE are going to reliably respond the same way because even more complexity is involved at that point. If the DCNH theory has any validity at all, then it only affirms the notion that more variability exists among people typed the same way than what Model A implies, meaning that Model A has serious shortcomings.
    Your summary here assumes that’s the only way to test if the theory is true.

    The behavioural psych and social sciences don’t have such strict requirements either. There just needs to be statistical significance in correlations.
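
    For what that kind of test looks like in practice, here is a minimal sketch (the scores below are fabricated purely for illustration, not real typing data): correlate a hypothetical typing score against a hypothetical questionnaire score and check the p-value:

    Code:
# Minimal sketch of a correlation significance check on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
typing_score = rng.normal(size=200)                       # hypothetical "EII-ness" score
trait_score = 0.3 * typing_score + rng.normal(size=200)   # hypothetical questionnaire score

r, p = stats.pearsonr(typing_score, trait_score)
print(f"r = {r:.2f}, p = {p:.4f}")
# By the usual convention, p < 0.05 counts as a statistically significant
# correlation; whether that validates the underlying model is a separate question.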

  21. #181
    Karatos
    Join Date
    Apr 2017
    TIM
    ILI - C
    Posts
    1,810
    Mentioned
    114 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    Your summary here assumes that’s the only way to test if the theory is true.

    The behavioural psych and social sciences don’t have such strict requirements either. There just needs to be statistical significance in correlations.
    As far as Model A is concerned, we don't know if statistical significance is really significant because the Psychology-Biology gap prevents us from accounting for underlying causes.

    In contrast, behavioral psych and social science experiments frequently attempt to bypass the Psychology-Biology gap by accounting for physical factors such as genetics and upbringing, so they at least service explanations we can consider scientific.

  22. #182
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Karatos View Post
    As far as Model A is concerned, we don't know if statistical significance is really significant because the Psychology-Biology gap prevents us from accounting for underlying causes.

    In contrast, behavioral psych and social science experiments frequently attempt to bypass the Psychology-Biology gap by accounting for physical factors such as genetics and upbringing, so they at least service explanations we can consider scientific.
    There is no reason we can’t test Socionics the same way a behavioural or at least a social science investigation would be tested, though. This is what is done in personality psychology, which is a thing.

  23. #183
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    I’m going to be spending Christmas with my boyfriend in the countryside outside of Tokyo from 4pm JST tomorrow, and my internet connection might be spotty even with my 4G router, not to mention I need to pay attention to him. So I need to leave you ******s alone for a couple days. Don’t hurt yourselves too much in my absence.

  24. #184
    Luminous Lynx Memento Mori's Avatar
    Join Date
    Jul 2018
    TIM
    D-ESI-Se 1w2
    Posts
    307
    Mentioned
    67 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    I’m going to be spending Christmas with my boyfriend in the countryside outside of Tokyo from 4pm JST tomorrow, and my internet connection might be spotty even with my 4G router, not to mention I need to pay attention to him. So I need to leave you ******s alone for a couple days. Don’t hurt yourselves too much in my absence.
    Merry Christmas, Sb
    "We live in an age in which there is no heroic death."


    Model A: ESI-Se -
    DCNH: Dominant

    Enneagram: 1w2, 2w1, 6w7
    Instinctual Variant: Sx/So


  25. #185
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Luminous Lynx View Post
    Merry Christmas, Sb

  26. #186
    Karatos
    Join Date
    Apr 2017
    TIM
    ILI - C
    Posts
    1,810
    Mentioned
    114 Post(s)
    Tagged
    0 Thread(s)

    Default




  27. #187
    Seed my wickedness The Reality Denialist's Avatar
    Join Date
    Feb 2015
    Location
    Spontaneous Human Combustion
    TIM
    EIE-C-Ni ™
    Posts
    8,267
    Mentioned
    340 Post(s)
    Tagged
    2 Thread(s)

    Default

    Somehow I'm itching to bring my "rhombic triacontahedron" and "astrocomical signs" projects back to the drawing table.
    MOTTO: NEVER TRUST IN REALITY
    Winning is for losers

     

    Sincerely yours,
    idiosyncratic type
    Life is a joke but do you have a life?

    Join if you dare https://matrix.to/#/#The16Types:matrix.org

  28. #188
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    Well this "AI" sort of works in the same way as DNA has knowledge: through blind and unguided trial-and-error. The DNA doesn't "know" or understand the laws of physics; it's just that whatever organisms didn't act according to the laws of physics died, and their DNA died with them. And the organisms with the correct information about the laws of physics survived. In the same way, the AI doesn't "know" anything, let alone understand or explain the laws of physics.

    When this AI creates something that "works", it's because we already know the correct laws of physics that are required in order for something to "work". So this AI can't tell us anything more than what we already know about the laws of physics, which we must program into the virtual environment for the AI. The AI can't work outside of the laws of physics that we give to it. The AI might, through brute-force, blind trial-and-error, come up with more efficient ways of doing something which we hadn't considered before. But it cannot tell us any new laws of physics, as it can't explain "what's there", in the same way that human beings can with our explanatory knowledge. It won't become "self-aware" and start figuring out that it is trapped inside of some sort of a computer program.

    Quote Originally Posted by sbbds View Post
    Also in doing this you’re preventing people from having access to new accurate information, which is actually sort of malicious.
    What do you mean?

  29. #189
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    @Singu

    “What do you mean?”

    I explained myself in the same post. You uncleverly tried to misrepresent the link I posted by saying “here’s literally the rest of the article”, quoting only a tiny fraction of the extremely long article which has lots and lots of varied information. It makes you appear as if you’re more focused on looking right than you are on trying to inform, share and build on knowledge. It’s anti-informational, and shady as fuck.

    Also if you were writing an academic paper, you saying that would constitute a form of plagiarism.

  30. #190
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Well you're right, I didn't realize that the article actually had more stuff down below. But it doesn't really matter, because it didn't really contain any relevant information.

  31. #191
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Well you're right, I didn't realize that the article actually had more stuff down below. But it doesn't really matter, because it didn't really contain any relevant information.
    Right. Please don’t ever enter academia, because your stupidity might get you kicked out unintentionally if that’s the case LMAO.

  32. #192
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Disregarding all of the other wrong information in your post..

    Quote Originally Posted by Singu View Post
    But it cannot tell us any new laws of physics, as it can't explain "what's there"

    Seeing as only a small minority of humans can actually do this, this is not a useful or valid way to define humanity.

    It seems far more plausible that supercomputers will be able to do this sooner than the average human can, at this rate.

  33. #193
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Also, highlighting this for posterity:

    Quote Originally Posted by Singu View Post
    If an AI can learn like a baby, then it essentially is a baby.

  34. #194
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    It seems far more plausible that supercomputers will be able to do this sooner than the average human can, at this rate.
    Uh... and how should the supercomputer come up with new laws of physics, and write scientific papers on them?

  35. #195
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Uh... and how should the supercomputer come up with new laws of physics, and write scientific papers on them?
    First I would program it with how to properly parse grammar, how to critically analyze and disseminate information, and how not to plagiarize, unlike you.

  36. #196
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    First I would program it with how to properly parse grammar, how to critically analyze and disseminate information, and how not to plagiarize, unlike you.
    Ok, so you basically have no idea.

  37. #197
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by Singu View Post
    Ok, so you basically have no idea.
    That was an idea for the second half of what you asked, and part of the first.

    According to you this is what defines humanity. Do you have any idea? Or are you just a shitter version of a robot?

    I think I would do the same thing current AI developments have been doing such as the video example I posted: Allow it to sense and process information creatively. Program it to build on data, yet take out rules.

  38. #198
    Singu
    Join Date
    May 2009
    Location
    Earth
    Posts
    3,605
    Mentioned
    264 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by sbbds View Post
    That was an idea for the second half of what you asked, and part of the first.

    According to you this is what defines humanity. Do you have any idea? Or are you just a shitter version of a robot?

    I think I would do the same thing current AI developments have been doing such as the video example I posted: Allow it to sense and process information creatively. Program it to build on data, yet take out rules.
    My whole point was that we still don't know how people come up with explanations for things, and that is why we don't have "real" AI yet. Once we figure that out somehow, then we can start programming a "real" AI immediately.

    I already told you that the AI in the video isn't actually doing anything creative; it's just blind and unguided trial-and-error, in the same way that DNA has evolved over time through blind and unguided trial-and-error. Neither the AI nor the DNA "knows" or understands how anything works.
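
    That kind of blind, unguided trial-and-error fits in a few lines. This toy (mine, and nothing to do with the DeepMind system in the video or article) mutates a random string and keeps any change that scores no worse; it reaches the target without "understanding" anything, and only because the selection test already encodes the answer:

    Code:
# Toy "blind trial-and-error": random mutation plus keep-if-not-worse.
import random

TARGET = "laws of physics"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate: str) -> int:
    # The scoring function already "knows" the answer, which is the whole point:
    # the search is blind, the selection criterion does all the work.
    return sum(a == b for a, b in zip(candidate, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(current) < len(TARGET):
    i = random.randrange(len(TARGET))
    mutated = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if score(mutated) >= score(current):
        current = mutated
print(current)   # eventually prints "laws of physics"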

  39. #199
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Some “irrelevant” information from my link, @Singu :

    Computer scientists at DeepMind in London have developed what they call interaction networks. They incorporate an assumption about the physical world: that discrete objects exist and have distinctive interactions. Just as infants quickly parse the world into interacting entities, those systems readily learn objects' properties and relationships. Their results suggest that interaction networks can predict the behavior of falling strings and balls bouncing in a box far more accurately than a generic neural network.
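
    For a sense of the structure being described, here is a hand-wavy sketch (the shapes, names and update rules are mine; in the actual DeepMind work the relation and update functions are learned neural networks, not hard-coded formulas): compute a pairwise effect for every ordered pair of objects, sum the effects arriving at each object, then update each object from its own state plus that sum:

    Code:
# Hand-wavy sketch of the interaction-network idea (placeholder functions only).
import numpy as np

def relation(sender: np.ndarray, receiver: np.ndarray) -> np.ndarray:
    # Placeholder pairwise effect; learned in the real model.
    return np.tanh(sender - receiver)

def update(obj: np.ndarray, total_effect: np.ndarray) -> np.ndarray:
    # Placeholder per-object update; also learned in the real model.
    return obj + 0.1 * total_effect

def interaction_step(objects: np.ndarray) -> np.ndarray:
    effects = np.zeros_like(objects)
    for i in range(len(objects)):
        for j in range(len(objects)):
            if i != j:
                effects[i] += relation(objects[j], objects[i])
    return np.array([update(o, e) for o, e in zip(objects, effects)])

# Three "objects" with made-up 4-dimensional states (say, position + velocity).
states = np.random.default_rng(1).normal(size=(3, 4))
print(interaction_step(states))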

  40. #200
    f.k.a Oprah sbbds's Avatar
    Join Date
    Sep 2018
    TIM
    EII typed by Gulenko
    Posts
    4,671
    Mentioned
    339 Post(s)
    Tagged
    0 Thread(s)

    Default

    Another one:

    Geoffrey Hinton, a pioneer of deep learning at the University of Toronto in Canada, agrees. "Most of the people who believe in strong innate knowledge have an unfounded belief that it's hard to learn billions of parameters from scratch," he says. "I think recent progress in deep learning has shown that it is actually surprisingly easy."

