@Raver I get your point. It’s a pathetic waste of time to act like a freedom fighter against this kind of thing.
The only case in which it makes sense is Dingu’s, because he likes to practice his English and needs to hone his logic skills.
Either this is an issue of miscommunication, or you're not understanding what I'm talking about. Your Socionics assumptions probably exacerbate the problem.
What I mean is that theories are not "derived" from observations; they are explanations of observations. What would you actually get if you just "copied" a current observation? Think about it: nothing new.
So what are those theories based on, then? Previous theories, or other theories. And where do explanations come from? From human creativity and imagination. As Richard Feynman said, they're "guessed". I know this is hard to "buy", but it's true.
I said you're free to not believe in theories NOT based on observations. Double negative. But "deriving" from observations is not how theories are actually made. That's the entire problem with Logical Positivism and Induction.
conjecture: (formal) A statement or an idea which is unproven, but is thought to be true; a guess.
hypothesis: (sciences) Used loosely, a tentative conjecture explaining an observation, phenomenon or scientific problem that can be tested by further observation, investigation and/or experimentation.
theory: (sciences) A coherent statement or set of ideas that explains observed facts or phenomena and correctly predicts new facts or phenomena not previously observed, or which sets out the laws and principles of something known or observed; a hypothesis confirmed by observation, experiment etc. [from 17th c.]
Improving your happiness and changing your personality for the better
Jungian theory is not grounded in empirical data (pdf file)
The case against type dynamics (pdf file)
Cautionary comments regarding the MBTI (pdf file)
Reinterpreting the MBTI via the five-factor model (pdf file)
Do the Big Five personality traits interact to predict life outcomes? (pdf file)
The Big Five personality test outperformed the Jungian and Enneagram test in predicting life outcomes
Evidence of correlations between human partners based on systematic reviews and meta-analyses of traits
The way you say this points towards your closed circular view on the world which I’m trying to highlight.
You basically told me that "theories not based on observation" (the very things you're advocating exist and should be believed in) DON'T EXIST, in the final part of your second statement:
Bitch I caught you. Now you can look extra assholey and stupid since you chose to pin it on my miscommunication too, even after I gave you a chance to redeem yourself.
My obvious point throughout this, which you're choosing to ignore for no apparent reason, is that we live in the world, and the human thoughts and theories that result from it reflect and represent the human experience of nature/the universe. It doesn't matter if you come up with something based on non-immediate things, because the inspiration for that comes from the same source.
Maybe if you incorporated a bit more logical positivism your grammar would improve too. Two "not"s don't convert the subject of a sentence into its opposite in that case lmao.
Sigh... Well I will admit that that sentence might've been confusing.
"You are free to not believe in any theories that are not based on observations" = You are free to believe in theories that are "derived" from observation.
"but you will find that none actually exist." = But since "deriving" theories from observation is actually impossible in practice, it doesn't actually exist.
--
If theories are explanations, then where do you think explanations come from? They come from nothing other than human imagination. (Note that I'm saying they're the explanations OF observations, not that we never need any observations.) We don't yet know exactly how this works, but that's just what it is. If we did, then we'd have "real" AI and humanlike robots by now. Because that is precisely what makes us human: the ability to come up with explanations.
So what you’re trying to say is, you’re not actually human?
What is an explanation to you @Singu ?
An explanation is basically an answer to the questions "how?" and "why?". How does it work? What does it do? What is actually there?
Just because something has shown to happen in the past, doesn’t mean it will happen again in the future.
That’s from Dingularity’s laws of the universe.
Just because people who were competent enough to solve problems haven't shown up in the past doesn't mean they won't show up in the future (or that the same people now won't eventually figure it out). Source: human civilization, including the institution of science.
Believing the same thing forever in spite of reasonable evidence towards the contrary is also one colloquial definition of insanity.
Other people also believing in your guys’ ability to improve is also the reason they stick around and give you the benefit of the doubt and time of day too, in spite of your extended periods of dumb shitty behaviour. Since most people are reasonable and not insane they won’t hold onto this hope forever though.
Well, that's kind of the entire problem. You think that gathering more and more data, and going through it, would somehow "spontaneously create" something new. Except that it doesn't; all you end up with is more and more data. That's basically "Inductivism", which is what most current AI research is based on.
So in the same way, you think that by gathering more and more observations, you would suddenly, somehow "spontaneously" come up with a new theory. Except that you need an explanation to link those observations or ideas together in a creative way and create something new, a new theory, just as you would create a new piece of art. And I think this can be done by coming up with, and answering, the questions "why?" and "how?".
Sorry I don’t know why you’re rambling at me now.
> You said that the ability to come up with explanations makes us human and not AI. Post #166.
> I asked you what an explanation is. Post #167.
> The answer you provided me included things that AI can do. Post #168/9. So you need to acknowledge that and not disregard my comments, and you need to come up with another definition.
This answer you’re giving now in this quote is not how you respond to people in this situation. Not only is it rude, but your communication is unclear and convoluted. Even if you have useful things to say, nobody would want to read them at this point. Reframe your response to me, otherwise I’m not going to be able to read it.
...That's because you were talking out of your ass. You said, "AI can already do this". And I basically answered, "No, it can't" and explained why. But you just didn't know what the hell I was talking about, because you don't understand what the problem is! Or even what the AI was supposed to be doing: How were they coming up with explanations?
https://www.sciencemag.org/news/2018...ai-learn-child
Another sordid reminder of our differences and why it can apparently never be.
I work for a Japanese tech research company in the largest metropolis mankind has ever known, and you live in a hole.
...That basically says exactly what I was saying:
It says that's what they're hoping to do, not that they have actually achieved it.

Researchers in machine learning argue that computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules. These experts "have a blind spot, in my opinion," Marcus says. "It's a sociological thing, a form of physics envy, where people think that simpler is better." He says computer scientists are ignoring decades of work in the cognitive sciences and developmental psychology showing that humans have innate abilities—programmed instincts that appear at birth or in early childhood—that help us think abstractly and flexibly, like Chloe. He believes AI researchers ought to include such instincts in their programs.
But in the longer term, computer scientists expect AIs to take on much tougher tasks that require flexibility and common sense. They want to create chatbots that explain the news, autonomous taxis that can handle chaotic city traffic, and robots that nurse the elderly. "If we want to build robots that can actually interact in the full human world like C-3PO," Tenenbaum says, "we're going to need to solve all of these problems in much more general settings."
Some computer scientists are already trying. In February, MIT launched Intelligence Quest, a research initiative now raising hundreds of millions of dollars to understand human intelligence in engineering terms. Such efforts, researchers hope, will result in AIs that sit somewhere between pure machine learning and pure instinct.
If an AI can learn like a baby, then it essentially is a baby.
Ok, here's literally the rest of the article.:
Yeah, very illuminating. Yet again, basically what I was saying.

Part of the quest will be to discover what babies know and when—lessons that can then be applied to machines. That will take time, says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. AI2 recently announced a $125 million effort to develop and test common sense in AI. "We would love to build on the representational structure innate in the human brain," Etzioni says, "but we don't understand how the brain processes language, reasoning, and knowledge."
Better update your coding @Singu or you’ll be getting outsourced soon.
That’s a very very long, very interesting and recent article btw for those who haven’t heard about this yet. Which Singu only quoted a tiny fraction of in a failed attempt to support his non-arguments.
Also in doing this you’re preventing people from having access to new accurate information, which is actually sort of malicious.
I mean, it's not hard to understand. The universe is governed by physical laws, and if we create theories that precisely reflect those physical laws, those theories have predictive worth. For example, the chemical equation (CH3COOH + NaHCO3 → CH3COONa + CO2 + H2O) implies that if we mix pure vinegar with pure baking soda in a controlled environment, then a chemical reaction will occur and create CO2. The chemical equation has predictive worth because no matter which sample of pure vinegar we use or sample of pure baking soda we use, so long as the conditions are similar, the combination will yield a chemical reaction with the same essential product (CO2).
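That "same inputs, same prediction" property can be sketched numerically. This is purely my own illustration, not part of the thread: the function name is made up, the reaction is assumed to run to completion with a 1:1 mole ratio, and the molar masses are standard textbook values.

```python
# Stoichiometry sketch for: CH3COOH + NaHCO3 -> CH3COONa + CO2 + H2O
# Molar masses in g/mol (standard values).
M_ACETIC_ACID = 60.05
M_BICARBONATE = 84.01
M_CO2 = 44.01

def co2_yield_grams(acid_g: float, bicarb_g: float) -> float:
    """Predict grams of CO2 produced, assuming the reaction goes to
    completion and is limited by the scarcer reactant (1:1 mole ratio)."""
    moles = min(acid_g / M_ACETIC_ACID, bicarb_g / M_BICARBONATE)
    return moles * M_CO2

# The same inputs always yield the same prediction, which is the
# "predictive worth" the equation carries.
print(co2_yield_grams(60.05, 84.01))  # one mole of each reactant
```

The point of the sketch: the equation lets us say in advance how much CO2 any similar sample will produce, which is exactly what a typology model like Model A cannot do.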
In contrast, Model A isn't a theory with predictive worth because it doesn't precisely reflect physical laws. For instance, if we set up an experiment to test Socionics, someone typed as an SLE isn't going to reliably respond to the same stimulus the same way twice because human beings have more moving, changing parts than what Model A implies. Our complexities are impermanent. Additionally, no 2 people typed as SLE are going to reliably respond the same way because even more complexity is involved at that point, meaning that Model A isn't universal, meaning that it doesn't precisely reflect physical mechanisms. If the DCNH theory has any validity at all, then it only affirms the notion that more variability exists among people typed the same way than what Model A implies, meaning that Model A has serious shortcomings.
As far as Model A is concerned, we don't know if statistical significance is really significant because the Psychology-Biology gap prevents us from accounting for underlying causes.
In contrast, behavioral psych and social science experiments frequently attempt to bypass the Psychology-Biology gap by accounting for physical factors such as genetics and upbringing, so they at least support explanations we can consider scientific.
I’m going to be spending Christmas with my boyfriend in the countryside outside of Tokyo from 4pm JST tomorrow, and my internet connection might be spotty even with my 4G router, not to mention I need to pay attention to him. So I need to leave you ******s alone for a couple days. Don’t hurt yourselves too much in my absence.
Somehow I'm itching to bring my "rhombic triacontahedron" and "astrocomical signs" projects back to the drawing table.
Well, this "AI" sort of works in the same way that DNA has knowledge: through blind and unguided trial-and-error. The DNA doesn't "know" or understand the laws of physics; it's just that whatever organisms didn't act according to the laws of physics died, and their DNA died with them. And the organisms with the correct information about the laws of physics survived. In the same way, the AI doesn't "know" anything, let alone understand or explain the laws of physics.
When this AI creates something that "works", it's because we already know the correct laws of physics required for it to "work". So this AI can't tell us anything more than what we already know about the laws of physics, which we must program into its virtual environment. The AI can't work outside of the laws of physics we give it. Through brute-force, blind trial-and-error, it may come up with new, more efficient ways of doing something that we hadn't considered before. But it cannot tell us any new laws of physics, as it can't explain "what's there" the way human beings can with our explanatory knowledge. It won't become "self-aware" and start figuring out that it's trapped inside some sort of computer program.
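The "blind and unguided trial-and-error" being described can be sketched as a toy evolutionary search. This is purely illustrative (every name in it is mine): the candidates never inspect or "understand" the scoring rule, which stands in for the "laws of physics" we program into the environment; unfit variants simply die off, exactly like the DNA analogy.

```python
import random

def evolve(fitness, genome_len=8, pop=30, generations=200, seed=0):
    """Blind trial-and-error: random variation plus selection.
    Candidates never inspect the fitness function; bad ones just die."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]      # the unfit "die off"
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # one blind mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# The "law of physics" here is just a scoring rule we supply from outside:
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = evolve(lambda g: sum(a == b for a, b in zip(g, target)))
print("matches:", sum(a == b for a, b in zip(best, target)), "of", len(target))
```

Note the design point this makes: the search typically recovers the target, yet nothing in `evolve` "knows" what the target means; remove the externally supplied scoring rule and the process produces nothing, which is the argument being made about the AI above.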
What do you mean?
@Singu
”What do you mean?”
I explained myself in the same post. You uncleverly tried to misrepresent the link I posted by saying “here’s literally the rest of the article”, quoting only a tiny fraction of the extremely long article which has lots and lots of varied information. It makes you appear as if you’re more focused on looking right than you are on trying to inform, share and build on knowledge. It’s anti-informational, and shady as fuck.
Also if you were writing an academic paper, you saying that would constitute a form of plagiarism.
Well you're right, I didn't realize that the article actually had more stuff down below. But it doesn't really matter, because it didn't really contain any relevant information.
Disregarding all of the other wrong information in your post..
Seeing as only a small minority of humans can actually do this, this is not a useful or valid way to define humanity.
It is far more plausible that supercomputers will be able to do this sooner than the average human can, at this rate.
That was an idea for the second half of what you asked, and part of the first.
According to you this is what defines humanity. Do you have any idea? Or are you just a shitter version of a robot?
I think I would do the same thing current AI developments have been doing such as the video example I posted: Allow it to sense and process information creatively. Program it to build on data, yet take out rules.
My whole point was that we still don't know how people come up with explanations for things, and that is why we don't have "real" AI yet. Once we figure that out somehow, then we can start programming a "real" AI immediately.
I already told you that the AI in the video isn't actually doing anything creative, it's just blind and unguided trial-and-error in the same way that the DNA has evolved over time with blind and unguided trial-and-error. Neither the AI nor the DNA "knows" or understands how anything works.
Some “irrelevant” information from my link, @Singu :
Computer scientists at DeepMind in London have developed what they call interaction networks. They incorporate an assumption about the physical world: that discrete objects exist and have distinctive interactions. Just as infants quickly parse the world into interacting entities, those systems readily learn objects' properties and relationships. Their results suggest that interaction networks can predict the behavior of falling strings and balls bouncing in a box far more accurately than a generic neural network.
Another one:
Geoffrey Hinton, a pioneer of deep learning at the University of Toronto in Canada, agrees. "Most of the people who believe in strong innate knowledge have an unfounded belief that it's hard to learn billions of parameters from scratch," he says. "I think recent progress in deep learning has shown that it is actually surprisingly easy."