
Thread: The Dimensionality of Functions

  1. #41
    mu4

    IMO, dimensionality as discussed here is something like the level of simulation/imagination/calculation an individual can achieve, and I don't think of it as something that necessarily exists in all individuals. The way it is presented fits with the different levels of math a mind (conscious or unconscious) can use to compute solutions.

    1d = experiential: basically not something we can process without prior experience, and only imaginable from memory (very limited calculation - no real computation)
    2d = social norms/normative: rigid rules communicated to us through upbringing; direct experience is not necessary, only communication of the norm (few-variable calculations - arithmetic)
    3d = situational thinking: norms can be combined and exceptions to the rule identified; this is where thinking becomes flexible enough to fit exceptional situations (many-variable simulation - time/algebra)
    4d = universal thinking, time-invariant: ideas are crystallized into universals, and new norms and rules can be created to handle existing norms and exceptions (holistic/change simulation - calculus/matrix math)

    It's possible that many individuals cannot achieve full dimensionality of thinking even with their strongest elements, and it's also possible that individuals can achieve greater dimensionality of thought with a 1d function. The thing about models and theories here is that they are macroscopic categorizations rather than causal identification. The observations are not hard limitations and can be broken, although doing so likely comes at a high cost. You can fill a river canyon or move a mountain with a spoon, but it would be an onerous undertaking.


    I think dimensionality is interesting in another way as well: not as a guess about the human mind, but as a model for the production of an artificial one.

    Nobody knows this right now, but if one were creating an AI, one might create these sorts of broad computational function levels to divide up different classes of information processing.

    AIs could probably be programmed to differentiate themselves based on information preference, and this would be one mechanism for creating that individualization; how influential this is with regard to human complexity is up for debate.

    To me this is an interesting way to look at information preference and a means to organize, in general terms, the strength of information preference within an information metabolism, whether human or artificial. As long as this idea can help create a mind that is human-like, it's probably worth studying. It's not as if our ability to make a car or locomotive is worthless just because we can't simulate 100% of a horse.

    IMO, this idea offers two things: one, the organization of a type of generalized and individualized mind; the other, a guess at the organization of the human mind.

    A lot of AI work right now is focused on simply getting AIs to work, but IMO once that part of the process is fairly solid, researchers will move on to individualizing AIs, and perhaps dimensionality (levels of function access) of information preference would be one way for AIs to individualize. A rough sketch of that idea follows below.
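    As a minimal sketch of what "dimensionality as levels of function access" could look like in code: here each agent assigns a dimensionality (1d-4d, matching the list above) to each information element, and that level gates which processing strategies are available. The names Dimensionality, InformationElement and Agent are my own illustrative inventions, not anything from an existing socionics or AI library.

```python
from enum import IntEnum
from dataclasses import dataclass

class Dimensionality(IntEnum):
    EXPERIENCE = 1   # 1d: recall of directly experienced cases only
    NORMS = 2        # 2d: apply communicated rules/norms
    SITUATION = 3    # 3d: combine norms, handle exceptions in context
    TIME = 4         # 4d: generalize across time, create new norms

@dataclass
class InformationElement:
    name: str
    dimensionality: Dimensionality

class Agent:
    """An 'individual' defined by which processing level each element gets."""

    def __init__(self, elements: dict):
        self.elements = {n: InformationElement(n, d) for n, d in elements.items()}

    def available_strategies(self, element_name: str) -> list:
        # Higher dimensionality unlocks every lower level's strategy as well.
        ladder = ["recall experience", "apply norm",
                  "adapt to situation", "generalize across time"]
        level = self.elements[element_name].dimensionality
        return ladder[:level]

# Two agents "individualized" by different dimensionality assignments.
a = Agent({"logic": Dimensionality.TIME, "ethics": Dimensionality.EXPERIENCE})
b = Agent({"logic": Dimensionality.NORMS, "ethics": Dimensionality.SITUATION})

print(a.available_strategies("ethics"))  # ['recall experience']
print(b.available_strategies("ethics"))  # ['recall experience', 'apply norm', 'adapt to situation']
```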

  2. #42

    It is like the hardware of a computer system. A very slow computer might eventually produce an even better result than a fast computer, but it would have taken longer and required a "more strenuous" effort.

  3. #43

    That's really interesting to me (I'm a programmer by profession). Have you seen Entropica?

    I agree completely about the utility for an artificial mind; birds don't fly like planes, and it's better that they don't. The artificial nature of socionics models might render them far more useful practically (for robots) than an organic mind.

    And I like the idea of socionics as a key part of AI development. I view socionics as: information exists, information is exchanged, and information is processed, which IMO can easily be translated into and used for AI development. In fact, AI development might shed some light on how people function and improve socionics (give it a more easily and practically testable aspect). A toy sketch of that framing follows below.
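    As a toy illustration of that "exists / exchanged / processed" framing: information exists as signals tagged with an element, exchange filters which signals a receiver takes in, and processing depth depends on how strong that element is for the receiver. All the class and function names here are hypothetical, just to show the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    element: str      # which information element this belongs to (illustrative labels below)
    content: object   # the information that "exists"

def exchange(sender_signals, receiver_filter):
    """Information is exchanged: the receiver only takes in elements it attends to."""
    return [s for s in sender_signals if s.element in receiver_filter]

def process(signals, strength):
    """Information is processed: stronger elements get deeper handling (a stand-in
    for dimensionality), weaker ones only a superficial pass."""
    return {s.element: ("deep analysis" if strength.get(s.element, 1) >= 3
                        else "shallow pass")
            for s in signals}

incoming = [Signal("Te", "project status"), Signal("Fi", "team morale")]
received = exchange(incoming, receiver_filter={"Te", "Fi"})
print(process(received, strength={"Te": 4, "Fi": 1}))
# {'Te': 'deep analysis', 'Fi': 'shallow pass'}
```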

    Regarding the dimensionality of functions, I find the description of the one-dimensional functions to be spot on and accurate for myself: the phobias, fears, and consequences of not being good at something, whether you want to be or not, as described here. For example, I'm a computer programmer, yet I seem to have whatever IE that entails as a one-dimensional informational element. This leaves me practically unable to do my job and under extreme stress whenever any sort of real-world competence is expected from me. Much like you state, I do deliver the goods (more or less), but it's an onerous undertaking for me, digging that mountain with a spoon. And no matter how hard I try, I can't learn or get any better at it. And I have been trying really hard and have all the motivation in the world. And yet I keep having the same problems, making the same mistakes.

