Originally Posted by
Adam Strange
Well, I considered it. I’ve talked about this elsewhere on this forum, but I can repeat myself here.
As I understand it, Socionics is an attempt to understand human behavior in terms of isolated functions named Ti, Te, Fe, Fi, Se, Si, Ne, and Ni. This is a fairly successful approach to breaking down and modeling behavior.
One could look at a bunch of ants swarming over an area of ground, looking for food and avoiding enemies, and wonder if each ant has a model of the terrain, or if the behavior of the swarm is driven by simple rules.
Early robots were programmed to move a part from one place to another. This required that they have a map of the area in memory, and presumed that the thing that they were moving existed and could be grasped. But this model quickly ran into problems in the real world, where things are changing all the time.
Programmers were forced to take a different approach when faced with an unpredictable world. Commanding the robot to go to point (x1, y1, z1) at time t1, close the actuators, and then move to point (x2, y2, z2) in time Δt fails when there is no part at (x1, y1, z1). It is better to give the robot some means of sensing the environment: the ability to recognize a part and to search for one that has been moved, the ability to judge how relevant the parts it does find are to the task, and options for what to do when it finds no part at all.
These abilities enable a robot to function in a world where not everything is where or what it expects it to be. You can break down the functions as subroutines of a larger program in the following manner:
1. Se: Sense the world, possibly by camera (sight), collision detection (touch), sounds (hearing), etc.
2. Si: Sense internal states, like battery charge (hunger), circuits working properly or not (pain), motor drives operating normally or in overcurrent sensing (comfort), light too bright or too dim so adjust system gains, etc.
3. Te: Identify the separate parts in the sensory data. Is a “part” detected, and is it separate from the background? Name the parts. Is this part relevant to the task or not?
4. Ti: Does the identified part fit into an internally stored class of objects?
5. Fi: Assign a “value” to this part, with respect to the task at hand. Is this “part” something which should be moved or avoided? Is the part broken?
6. Fe: If there are other robots operating in the area, how should their actions be coordinated? What are the communication protocols? Which robots should take priority when both share a common assignment? How are priorities negotiated?
7. Ne: What alternate actions can be taken in any situation? If a part has been moved or cannot be found, what other actions are possible?
8. Ni: Given a list of possible alternate actions in any situation, which single action is best pursued under the present conditions?
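The eight subroutines above can be sketched as a single control step. This is only a toy illustration of the idea, not working robotics code; every function and field name here is my own invention for the sketch, and the "world" is just a list of labeled blobs.

```python
# Toy sketch: the eight functions as subroutines serving one task-achieving
# loop. All names are hypothetical; the task is to find one part worth moving.

def se_sense(world):                 # Se: external sensing (camera, touch, sound)
    return list(world)

def si_sense(robot):                 # Si: internal state (battery charge = hunger)
    return {"battery_low": robot["battery"] < 0.2}

def te_identify(percepts):           # Te: isolate and name parts vs. background
    return [p for p in percepts if p["label"] != "background"]

def ti_classify(parts, classes):     # Ti: fit parts to stored object classes
    return [p for p in parts if p["label"] in classes]

def fi_value(parts, task):           # Fi: value each part w.r.t. the task
    return [p for p in parts if p["label"] == task["wanted"] and not p["broken"]]

def fe_coordinate(parts, claimed):   # Fe: skip parts another robot has claimed
    return [p for p in parts if p["id"] not in claimed]

def ne_options(parts):               # Ne: enumerate the possible actions
    return [("grasp", p) for p in parts] or [("search", None)]

def ni_choose(options, internal):    # Ni: pick the single best action right now
    if internal["battery_low"]:
        return ("recharge", None)
    return options[0]

def control_step(world, robot, task, classes, claimed):
    internal = si_sense(robot)
    parts = fe_coordinate(
        fi_value(ti_classify(te_identify(se_sense(world)), classes), task),
        claimed)
    return ni_choose(ne_options(parts), internal)
```

With a world of one background blob and one bolt, `control_step(world, {"battery": 0.9}, {"wanted": "bolt"}, {"bolt"}, set())` returns a grasp action; drain the battery and the same call returns `("recharge", None)` instead, which is the override idea in miniature.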
From this simple list of functions, it is pretty easy to see how almost any behavior can be modeled. With a hierarchy of subroutines, you can model behavior of almost any complexity you wish.
For example:
The “ant” runs its six leg-motor subroutines and moves forward out of the nest in search of food. It bumps into a blade of grass with its antennae and the leg subroutines stop, to be replaced in sequence by “back up,” “turn,” and “go forward” again. When the ant’s sensors detect the chemical traces of a dead fly, separated from the many other chemicals it is continuously sensing, the motor subroutines are overridden to turn toward the antenna with the stronger signal. The ant eventually either reaches the dead fly, at which point it identifies it as something classified as food, grasps it (more motor subroutines), and returns to the nest, or, if the fly has been moved, it keeps up a semi-random search until it runs low on sugars. The ant can find its way back to the nest without knowing where the nest actually is, because it and every other ant left a pheromone trail on the way out (another subroutine, running intermittently); it only has to encounter one of these many trails and follow it home.
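The ant story above is a priority-ordered set of rules, which is easy to sketch. The sensor names below are made up for the illustration; the point is only that higher-priority checks override the default "walk forward" subroutine.

```python
# Toy sketch of the ant as prioritized subroutines (sensor names hypothetical).
# Each check can override everything below it; "walk_forward" is the default.

def ant_step(sensors):
    if sensors.get("low_sugar"):
        # Si: internal state overrides everything; head home on a pheromone trail
        return "follow_pheromone_home"
    left = sensors.get("food_scent_left", 0.0)
    right = sensors.get("food_scent_right", 0.0)
    if left or right:
        # Se/Ni: turn toward the antenna reporting the stronger chemical signal
        return "turn_left" if left > right else "turn_right"
    if sensors.get("antenna_contact"):
        # Se: collision detected, so interrupt the leg subroutines
        return "back_up_and_turn"
    return "walk_forward"
```

So `ant_step({})` yields `"walk_forward"`, a grass-blade contact yields `"back_up_and_turn"`, and a low-sugar signal wins even when other sensors are firing.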
Basically, almost any behavior can be simulated with the above eight basic functions serving a master task-achieving program. You can build hierarchies of these task-achieving programs, each with the ability to override the others depending on the situation. For example, I normally like to eat and need to excrete, but I don’t allow the motor sequences which control these operations to activate at just any time. The operational importance of these subroutines is continuously adjusted by internal sensing (hunger, the need to pee), social propriety, and situational opportunity, and it isn’t always under my intentional control.
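That hierarchy-with-override scheme can itself be sketched as a simple arbiter: each task-achieving program reports a situation-adjusted urgency, and the most urgent one gets the actuators. The program names and urgency numbers below are invented for the illustration.

```python
# Toy sketch of hierarchical override (arbitration between task-achieving
# programs). All names and numbers are hypothetical.

def arbitrate(programs, state):
    """Return the name of the program with the highest current urgency."""
    active = [(p["urgency"](state), p["name"]) for p in programs]
    active = [(u, n) for u, n in active if u > 0]
    return max(active)[1] if active else None

programs = [
    # Eating only becomes urgent when hunger is high AND the situation allows it.
    {"name": "eat",     "urgency": lambda s: s["hunger"] if s["food_nearby"] else 0},
    # A weak default behavior that runs when nothing else demands control.
    {"name": "explore", "urgency": lambda s: 0.1},
]
```

With `{"hunger": 0.8, "food_nearby": True}` the arbiter picks `"eat"`; set `food_nearby` to `False` and the eat program's urgency drops to zero, so control falls back to `"explore"`.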
Some individuals are going to have stronger arms, stronger legs, be taller or shorter, thinner or fatter, etc., than others. So, too, will some of their basic functions be stronger or weaker than those of other individuals. This is what creates the “types.”
Personally, I’m not sure why some functions seem to go together (F & T, S & N), or why it happens that some functions inhibit others (Ti & Te), but there is probably a reason for this.
My guess is that this inhibitory effect exists because of the operational limitations (bandwidth) of the human brain as a heat-limited information processor. This is why you can see great detail in the center of your field of view with very slow updates, or sense coarse motion at the periphery of your visual field very quickly. The information content of (fine resolution) × (small area) × (slow updates) equals the information content of (coarse resolution) × (large area) × (fast updates).
It may not be possible to do Te "feature isolation and recognition" simultaneously with Ti "feature internal compare and classification". Both are extremely processor-intensive.
Anyway, that's my opinion of the "validity" of Socionics.