Some people accept them, some don't, and some are simply skeptical.
I used to be (at least partly) a supporter of them, but now I think they're highly questionable.
Something that has always "floated" in my mind is their number. We need only four dichotomies (2^4 = 16) to establish the 16 types. Now we have an alternative model which claims to be equivalent to the original (producing the same 16 types), despite also being based on dichotomies (two poles each), having a higher number of them, and that number being odd (15).
I'll use a math analogy to explain why (in my opinion) this does not make sense. Consider personalities as points; the original 4 dichotomies would constitute an orthogonal basis for a 4-dimensional space. There would be 16 "polarized" combinations, archetypes, limit cases or whatever we call them. Mathematically an orthogonal basis is only the most convenient one; nothing makes it more correct than any potential alternative.
But if we assume the 16 polarized combinations are all there can be (not a subset of something larger), then the space is necessarily 4-dimensional. This implies that any basis, i.e. any set of linearly independent vectors (dichotomies), has exactly 4 elements, no fewer and no more. If we add more vectors to the basis, they would no longer be linearly independent, so some of them would be redundant.
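To make the analogy concrete, here is a minimal sketch in Python (the names and the 0/1 encoding are my own, purely for illustration): it treats each type as a 4-bit point and each dichotomy as a 0/1 labelling of the 16 types, and computes how many linearly independent dichotomies the space admits over GF(2).

```python
from itertools import product

# The 16 types as all 4-bit vectors: one coordinate per Jungian dichotomy.
types = list(product([0, 1], repeat=4))                 # 2**4 = 16 "points"

# A dichotomy is a 0/1 labelling of the 16 types; the four Jungian dichotomies
# are simply the coordinate projections.
jungian = [tuple(t[i] for t in types) for i in range(4)]

def gf2_rank(vectors):
    """Number of linearly independent 0/1 vectors over GF(2), by XOR elimination."""
    rows = [int("".join(map(str, v)), 2) for v in vectors]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                            # lowest set bit of the pivot
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

print(len(types))          # 16
print(gf2_rank(jungian))   # 4 -- a basis for this space has exactly 4 dichotomies
```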
Starting from 4 elements, I can create whatever algorithm to produce whatever combinations, which is what happened with the RDs. But that does not mean the product has to be meaningful. If we assume that the 16 types produced by the RDs are the same as the 16 types produced by the Jungian ones, then the set of 15 RDs and the set of 4 JDs carry the same amount of information. And since the JDs are included among the RDs, the 4 JDs and the 11 remaining (RDs minus JDs) are equivalent. But since the cardinality of RDs minus JDs is still greater than the cardinality of the JDs, the 11 (RDs minus JDs) cannot be linearly independent...
- If the models are equivalent (same 16 types), then whatever information I can obtain with the RDs is already contained in the JDs, so they're unnecessary. Even though there are rules for translating between Model A and the RDs, the RDs are at least internally redundant.
- If we assume all the RDs are meaningful (linearly independent), then the two models are not equivalent, and we would have 2^15 = 32768 combinations (types). But then the algorithm that derives them from the original dichotomies would have to be discarded. (The sketch below checks both points numerically.)
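A continuation of the same sketch, again with my own naming, and assuming (as I understand the linked material) that the derived dichotomies are the non-empty XOR combinations of the four base ones. It counts how many distinct "profiles" the 15 dichotomies can actually distinguish.

```python
from itertools import product, combinations

types = list(product([0, 1], repeat=4))       # the 16 types as 4-bit points again

# The 15 non-empty XOR combinations of the 4 base dichotomies (this is how I
# understand the construction behind the RDs; there are 2**4 - 1 = 15 of them).
derived = [
    tuple(sum(t[i] for i in combo) % 2 for t in types)   # XOR of the chosen coordinates
    for r in range(1, 5)
    for combo in combinations(range(4), r)
]
print(len(derived))                           # 15 dichotomies

# A type's "profile" is its value on all 15 dichotomies.  If the 15 were truly
# independent properties, up to 2**15 = 32768 distinct profiles could occur;
# in fact only 16 ever do, so 11 of the 15 add no information beyond the 4 JDs.
profiles = {tuple(d[k] for d in derived) for k in range(len(types))}
print(len(profiles), 2 ** 15)                 # 16 vs 32768
```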
I'm not saying that no idea coming from them is useful. I can always think of a property X, for example positivism, and observe whether a particular user tends to manifest this property as (+) or (-). But this can only be done user by user. No rule correlating RDs with types is valid, due to their internal redundancy, if we at the same time treat the RDs as distinct, independent properties.
That was the Ti critique. Now the Te critique.
(As far as I know), the first strong supporter of the RDs was Mr. Gulenko. He spread them through his cognitive styles, which I personally think are even worse: they are deduced from questionable ideas (the RDs), and his methodology is dreadful. He has sometimes criticized his colleagues for being interested in "proving everything" whereas he is not, and he used this as "evidence" of how bad the CD cognitive style is compared with the others.
How ironic that the examples he provided of bad "CD behavior" are not ILEs. Descartes (circular reasoning in his "proof" of God) is a clear-cut LII, not an ILE (ba dum tss). And that guy Skinner is not one either, in my opinion. Rejecting (or at least ignoring) mental states does not fit well with an intuitive type, even less when combined with Ji (mental... state... mind... static...). Also, ILEs are usually rebellious (as EPs in general); behaviorism is far from something they would (statistically) support. I will not say which type I think Skinner is, because I do not want to derail the thread with that discussion.
Reinin himself described LIIs as having one of the minds most resistant to change...
Now let's take a look at the characteristics of the HP cognitive style. It is not described as the best (for Gulenko that seems to be DA), but there are no evident bad characteristics in it either; only that "others don't understand" it because it condenses a lot of information into packs. The perfect cognitive style for justifying his bad methodology. I mean, not that HP is bad, but he can use it as an apparent justification for not needing proofs, which is not true.
Nobody disagrees that the whole is greater than the sum of its parts. You can create a model step by step, deducing, testing and combining small portions (what he calls CD). Or you can instead construct the whole, complete model first, to explain a lot of things at once. But that model has to be submitted to (at least) the same amount of testing.
An analogy. Let's say we have a methodology capable of measuring one unit of information at a time. The CD user proposes portions of this size and tests them: propose, test, propose, test... The HP user prefers to imagine the whole thing; he/she creates a single "super-entity", so to speak. But since it is a whole, not a set of related portions, he/she thinks "one thing, one test". Wrong, wrong, wrong. If the model is size 10 (for example), you cannot assume that one successful size-1 test proves it.
In fact, you can never fully prove it, because every test gives results that are valid under particular experimental conditions. Outside those conditions, we don't know. We can never fully know, only know more/better than before...
Well, this could turn into a full debate about scientific methodology, and that's not exactly my goal. What I want to say is that there is a tendency in pseudosciences like this one (and outside them) to think "I've imagined it, so it's automatically true". When a concept is described in a way that "makes sense", it is assumed to be valid even though it has not been properly tested, or worse, cannot be tested (and if it's a pseudoscience, its concepts have a high chance of not being testable at all).
High doses of skepticism are required.
Reinin dichotomies are quite badly conceptualized and their justification is questionable. So are the cognitive styles. Yet many people accept them easily. "It seems to me", "I've observed it sometimes"... that does not suffice. The fact that an idea seems to work in one situation does not imply that it's true, or even functionally true. There is a myriad of potential conditions, known and unknown.
K0rpsy brought up the links about the maths behind the RDs:
I hadn't read them in a long time, so I admit my fault here. Many considerations about linear dependence and the like are addressed in them. I think the point still stands, because what is being questioned is not whether the RDs are mathematically correct (a valid algorithm can construct them), but whether they are meaningful (and therefore useful). The maths in this post does not seem to contradict what the links say, so apparently everything is still consistent.