Quote Originally Posted by Rebelondeck
As I have said many times, type structure has to be independent of physical structure; the app can run on iOS or Android, and both operating systems would likely produce the same outcomes, but not in an identical fashion. Type must be part of a so-called central processing structure (likely integral to its kernel); otherwise, one could end up with an N-type visual processor and an S-type audio processor; and if this is possible, say goodbye to Socionics theory...

a.k.a. I/O
iOS and Android devices both have the same basic hardware architecture, though (both run on ARM-based processors).

Yeah, I think it's possible to have such decoupling of modules. Model A is only good as a general overview of some patterns and ideas; you can't *directly* draw any specific concrete conclusions from it. To reach such a conclusion, you always have to add other reasoning along the way, and Model A cannot determine that additional reasoning on its own. So yeah, goodbye to it in that sense, but I'd think that's a well-known point.

And to get more specific about your post: I don't think the general overview Model A provides can determine how the processing structure works in that much detail, down to the modules you mention as examples. The model is too lightweight for that; it simply doesn't carry enough information to explain the workings of the brain at that level of detail.