
Thread: an example of Ti vs Te

  1. #1
    Pa3s (Ne-LII, 5w6)

    an example of Ti vs. Te

    I think I have found a pretty convincing example of the difference between Ti and Te. Actually, it now seems almost obvious to me, and I wonder why I never thought of this possibility before. The example I'm thinking of is the contrast between two ethical systems: Utilitarianism and the Categorical Imperative.

    Both are designed to help people choose the morally right option out of the various possibilities they have, but they work in quite different ways.

    I'd say that Utilitarianism closely matches the Te approach.

    First off, a short explanation of Utilitarianism: "Utilitarianism is an ethical theory holding that the proper course of action is the one that maximizes the overall "good" of the greatest number of individuals. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. Utilitarianism was described by Bentham as "the greatest happiness or greatest felicity principle"." (taken from Wikipedia)

    That means Utilitarianism is focused on the object and depends on the circumstances. What matters is not your intention when you do something, but the result. This can also be translated into "The end justifies the means." Te is often (but not exclusively, of course) connected with efficiency; you can find this in the "utility calculus", which is needed to find out which option brings the most benefit to the greatest number of people. Utilitarians try to account for every consequence of your actions; it's clearly outwardly focused.
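    As a caricature, the "utility calculus" can be written down in a few lines: score each option by the total benefit across everyone affected and pick the best. This is only a sketch; the option names and utility numbers below are invented for illustration.

    ```python
    # Toy "utility calculus": score each option by the summed benefit it
    # brings to everyone affected, then pick the highest-scoring option.
    # All options and utility values here are hypothetical.

    def best_option(options):
        """Return the name of the option with the largest summed utility."""
        return max(options, key=lambda name: sum(options[name]))

    # One utility number per affected person, per option (made-up figures).
    options = {
        "act":        [5, 3, 2, -4],   # most people gain, one person loses
        "do_nothing": [0, 0, 0, 0],
    }

    print(best_option(options))  # prints "act": total utility 6 beats 0
    ```

    Note that this already smuggles in the problem raised later in the thread: someone has to decide what the numbers mean, i.e. what counts as "good".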

    In practice, there might be situations in which you have to take a morally questionable action in order to do a greater good. Consider this example: do you shoot down a plane that terrorists have hijacked in order to kill innocent people, thereby killing the innocent passengers but saving many more innocent people? Or do you let it be and let the terrorists reach their goal, because you are never allowed to kill people? From a utilitarian perspective, you may shoot it down (given the right circumstances). The loss of the passengers is unfortunate, but if the plane flies into the building, not only are they dead, but hundreds of other people too. (Another example: may I torture a criminal to find out where he placed a bomb?)

    On the other hand, the Categorical Imperative would be connected to Ti, imho.

    “According to Kant, human beings occupy a special place in creation, and morality can be summed up in one ultimate commandment of reason, or imperative, from which all duties and obligations derive. He defined an imperative as any proposition that declares a certain action (or inaction) to be necessary.” […] “A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself. It is best known in its first formulation:
    "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." (Wikipedia again)

    Immanuel Kant criticizes Utilitarianism because it can easily go wrong. All the goals and motivations you take as a basis for your decisions are basically hypothetical. (E.g.: it's not at all certain that you'll be promoted if you work very hard.) Kant allows the so-called "hypothetical imperative", but only if the actions you take to reach that goal are good in themselves. As I said, for a Utilitarian it is sometimes okay to do a wrong thing if many people clearly benefit from it. But Kant never allows anything that is bad in itself. As you can see, the CI is much more concerned with the internal, logical consistency of your actions, not with how much good you do in reality. You could almost say it disregards the overall implications of your actions and focuses only on you not doing anything wrong (inwardly oriented).

    Of course, in an ideal world, in which everyone acts according to the law written above, nobody would do anything bad on purpose and the problem would be solved.

    What do you think about this idea?
    „Man can do what he wants but he cannot want what he wants.“
    – Arthur Schopenhauer

  2. #2
    not gonna be around as much anymore (C-IEE)

    Interesting, and it could be correct. Though it's not all cut-and-dried in application to type, because while I am a Te-valuer, I am also an Fi-valuer. So to say I would always take the most efficient route (i.e. save the most lives, even if it means taking a few) is not necessarily the case, since I am also guided by my Fi values, which work essentially like Ti, only dealing with ethics rather than logic.

    Hope I'm making sense, don't have much time to explain myself better.

    Where Te seeks efficiency and reaching the most profitable end, Fe seeks to make the most people happy.

    Where Ti adheres to a strict logical standard, Fi adheres to an equally-strict ethical ideal.

    Te/Fi, then, seeks to reach the most profitable end, but within the ethical limits set by Fi.

    Fe/Ti seeks to make people happy, but a definition of "happy" as defined by a Ti-based standard.
    My life's work (haha):
    http://www.the16types.info/vbulletin/blog.php?b=709
    Input, PLEASE. And thank you.

  3. #3
    Pa3s (Ne-LII, 5w6)

    Well, actually, you might be right about that. While I wrote this, I sometimes thought Utilitarianism would fit better with Fe and the CI with Fi.

  4. #4
    InkStrider

    Quote Originally Posted by MegaDoomer
    I'd say that Utilitarianism closely matches the Te approach. [...] What do you think about this idea?
    I can't speak for Ti, but I like Utilitarianism for Te. The philosophy weighs potential costs against potential benefits and makes decisions based on that. I tend to think that the end does justify the means, but only if it really does; in many cases, if we look closely, it doesn't.

    Utilitarianism claims that the action that maximizes the overall "good" for the greatest number of individuals is the better action. Sounds good enough, except that the view of what is "good" is subjective, since what is "good" is tied to what we value.

    ETA: This reminded me of a debate I had some years back with a Delta NF and an SLE. The scenario was this: say you were mountain climbing with a bunch of close friends, and all of you were tied to the same rope for safety. An avalanche occurs and all of you are caught hanging off a cliff. The people at the top of the cliff are unable to pull all of you up due to the weight. You are the last person at the end of the rope, and you and the friend above you each have a knife in your pocket. Will you choose death and cut the rope so that everyone else may live?

    Delta NF and I voted yes. SLE insisted she trusts that friend above would not cut off the rope and let her die.

    Note: I am not speaking of the morality of the issue here, but of the differences in the thought processes that produced these answers.
    Last edited by InkStrider; 09-22-2011 at 04:54 PM.

  5. #5
    ashlesha (c esi-se 6w7 spsx)

    that's why it would work well with Fi determining what's good.

    not that i necessarily agree with the premise, just a thought.

  6. #6
    woofwoofl

    Utilitarianism is awesome as hell and totally my thing; I can imagine no valid alternative...

    From what I know so far, Kant has some absolutely awful and ridiculous ideas... why would anything ever need to be universally applicable? How would any approach to anything be consistent enough to be universal in the first place when the situation on the ground is never the same, therefore changing up the approach itself each and every time? What could be more disconnected from an organic, changing world than icy, lifeless consistency? Why try to shoehorn the latter into the former in the first place?

  7. #7
    Haikus

    So in other words, Ti is retarded. Gotcha.

    In theory, extroversion is active, affective, and based on application, which is why, theoretically, extroverted functions run on a set of general principles with the same weight as they rely on common sense, tactics, and intuition to pull some material effect through. Introversion is patiently thoughtful and based on preparation, so, theoretically, introverted functions run on things like analysis, philosophy, and strategy; in other words, thought and contemplation.

    Personally, even if I were to define Te and Ti in these terms, I'd go about it more realistically: something along the lines of Te is problem solving, Ti is problem simplifying. Te forms general rules and ideas to get things done; Ti conducts analyses of specific phenomena. This is at least how they should stereotypically be defined, but then again I'm not much of a socionics fiend.
    Last edited by 717495; 09-23-2011 at 08:45 AM.

  8. #8
    (ILE)

    I think this is a good example of how to differentiate between Ti and Te. As you've explained, utilitarianism is heavily Te-oriented, and vice versa.

    I don't think it's valid, however, to assume that Te- or Ti-valuers will agree with the respective ethical philosophy, or with any ethical philosophy at all, for that matter...

    Kant and Bentham are frying in hell...

  9. #9
    Park (SLI 1w9 sp/sx)

    I'd probably lean towards the Utilitarian point of view, since with me it's always about context and why something is done, rather than what the action itself represents. Having said that, if my goal is clear and I am convinced that the positive consequences will outweigh the negative ones, I wouldn't hesitate to act in a morally ambiguous or questionable way. The only potential problem would be my confidence in properly evaluating the situation and making the right judgment ahead of time. But like they say, you can't make an omelet without breaking some eggs, so the risk factor is often inevitable.

    I do believe in consistency of actions, and staying true to one's beliefs and principles. I detest people who compromise themselves and easily adjust/shift their inner values to external influence or pressure.

    There are some things I'd never do, like certain forms of cheating, deceit, dishonesty, misuse of power and authority, compromising a relationship for personal benefit, etc., but these are still more process-oriented matters rather than single concrete actions that you can evaluate outside of context.

    I guess my point is, while the morality of my actions is mainly contextual, I still choose my paths carefully. Not sure if that makes sense, but hopefully gets my point across.
    Last edited by Park; 09-22-2011 at 06:15 PM.
    “Whether we fall by ambition, blood, or lust, like diamonds we are cut with our own dust.”

    Quote Originally Posted by Gilly
    You've done yourself a huge favor developmentally by mustering the balls to do something really fucking scary... in about the most vulnerable situation possible.

  10. #10
    Crispy

    Utilitarianism is not about omnipotence. If you knew the consequences for sure, then of course it would be the better philosophy. But knowing the consequences is impossible for any event. Utilitarianism is about creating a case-specific algorithm in your head and using it to find the perceived best solution, even if the action taken would be universally identified as wrong (meaning that if EVERYONE solved the problem the way you did, as a general rule of thumb, even bigger problems would arise). The Categorical Imperative, on the other hand, tries to justify not taking the perceived best action for the specific event, rationalizing that if everyone followed suit, chaos would ensue.
    ILI (FINAL ANSWER)

  11. #11
    Pa3s (Ne-LII, 5w6)

    Quote Originally Posted by Parkster View Post
    I guess my point is, while the morality of my actions is mainly contextual, I still choose my paths carefully.
    I guess I have a similar opinion. While the CI would be ideal if everyone acted according to it, it's not too well suited for reality. In daily life it's of course better to avoid harming anyone, but sometimes compromises must be made. (For example, if a part of your land is needed for a technical facility for the whole town.)

    @Crispy: Well, I didn't want to say that the users of Utilitarianism are (or should be) omnipotent in any way. That's exactly the problem, as you said. The goal is hypothetical, and if you harm someone in order to create a greater good but fail, the harm was still done and nobody benefited.

  12. #12
    Crispy

    Oh, I was just talking aloud, not at the OP.
    I'm only saying it because when I learned about both in an ethics class, they presented Kant as a failboat who thought white lies were ALWAYS wrong, and utilitarianism as the only theory trying to maximize happiness.

  13. #13
    EyeSeeCold

    I'd probably say Utilitarianism, in practice, could describe Base Te (or rather Je in general) manifestations in a person's motivations. But it ultimately breaks down when you consider that introverts are primarily connected to the Self, and also that Base Pi would more likely consider the fact that even under Utilitarianism not everyone gets to be "happy" or benefits from the "overall good". That means it's still flawed as a theory and questionable, because the selves compose the whole and not the other way around.

  14. #14
    Hot Scalding Gayser (IEI-Ni)

    I agree with you, OP.

    On the downside, it makes me want to be a cunt to Te/Fi valuers even more.

  15. #15
    Radio

    Interesting.

    I find my approach towards moral and ethical troubleshooting to be deeply utilitarian. Not sure if that has a correlation with type; it would be interesting to see if there is some sort of a consensus.

  16. #16
    krieger

    Quote Originally Posted by MegaDoomer
    In practice, there might be situations in which you have to do a morally questionable action in order to do a greater good. [...]
    shooting the plane is not incompatible with the CI either, since the universal principle you uphold could just as well be "save people by all means necessary".

    and yeah, you can make the CI do the same thing as utilitarianism by specifying the conditions of the law to extreme precision. not always necessary, though (like in this example). general point: the two are pretty often in agreement.

    i don't think Ti valuing types generally have a problem with "means to an end" reasoning. take Lenin, Marx, Stalin and the like for example.

  17. #17
    Pa3s (Ne-LII, 5w6)

    Quote Originally Posted by labocat View Post
    shooting the plane is not incompatible with the CI either, since the universal principle you uphold could just as well be "save people by all means necessary".
    Save people by killing people? The successful rescue of the civilians in the buildings, which are the targets of the terrorists, is hypothetical. Maybe it was a mistake and no terrorists are involved at all. As far as I know, according to the CI you aren't allowed to shoot down the plane in any case, because doing so would always involve killing people, which is in itself a bad thing.

  18. #18
    krieger

    no, the CI necessitates that your rules are universal and consistent. it does not say anything about the specifics of which rules you apply. "never kill" does not have to be among your rules. if your rule is "in any situation act so as to make the largest number of people live" you end up with a justification for killing in order to save others.

    if there are uncertainties involved you probably have to apply some sort of probabilistic calculation. i don't think this changes the essence of the problem. in the end you still end up needing to judge the outcomes and translate them into weights in the mathematical problem. but most people then just think "too complex" and use a heuristic instead (which also makes sense from the point of view of imperfect epistemics).
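    The "probabilistic calculation" mentioned above amounts to expected utility: weight each possible outcome's utility by its probability and compare actions. A minimal sketch, with all probabilities and utility numbers invented for illustration:

    ```python
    # Expected utility under uncertainty: each action maps to a list of
    # (probability, utility) pairs for its possible outcomes.
    # All numbers here are hypothetical.

    def expected_utility(outcomes):
        """Probability-weighted sum of outcome utilities for one action."""
        return sum(p * u for p, u in outcomes)

    actions = {
        "intervene": [(0.8, -10), (0.2, -50)],   # usually mild harm, sometimes worse
        "wait":      [(0.5, 0),   (0.5, -60)],   # coin flip between fine and bad
    }

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # "intervene": expected utility -18 beats -30
    ```

    As krieger notes, in practice the hard part is not the arithmetic but assigning the weights, which is why most people fall back on a heuristic instead.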

  19. #19
    Pa3s (Ne-LII, 5w6)

    ^ I think this sounds much more like Mill's rule utilitarianism than like the CI. I'm not an expert on this and it's been a while since I read about it, but there is no mathematical evaluation of utility in the CI. Mill wanted to make rules, once adopted, universal in order to create consistency.

    I've found this part in the German Wikipedia (translated):
    In contrast to rule utilitarianism, in which rules of action are judged only by the benefit they produce, [...] the categorical imperative is deontological. What is evaluated is not what the action brings about, but the nature of the intention behind it. If the will is good, then the action is morally justified as well. The will to do good is, by itself, what is morally good.
    Which means: the CI, in contrast to rule utilitarianism, is deontological. It is not the outcome that matters for a morally good action, but the good will behind it. If you want to do good, the action is morally justified. Because talent, money, and advantageous circumstances can all be used for "evil" purposes, the good will is the only thing that can actually be morally good.

  20. #20
    Park (SLI 1w9 sp/sx)

    Quote Originally Posted by Crispy View Post
    Utilitarianism is about creating a case-specific algorithm in your head and using it to find the perceived best solution, even if the action taken would be universally identified as wrong.
    Sounds good to me.

    Quote Originally Posted by MegaDoomer View Post
    The goal is hypothetical and if you harm someone in order to create a greater good but fail, the harm was still done and nobody benefited.
    Yup.
    Last edited by Park; 09-23-2011 at 04:10 PM.

  21. #21
    krieger

    the rule has to come from somewhere. how do you establish rules without judging outcomes? i think what you're getting at is that a single outcome does not disprove the rule if the rule did apply in (most of) the situations it was learned from. if an action was the best one to the extent the person could know, then a new outcome of that action does not turn it from a moral action into an immoral one. the person cannot be held fully accountable for his/her imperfect knowledge.

  22. #22
    (TiNe)

    There is no connection between Ti and teleology/deontology. However, a larger attitude issue afflicting a person's entire psyche can certainly result in Ti being used for teleological purposes when it shouldn't be. Just ask Dick Cheney... Teleology is about transforming an IM function's content by means of EM content, regardless of established social boundaries regarding the use of energy and information. It is, of course, completely relative.

  23. #23
    xerx

    I'm Ti and definitely more utilitarian. I've also encountered some hardcore adherents of Austrian economics (they were Gamma NTs) who love to apply deontological moral arguments against progressive taxation.

  24. #24
    (SLI)


    _____
    _____
    _____
    _____
    _____
    _____
    _________________
    _________________
    _________________
    _________________
    _________________


    Sorry, I felt the need to vandalize this thread with ASCII art.

  25. #25
    Creepy-pokeball

    Quote Originally Posted by BulletsAndDoves View Post
    I agree with you, OP.

    On the downside, it makes me want to be a cunt to Te/Fi valuers even more.
    Helpful

  26. #26

    Quote Originally Posted by MegaDoomer View Post
    What do you think about this idea?
    I think that in general the CI is Ji, but not necessarily Ti. As Kant formulated it, in terms of a system, it's probably Ti-oriented, but in a sense Fi and Fe are all about something categorical that you can't argue with. In terms of static categorical-imperative principles that go "beyond logic"... that seems Fi to me all the way.

    Utilitarianism is probably a Te-oriented philosophy. But "the ends justifies the means" (as typically interpreted) is merely a flawed construction that's not related to type. If it's viewed tautologically, yes, the ends justifies the means, because the ends includes the totality of everything, which includes any moral consequence of the means at all.

    But the way it's applied "in practice," the "ends justifies the means" means that if you pick the ends you think are important, then you can just conveniently forget other consequences of the means (which really are part of the ends, but not the ends you were thinking about).

    This is exactly what dogmatic politicians do when they view a certain end (say, achieving whatever their dogma is) and feel that they can lie, cheat, and do whatever they need to do in the service of their dogma (not to mention their power and self-enrichment, but that's another thing).

    Incidentally, many of these "ends justifies the means" dogmatists are probably in a Ti/Fe quadra; they have a system or lens through which they see the world, and they make emotional appeals to it, mixed with all kinds of lies and half-truths, which are justified in their view because that helps implement their beloved system.

    Wasn't Robespierre the perfect example of "ends justifies the means"? He felt that it was so important to save the revolution and destroy any possibility of return to the former monarchy that it was okay to kill a whole bunch of innocent people, even many who weren't even monarchists.

    There has been some debate about whether he was really LII; maybe he was LSI. But surely you wouldn't put him into some Te type merely because his philosophy was more utilitarian and "ends justifies the means" oriented?

    And then there was ******, that embarrassment to all people typed EIE, who loved the phrase "the ends justifies the means"; he consciously made this his dogma, maybe his categorical imperative. Perhaps nobody else lived by and abused this philosophy more than he did.

  27. #27
    Beautiful sky (EII INFj)

    Strong morals are a good thing; unfortunately, we live in a society that doesn't realize that. Having morals means living by a consistent moral standard, so I think Fi types live in a way they feel is consistent with what is right, in other words a categorical imperative: doing the things that bring the greatest amount of good to other people (or most people). That doesn't mean I would put others ahead of my inner circle, my family, but it means being a benevolent and responsible human being (those are the levels created in relations).

  28. #28
    FDG (ENTj)

    It's relatively unfortunate that almost every example of utilitarianism involves someone being killed in order to save some larger number of lives. You speak of expected utility: its usage is exactly why utilitarianism cannot be applied to life-or-death situations. When the probability of dying is well above zero, anyone's certainty equivalent (meaning the number of people that would have to be saved in order to "make up" for what was lost in utility by our own death) shoots towards infinity, because there is no way to clearly determine the worth of one's life (in civilized nations, at least).
    So basically, whenever life or death is at stake there is no Pareto optimum, and given that the notion of Pareto optimality is a direct extension of utilitarianism, no clear decision can be reached.
    Anyway, IF we exclude life-or-death situations, then I'm in favor of utilitarianism in the public sphere (i.e. we don't build a new road because we think it is good to build roads, but because the cost-to-benefit ratio is well below 1), but I prefer imperative logic in personal matters (i.e. if I get a birthday gift I find shitty, I'll still like it because your intention was positive).
    Still, a categorical imperative does not exclude utilitarianism as its subset; you only need to apply some kind of restriction that tells you to drop utilitarianism whenever it conflicts with the imperative.
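    The public-sphere rule just described (build the road only when the cost-to-benefit ratio is well below 1) reduces to a single comparison. A minimal sketch, with all figures invented:

    ```python
    # Cost-benefit gate for a public project: approve only when the
    # cost-to-benefit ratio falls below a chosen threshold.
    # The costs and benefits below are made-up figures.

    def should_build(cost, benefit, threshold=1.0):
        """True when cost/benefit is below the threshold (and benefit > 0)."""
        return benefit > 0 and cost / benefit < threshold

    print(should_build(cost=40_000_000, benefit=100_000_000))   # True: ratio 0.4
    print(should_build(cost=120_000_000, benefit=100_000_000))  # False: ratio 1.2
    ```

    This is also where the life-or-death objection bites: when the benefit side contains a life whose worth cannot be priced, the ratio is undefined and the gate gives no answer.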

    Quote Originally Posted by InkStrider
    ETA: This reminded me of a debate I had some years back with a Delta NF and SLE. [...] Will you choose death and cut off the rope so that everyone may live?
    That is a peculiar situation, because mountain climbing has its unspoken (and spoken, too) rules - one of which is that in dangerous situations cutting the rope is a DUTY, not a choice. Someone that doesn't cut the rope and manages to survive won't be trusted by anyone anymore.
    Obsequium amicos, veritas odium parit

  30. #30
    Banned
    Join Date
    Apr 2011
    Location
    State College, PA, USA
    TIM
    SLI
    Posts
    835
    Mentioned
    1 Post(s)
    Tagged
    0 Thread(s)

    Default

    I'd be commenting in this thread, but alas, these are my work days and I don't have a lot of time.

    I myself live a sheltered life where I avoid doing anything really dangerous or taking risks, so I don't normally have to make life-or-death decisions about which people will die and which people will live. I blank it out of my mind, I pretend that those situations could never happen, I pretend that I would never have to even think about it.

    However, there are real people who do have to make those decisions. People who are in fact mountain climbing together (like in one of the examples from Inkstrider) or people in the army have to decide who risks their lives for whom.

    I've spent a while reading about economics, though, and I see something like Te and Fi working together in the business world. (I realize the OP was a contrast between Te and Ti, but I'm talking about Fi instead.) That's an example where it wouldn't be a life-or-death decision about who gets to die. It's sort of like economic cooperation. People form trusting relationships because they know how the other person will behave in particular situations. If I do this, then you will do that, and I know that ahead of time.

    I'd write more if I could, maybe later when I'm not working.

  31. #31
    Snomunegot munenori2's Avatar
    Join Date
    Oct 2007
    Location
    Kansas
    TIM
    Introvert sp/sx
    Posts
    7,742
    Mentioned
    34 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by labocat View Post
    In practice, there might be situations in which you have to do a morally questionable action in order to do a greater good. You can see this as example: Shoot the planes if they’re hijacked by terrorists kill innocent people to save many more innocent people? Or let it be and let the terrorist reach their goal because you’re never allowed to kill people? But from a utilitarian perspective, you can shoot it (given the right circumstances). The loss of the passengers is unfortunate, but if they fly into the building, not only they are dead, but hundreds of other people too. (Another example: Can I torture a criminal to find out where he placed a bomb?)
    shooting the plane is not incompatible with the CI either, since the universal principle you uphold could just as well be "save people by all means necessary".

    and yeah, you can make the CI do the same thing as utilitarianism by specifying the conditions of the law to extreme precision. not always necessary, though (like in this example). general point: the two are pretty often in agreement.

    i don't think Ti valuing types generally have a problem with "means to an end" reasoning. take Lenin, Marx, Stalin and the like for example.
    Wouldn't this be in contradiction to the second formulation of the CI that states that one must act in such a way as to not treat yourself or others as means to an end, but as ends in themselves?

    1. Act only according to that maxim whereby you can at the same time will that it should become a universal law.
    2. Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.
    3. Therefore, every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends.


    I kind of agree that what you're talking about is more like rule-based utilitarianism, given the consequentialism and the implicit value placed on the maximization or minimization of good/bad. Kant's ethical thought seems more in line with generating a totality of imperatives that could be acted out by everyone without running into contradictions of moral judgement.

  32. #32
    Ti centric krieger's Avatar
    Join Date
    Sep 2006
    Posts
    5,937
    Mentioned
    80 Post(s)
    Tagged
    0 Thread(s)

    Default

    Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.
    i think the bolded word ("merely") is important. as long as you've given consideration to the tragedy of lives being lost, the criterion is met.

  33. #33
    Snomunegot munenori2's Avatar
    Join Date
    Oct 2007
    Location
    Kansas
    TIM
    Introvert sp/sx
    Posts
    7,742
    Mentioned
    34 Post(s)
    Tagged
    0 Thread(s)

    Default

    In a more general retort to the OP, I think that utilitarianism and deontology provide interesting cases to see variations of the reasoning methods that Te and Ti might employ, but would probably not be that useful as a test for pointing towards anyone's actual preference. I think that utilitarianism has a more widespread appeal (to Te favoring types as well as Ti-types, such as labcoat) and that Kant's thoughts would be far more stringently Ti-favored, but that it's also pretty far out in left field compared to more grounded ethical reasoning.

    One can appreciate the sense in which he tried to systematize a subject seemingly rife with intractable disagreement, but the difficulties in feasibly acting out his project I think win him a lot of (probably deserved) detractors. At the heart of it, he wanted principles that were ultimately true, whereby debating the value or worth of particular factors was irrelevant to arriving at correct moral judgments.
    Last edited by munenori2; 09-23-2011 at 04:43 PM. Reason: GRAMMARZ

  34. #34
    Snomunegot munenori2's Avatar
    Join Date
    Oct 2007
    Location
    Kansas
    TIM
    Introvert sp/sx
    Posts
    7,742
    Mentioned
    34 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by labocat View Post
    Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.
    i think the bolded word ("merely") is important. as long as you've given consideration to the tragedy of lives being lost, the criterion is met.
    How does consideration of the tragedy further the ends of the dudes you're blowing up?

    Edit: Unless it was a plane hijacked full of already suicidal people, but then you would be furthering an already (Kantian) blameworthy end.

  35. #35
    Ti centric krieger's Avatar
    Join Date
    Sep 2006
    Posts
    5,937
    Mentioned
    80 Post(s)
    Tagged
    0 Thread(s)

    Default

    it doesn't. that's what you give consideration to. and the "not merely" qualifier allows you to stretch the envelope in this regard when other considerations are of overwhelming import.

  36. #36
    Robot Assassin Pa3s's Avatar
    Join Date
    Mar 2010
    Location
    Germany
    TIM
    Ne-LII, 5w6
    Posts
    3,629
    Mentioned
    46 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by munenori2 View Post
    I think that utilitarianism has a more widespread appeal (to Te favoring types as well as Ti-types, such as labcoat) and that Kant's thoughts would be far more stringently Ti-favored, but that it's also pretty far out in left field compared to more grounded ethical reasoning.
    I also don't think that this could be a useful test to find out a person's preference for Ti or Te. As you already said, if you asked, most people would probably prefer utilitarianism because it's less "ideal" and much more tangible.
    „Man can do what he wants but he cannot want what he wants.“
    – Arthur Schopenhauer

  37. #37
    Snomunegot munenori2's Avatar
    Join Date
    Oct 2007
    Location
    Kansas
    TIM
    Introvert sp/sx
    Posts
    7,742
    Mentioned
    34 Post(s)
    Tagged
    0 Thread(s)

    Default

    Given the way Kant himself describes it, I'd be more inclined to say that he's trying to arrive at a system where people are on the same page and where they can cooperatively and compassionately work towards common ends. I can see how you might interpret it the way you have, but personally I always saw it as a statement expressing:

    1. Do not treat others simply as a means to an end.
    2. Treat others as ends in themselves.
    3. Therefore, if you treat another as a means, do so in a way that at the same time treats them as an end in themselves.

    Thus it's not really a stretchy disclaimer about remembering not to use people just to get shit done, but rather says to only use people in a way that is mutually beneficial and that it is morally impermissible in his system to do otherwise.

  38. #38
    24601 ClownsandEntropy's Avatar
    Join Date
    Mar 2010
    Location
    Melbourne, Australia
    TIM
    LII, 5w6
    Posts
    670
    Mentioned
    24 Post(s)
    Tagged
    0 Thread(s)

    Default

    Quote Originally Posted by munenori2 View Post
    In a more general retort to the OP, I think that utilitarianism and deontology provide interesting cases to see variations of the reasoning methods that Te and Ti might employ, but would probably not be that useful as a test for pointing towards anyone's actual preference.
    ^This. Kant's categorical imperative could be seen as an example of Ti because, above all, it assumes people are rational, and so what is moral is what can be done without being illogical (irrational). Bentham's logic seems more Te-flavored because it focuses on the utility of everything. I mean, whether one agrees with either theory is probably not type related. To begin with, utilitarianism and deontological ethics are both abstract systems; both are ways to say *this* is moral and *this* is not. It's just that Bentham considers consequences, whereas Kant considers the act.

    Essentially: while they could be an example of someone using Te vs Ti logic (which they aren't necessarily), I don't think they provide much clarity in defining or indicating the difference between Te and Ti logic.
    Warm Regards,



    Clowns & Entropy

  39. #39
    EffyCold The Ineffable's Avatar
    Join Date
    Dec 2010
    Location
    Wallachia
    TIM
    ILE
    Posts
    2,191
    Mentioned
    14 Post(s)
    Tagged
    0 Thread(s)

    Default

    The problem with non-deontological ethics is that its fairness, correctness, and therefore righteousness are arbitrary. Someone who became a hero by chance is not a hero, someone who formally observes religious rituals is not actually respecting them as they were intended, someone who has learned everything philosophical he could read is not a philosopher. Te valuers often believe that Ti valuers just like to put labels on things (esp. Betas), and that's because they don't put value in one's reasons, but in what one can do. "If it walks like a duck and talks like a duck, it is a duck" - however, the Ti-valuing sense of inner consistency (authenticity) does not permit that: the perfect imitation is not the original, and what differentiates them is the reasons which determined the specific design of the original [1].

    The action must be a necessary consequence of the intention for it to be called (deontological) ethics, but utilitarianism contradicts this: the action is left undefined.
    Quote Originally Posted by ClownsandEntropy View Post
    ^This. Kant's categorical imperative could be seen as an example of Ti, because above all it assumes people are rational and so what is moral is what can be done without being illogical (irrational).
    He called the manner of cognition that causes the latter "historical", which is not necessarily irrational (just potentially biased or incorrect); my additions for clarity are in square brackets:
    If I make complete abstraction of the content of cognition, objectively considered, all cognition is, from a subjective point of view, either historical or rational. Historical cognition is cognitio ex datis, [while] rational [is], cognitio ex principiis. Whatever may be the original source of a cognition, it is, in relation to the person who possesses it, merely historical, if he knows only what has been given him from another quarter, whether that knowledge was communicated by direct experience or by instruction. Thus the Person who has learned a system of philosophy—say the Wolfian—although he has a perfect knowledge of all the principles, definitions, and arguments in that philosophy, as well as of the divisions that have been made of the system, possesses really no more than an historical knowledge of the Wolfian system; he knows only what has been told him, his judgements are only those which he has received from his teachers. Dispute the validity of a definition, and he is completely at a loss to find another. He has formed his mind on another's; but the imitative faculty is not the productive. His knowledge has not been drawn from reason; and although, objectively considered, it is rational knowledge, subjectively, it is merely historical. He has learned this or that philosophy and is merely a plaster cast of a living man. Rational cognitions which are objective, that is, which have their source in reason, can be so termed from a subjective point of view, only when they have been drawn by the individual himself from the sources of reason, that is, from principles; and it is in this way alone that criticism, or even the rejection of what has been already learned, can spring up in the mind.
    ---

    [1] - one can't call imitation rational, because the same intention (copying) can end up in different designs, just to match the original. The design is therefore dictated externally and makes no internal sense.
    Shock intuition, diamond logic.
     

    The16types.info Scientific Model

  40. #40


    Default

    I think that the Categorical Imperative (and, to a lesser extent, deontology in general) is more linked to Ti than Te. The Categorical Imperative relies on the strictest moral laws, allowing no exceptions, in order to be morally right. The result is a standardization of moral law that lets every man relate to every other on the same level (should every man follow such a restrictive system). Universal adoption of the Categorical Imperative would create a world where no cultural, societal, or personal moral differences exist. I think part of it could be described as regulation, standardization, and synthesis of differing ideas and viewpoints into a system where these differences become nonexistent (obviously there is some Ne tinged in here...)

    Utilitarianism, on the other hand, I have a hard time seeing as related to Te. It seems to me to be more Fi related. I personally believe that there are no moral imperatives and that each man is free to do what he wishes, though he is morally responsible for his actions according to his own system of morality. Some might say I have committed ethical suicide.

    I would postulate on something that might more plausibly replace utilitarianism as the correlating philosophy on ethics for Te, but I can't really think of anything at the moment.
    Last edited by nil; 09-25-2011 at 02:39 PM.

