Tuesday, January 6, 2009

Truth and Pragmatism

Criteria of Truth Based Upon Pragmatic Considerations

By Austin Cline

One common criterion of truth is simple, straightforward pragmatism: if a belief “works,” then it must be true. If it doesn’t work, then it must be false and should be discarded in favor of something else. This criterion has the distinct advantage of being readily testable — in fact, the principle that beliefs and ideas must be verified before being accepted resonates strongly in scientific circles.

The pragmatic test for truth goes a bit further than just the scientific principle of verification, however. For pragmatists, the very meaning and nature of an idea can only be discerned when it is applied to real-world situations. Ideas which are only in the mind have less substance and less relevance. It is in the actions of our lives that meaning and truth are located, not in idle speculation.

There is certainly a lot to be said for relying on pragmatism when trying to distinguish between true and false ideas. After all, you can always point to a successful test or project and demonstrate to others the validity of your beliefs. If your ideas weren’t true, they couldn’t possibly result in such success, right?

Well, maybe they could. The problem with relying heavily or even exclusively upon pragmatism is that it simply isn’t the case that only true beliefs “work” and false beliefs don’t. It’s entirely possible for the success to be the consequence of something other than your belief. For example, a doctor could prescribe a medication to a sick person and watch an illness disappear — but that doesn’t automatically mean that the medication was the cause of the improvement. Maybe it was a change in the patient’s diet, or perhaps the patient’s immune system finally won out.

In addition to false beliefs appearing to “work,” true beliefs can also appear to fail. Once again, factors which lie outside your knowledge and control can intervene to cause a project which should succeed to ultimately fail. This happens less often, especially in carefully controlled studies, and as a result this sort of Negative Pragmatism (failed tests point to false ideas) is a bit stronger. Nevertheless, it really only works after rigorous and repeated testing; a single failed test is often not enough to give up on an idea.

The problem here is that the world around us is much more complex than we tend to realize on a conscious level. No matter what we are doing, there are far more factors involved than we usually think of — many of which we just take for granted, like natural laws or our own memories. Some things (like natural laws) are indeed reliable, but others (like the human memory) are not nearly so reliable as we assume.

Because of this, it can be very difficult to tell whether or not something is “working” at all, much less why. When we attribute something that works to a single belief which we then conclude is true, we are often simplifying matters incredibly. Sometimes this isn’t a problem — and we do often have to simplify because, quite frankly, life and nature are just too complex to take in all at once.

However necessary simplification may be, it still introduces a level of uncertainty into our calculations and increases the chances of error. As a consequence, even though pragmatism can be a very practical and useful test for truth, it is still one which needs to be used with caution.

Monday, January 5, 2009

Philosophical Understanding of the World Through Naturalism and Idealism

by Giancarlo Massaro

In order to recognize the difference between Naturalism and Idealism, one must understand how each intertwines with the philosophical understanding of the world. Naturalism is defined as "the belief that reality consists of the natural world." Idealism, on the other hand, is defined as "a set of beliefs which are a rigid system of the way life is 'supposed to be' or 'should be.'" Both of these terms held significant value for two of the most important figures in philosophy, Aristotle and Plato.
Naturalism involves everything seen by the human eye. It is based upon space and time; nothing exists outside of them. Aristotle supported this view and further explained that motion is the most dominant force and that everything is formed from matter. He was the father of science and believed in form and matter. He fought against Plato's beliefs involving Idealism because he wanted proof. Naturalism has to do with reality, perception, truth, and material; these four play a major role in the true knowledge of naturalism. Everything in naturalism is driven by the laws of nature. For example, water is a form of naturalism: it can be ice or carbonated, but no matter what, it is still water in the eye of a human. Nature always acts with purpose, and the only way to grasp it is by discovering a true understanding of its essential purpose. A clear understanding of nature as it is can help determine a human's behavior.
Idealism involves everything that is real, something that is felt by a human. It is also a belief that one takes seriously and holds onto. Belief is a main point of idealism; it is what drives humans to faith and hope. The senses also play a major role in idealism; they are what identify natural things for a human being. Plato supported this view and added that nature has no morality but that everything relies on touch, smell, and anything else involving the senses. For example, justice, courage, and honesty cannot be felt with the hands but rather with feelings. Idealism was also a "source for all things," and faith was a huge factor in it.

Plato clashed with Aristotle because Plato refused to accept Aristotle's explanation of naturalism. Their beliefs were dissimilar, and each felt that he was right about what he had studied and believed in.

The similarities between the two are that both have to do with humans' ability to form conclusions about the things they observe. Idealism and Naturalism both involve strong beliefs and accounts of why things work the way they do. Plato and Aristotle were alike in that both fought to prove a point about their beliefs. Both have major significance in philosophy and are studied by college students to further our knowledge of philosophy. Also, both involve forms. For Plato there are the immaterial, proof, form, nature, potential, and faith. Aristotle had six major ones: nature/science, form, earth, water, air, and fire. All have some effect on our world today.
The differences between the two are that one (idealism) involves the senses of touch and smell, whereas the other (naturalism) includes purely vision, what is seen by the eye. Both are key factors in philosophy but are complete opposites when identifying with the outer world. They have different jobs and aim at different things involved in nature or the world. The other difference is that one is belief (idealism) versus what is actually in front of you (naturalism). When you believe in something, you hold it sacred, whereas with what is right in front of you there is no need for belief.
Works Cited:
1) http://www.wsu.edu/~campbelld/amlit/natural.htm
2) http://www.allaboutphilosophy.org/naturalism.htm
3) http://www.ditext.com/moore/refute.html
4) http://www.iep.utm.edu/g/germidea.htm

Sunday, January 4, 2009

What is Pragmatism?

Definition:
Pragmatism refers specifically to the philosophy espoused by early American philosophers like William James and C. S. Peirce, and generally to later philosophies derived from those earlier efforts. As Peirce, who coined the term, wrote:

Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then our conception of these effects is the whole of our conception of the object.

William James wrote:

Ideas become true just so far as they help us to get into satisfactory relations with other parts of our experience.

What is Pragmatism?:
Pragmatism is an American philosophy from the early 20th century. According to Pragmatism, the truth or meaning of an idea or a proposition lies in its observable practical consequences rather than anything metaphysical. It can be summarized by the phrase “whatever works, is likely true.” Because reality changes, “whatever works” will also change — thus, truth must also be changeable and no one can claim to possess any final or ultimate truth.

Important Books on Pragmatism:
Pragmatism, by William James
The Meaning of Truth, by William James
Logic: The Theory of Inquiry, by John Dewey
Human Nature and Conduct, by John Dewey
The Philosophy of the Act, by George H. Mead
Mind and the World Order, by C.I. Lewis

Important Philosophers of Pragmatism:
William James
C. S. (Charles Sanders) Peirce
George H. Mead
John Dewey
W.V. Quine
C.I. Lewis

Pragmatism and Natural Science:
Pragmatism became popular with American philosophers and even the American public in the early 20th century because of its close association with modern natural and social sciences. The scientific worldview was growing in both influence and authority; pragmatism, in turn, was regarded as a philosophical sibling or cousin which was believed to be capable of producing the same progress with inquiry into subjects like morals and the meaning of life.

C.S. Peirce on Pragmatism:
C.S. Peirce, who coined the term Pragmatism, saw it more as a technique to help us find solutions than as a philosophy or a solution to problems. Peirce used it as a means of developing linguistic and conceptual clarity (and thereby facilitating communication) when dealing with intellectual problems. He wrote:

    “Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then our conception of these effects is the whole of our conception of the object.”

William James on Pragmatism:
William James is the most famous philosopher of Pragmatism, and he is the one who made Pragmatism itself famous. For James, Pragmatism was about value and morality: the purpose of philosophy was to understand what had value to us and why. James argued that ideas and beliefs have value to us only when they work.

James wrote on Pragmatism:

    “Ideas become true just so far as they help us to get into satisfactory relations with other parts of our experience.”

John Dewey on Pragmatism:
In a philosophy he called Instrumentalism, John Dewey attempted to combine both Peirce's and James's philosophies of Pragmatism. It was thus about logical concepts as well as ethical analysis. Instrumentalism describes Dewey's ideas about the conditions under which reasoning and inquiry occur. On the one hand, inquiry should be controlled by logical constraints; on the other hand, it is directed at producing goods and valued satisfactions.

Cognitivism Vs. Skepticism in Philosophy

by Farzin Mojtabai

Differing Views in Philosophy

The claim that because people change their minds about right and wrong over time there is nothing to figure out is not a logically valid inference. The fact that people change their minds simply demonstrates a constant desire to correct the wrong and move toward the right. A change in people's minds indicates that a person engaged in a specific mentality has identified a wrong and is striving toward a better way. If people really had everything figured out and there was nothing left to know about right and wrong, then why would any change transpire at all? According to the skeptic making such an argument, there would already be a clear understanding, and change would not need to take place. If "x" changes his mind about right and wrong, it does not necessarily mean that right and wrong are simply opinions and there is nothing to figure out; rather, it can mean that opinions and feelings about right and wrong vary among people and can change with time and knowledge.
A counterexample to the skeptic view would be the sacrifice of humans in some ancient cultures to please the sun gods and ensure the sun's proper rising. With modern science and knowledge of the sun's precise path and pattern, it was realized that such sacrifice of human life serves no purpose. Before obtaining this knowledge, no culture could claim right or wrong, because not enough had been made evident and proven, meaning some validity could be ascribed to such a practice. However, after acquiring the correct knowledge, the practice was halted and many lives were ultimately spared. This clearly demonstrates that there is a great deal to know, and clearly a right and wrong in certain situations; many times, seeing change in people indicates an acknowledgement of the right and an end to the wrong.

A cognitivist is one who believes that there is a clear right and wrong to be figured out, but does not claim to always know what that right and wrong is. Just because there is a right and wrong does not necessarily mean that it is known; rather, we are striving to figure it out. Looking at the history of mankind, it is through experience, often painful, that one's understanding of right and wrong evolves. It is through seeing the pain and hardship of slavery and the Holocaust that we identify slave labor and genocide as being wrong. Whether we act to stop all instances of wrong is one thing, but we still figured out that they are wrong through the mental and physical abuse inflicted by people upon other people. How could we even identify what slavery is if we had not seen and experienced it? Under the skeptic view, knowledge is simply what someone thinks, but that assumption is off point. A cognitivist could claim that people did not identify the wrong in acts like slave labor early in human history because not enough of it had occurred to develop a proper feeling that the action is wrong because it inflicts harm on people. The idea is to learn from history and formulate a better and morally acceptable way of living based on the mistakes of the past. Early humans had nothing to judge their actions against. They also lacked the clear knowledge indicating these things were wrong, which the cognitivist believes is there but simply needs to be discovered. For instance, human sacrifice, as mentioned above, came in many forms and at many times, often not with the intention of harming but under misguided beliefs, such as that the sun would not otherwise rise. Once people were more privy to facts like the sun's reliable rising, or realized that sacrificing all widowed women was dwindling the population, they escaped their wrongful mentality. Or people could say that as long as it benefits certain people and gets things done, it is okay to use a certain population, the unfortunate or the different, for slave labor. However, as such customs grew and spread to various lands, more and more people were suffering and more empathy was obviously taking place. People began to draw on experience, learn, and change their ways.
Good reasons to study and form moral assessments about other cultures are to compare them against your own standards and to gain a more objective view of your own cultural practices from a different perspective. How can you ever know whether your own society is morally in the right if that is all you know? Lack of knowledge can always lead to wrong practices and injustice. To witness injustice from the view of an outsider, and simply observe immorality, gives one a new look at one's own culture which one had never thought about before. One can also morally assess other cultures in order to attempt to correct the moral wrongs one perceives subsisting in those societies. For instance, the Sudan genocide, in which 400,000 people have been slaughtered and millions displaced, can be assessed from a foreign perspective as morally reprehensible and can spur action to correct such injustice. Were it not for foreign intervention, the Rwanda genocide could still be occurring today. Above all the associations we give to ourselves, we are first and foremost human beings, and to objectively assess other cultures can teach both you and them a great deal and advance a more universal right and wrong.
I think one might say we should never, or only cautiously, assess the practices of another culture, out of fear that critiquing another's culture could antagonize that culture and give the impression that you are condescending toward its practices. This cultural relativist idea, that what is right is simply what is accepted by the culture as being right, aims to slam the door shut on any moral intervention among cultures. The memory of years of colonialism leaves people much more cautious when dealing and interacting with foreign cultures. They fear that a traditional cultural practice, like trying out one's sword, although appearing brutal, might be accepted, and that looking down on that culture for such actions is beyond our realm.

Saturday, January 3, 2009

What is Idealism? History of Idealism, Idealist Philosophy, Idealists

What is Idealism?:
Idealism refers to any philosophy that argues that reality is somehow dependent upon the mind rather than independent of it. More extreme versions will deny that the “world” even exists outside of our minds. Narrow versions argue that our understanding of reality reflects the workings of our mind first and foremost — that the properties of objects have no standing independent of minds perceiving them.
Important Books on Idealism:
The World and the Individual, by Josiah Royce
Principles of Human Knowledge, by George Berkeley
Phenomenology of Spirit, by G.W.F. Hegel
Important Philosophers of Idealism:
Plato
Gottfried Wilhelm Leibniz
Georg Wilhelm Friedrich Hegel
Immanuel Kant
George Berkeley
Josiah Royce
What is the “Mind” in Idealism?:
The nature and identity of the “mind” upon which reality is dependent is one issue that has divided idealists of various sorts. Some argue that there is some objective mind outside of nature, some argue that it is simply the common power of reason or rationality, some argue that it is the collective mental faculties of society, and some focus simply on the minds of individual human beings.
Platonic Idealism:
According to Platonic Idealism, there exists a perfect realm of Forms and Ideas, and our world merely contains shadows of that realm.
Subjective Idealism:
According to Subjective Idealism, only ideas can be known or have any reality (in its most extreme form it becomes solipsism).
Transcendental Idealism:
According to Transcendental Idealism, developed by Kant, all knowledge originates in perceived phenomena which have been organized by the categories.
Absolute Idealism:
According to Absolute Idealism, all objects are identical with some idea, and ideal knowledge is itself the system of ideas. It is also known as Objective Idealism and is the sort of idealism promoted by Hegel. Unlike the other forms of idealism, it is monistic: there is only one mind in which reality is created.

Friday, January 2, 2009

Subjective Idealism

by: Braedon Betzner

According to the Encyclopedia Britannica, Berkeley believed that "nothing exists except minds and spirits and their perceptions or ideas" (subjective). He suggested that if an idea is not perceived by a mind, then the idea does not exist. I beg to differ: if no one perceives a thing or idea, that does not mean it does not exist. It can still exist without an individual perceiving it. Take, for instance, the idea of whether God truly exists. Even if one individual does not perceive that he exists, the idea of God can still be held to exist by others, shown by unexplained miracles and the multiple chances given to undeserving people. Lisa Downing states that Berkeley argues for the existence of God by stating, "Ideas which depend on our own finite human wills are not (constituents of) real things."
In order for an idea to be perceived, it must be perceived through an individual's sensations. What if an individual lacks one or more of the senses needed to perceive an idea? For example, suppose someone has been blind and deaf since birth and cannot see or hear; he cannot perceive an idea visually or by hearing it. If there were a door in this blind and deaf individual's way, could he perceive it? If he could not see it in his way or hear it open or close, would he perceive it? As mentioned in the Encyclopedia Britannica, "reality of the outside world is contingent of a knower" (subjective). Therefore, if the blind and deaf individual cannot see or hear the door, then the idea of the door does not exist in his reality. However, that does not mean it does not exist in another individual's reality. If I saw this blind and deaf man about to run into the door, I would be aware of the situation through my senses: I would see that he was about to hit the door and could warn him about it. So just because someone cannot perceive an idea does not mean it does not exist. It may not exist in their reality, but to others it may exist.
In regard to Berkeley's esse est percipi proposition, I still do not think it is valid. If we do not perceive an idea, then someone else might perceive it for us and make it exist.

Downing, Lisa. "George Berkeley." Stanford Encyclopedia of Philosophy. 10 Sep. 2004. Stanford University. 07 Aug. 2008.
"Subjective Idealism." Encyclopedia Britannica. 2008. Encyclopedia Britannica Online. 06 Aug. 2008.

Skepticism

Skepticism is the Western philosophical tradition that maintains that human beings can never arrive at any kind of certain knowledge. Originating in Greece in the middle of the fourth century BC, skepticism and its derivatives are based on the following principles:

  • There is no such thing as certainty in human knowledge.
  • All human knowledge is only probably true: that is, true most of the time, or else not true at all.

Several non-Western cultures have skeptical traditions, particularly Buddhist philosophy, but properly speaking, skepticism refers only to a Greek philosophical tradition and its Greek, Roman, and European derivatives.

The school of Skeptic philosophers was called the "Skeptikoi" in Greece. The word is derived from the Greek verb "skeptomai," which means "to look carefully, to reflect." The hallmark of the Skeptikoi was caution; they refused to be caught in assertions that could be proven false. In fact, the entire project of skeptic philosophy was to present all knowledge as opinion only, that is, to assert nothing as true.

In this, they were firmly planted in a tradition started a century earlier by Socrates. Socrates claimed that he knew one and only one thing: that he knew nothing. So he would never go about making any assertions or stating opinions whatsoever. Instead, he set about questioning people who claimed to have knowledge, ostensibly for the purpose of learning from them, using a judicial cross-examination called elenchus. If someone made an assertion, such as "Virtue means acting in accordance with public morality," he would keep questioning the speaker until he had forced him into a contradiction. As in a court of law, this contradiction proved that the speaker was lying in some way, in this case, that the speaker did not really know what he claimed to know. If an assertion can be worked into a contradiction, the original assertion was wrong. While Socrates never claimed that knowledge is impossible, still, at his death, he never claimed to have discovered any piece of knowledge whatsoever.

After its introduction into Greek culture at the end of the fourth century BC, skepticism influenced nearly all other Greek philosophies. Both Hellenistic and Roman philosophies took it as a given that certain knowledge was impossible; the focus of Greek and Roman philosophy then turned to probable knowledge, that is, knowledge that is true most of the time.

Christianity, however, introduced a dilemma into Greek and Roman philosophies that were primarily based on skeptical principles. In many ways the philosophy of Christianity, which insisted on an absolute knowledge of the divine and of ethics, did not fit the Greek and Roman skeptical emphasis on probable knowledge. Paul of Tarsus, one of the original founders of Christianity, answered the dilemma simply: the knowledge of the Romans and Greeks, that is, human knowledge, is the knowledge of fools. Knowledge that rejects human reasoning, which, after all, leads to skepticism, is the knowledge of the wise. Christianity at its inception, then, had a strong anti-rational perspective. This did not, however, make the skeptical problem go away. Much of the history of early Christian philosophy is an attempt to graft Greek and Roman philosophical methods and questions onto the new religion, and the first thing that had to go was the insistence on skepticism and probable knowledge. So early Christian thinkers such as Augustine and Boethius took on the epistemological traditions of Greece and Rome to demonstrate that one could arrive at certain knowledge in matters of Christian religion. Augustine devoted an entire book, "Against the Academics," to proving that human beings can indeed arrive at certain knowledge.

Skepticism, however, was radically reintroduced into Western culture in the sixteenth and seventeenth centuries. The break-up of the Christian church and the bloodshed that followed it led people to question seriously religious and philosophical traditions that had long gone unquestioned. Thinkers such as Montaigne in France and Francis Bacon in England took as their starting point the idea that they knew nothing for certain, particularly religious truth. Montaigne invented an entirely new literary format which he called the essai, or "attempt, trial"; this is the origin of the modern-day essay. The "essay" took as its starting point the idea that the writer does not really know what he is talking about. Montaigne would propose an issue, walk around it for a while considering various alternatives, and then end pretty much where he started, uncertain what conclusion to draw. This is why he called his writings "attempts" (essais in French), for they were attempts at drawing conclusions rather than finished products.

  The most radical introduction of Greek skeptical traditions back into the Western tradition occurred in the works of Blaise Pascal and René Descartes. Both thinkers refused to accept any piece of knowledge whatsoever as true, and both tried to rebuild a Christian faith based on this radical questioning of truth.

Descartes set about reinventing Western epistemology with a radical perspective: what if nothing were true? How, if you doubted everything, could you find something, anything, that was true? His conclusion, of course, was the famous cogito: Cogito ergo sum, or, "I think, therefore I am." From this base he built up a series of other true propositions, including the existence of God. In many ways, Descartes was trying to accomplish the same thing that Augustine, Boethius, and other early Christian thinkers had attempted: how do you address the possibility, firmly entrenched in the Western tradition, that there may be no such thing as certain knowledge? How do you reconcile that with religious faith? For that was Descartes' ultimate goal: to prove the existence of God and the validity of the Christian religion.

Although he saw himself as answering old and vexing questions in the Christian tradition, he actually created a radically new way of approaching the world: systematic doubt. The hallmark of Cartesianism is setting up a formal system of doubt, that is, of questioning all propositions and conclusions using a formal system. Once one has arrived at a certain piece of knowledge, that piece of knowledge then becomes the basis for clearing up other doubts. Descartes' systematic doubt became the basis of the Enlightenment and the modern scientific tradition. One begins with a proposition, or hypothesis, that is in doubt and then tests that proposition until one arrives, more or less, at a certain conclusion. That does not, however, end the story. When confronted by the conclusions of others, one's job is to doubt those conclusions and redo the tests. Once a hypothesis has been tested and retested, one can conclude that one has arrived at a "scientific truth." That, of course, does not end it either, for all scientific truths can be doubted sometime in the future. In other words, although scientists speak about certainty and truth all the time, the foundational epistemology is skeptical: doubt anything and everything.