Tuesday, January 6, 2009

Truth and Pragmatism

Criteria of Truth Based Upon Pragmatic Considerations

By Austin Cline

One common criterion of truth is simple, straightforward pragmatism: if a belief “works,” then it must be true. If it doesn’t work, then it must be false and should be discarded in favor of something else. This criterion has the distinct advantage of being readily testable — in fact, the principle that beliefs and ideas must be verified before being accepted resonates strongly in scientific circles.

The pragmatic test for truth goes a bit further than just the scientific principle of verification, however. For pragmatists, the very meaning and nature of an idea can only be discerned when it is applied to real-world situations. Ideas which are only in the mind have less substance and less relevance. It is in the actions of our lives that meaning and truth are located, not in idle speculation.

There is certainly a lot to be said for relying on pragmatism when trying to distinguish between true and false ideas. After all, you can always point to a successful test or project and demonstrate to others the validity of your beliefs. If your ideas weren’t true, they couldn’t possibly result in such success, right?

Well, maybe they could. The problem with relying heavily or even exclusively upon pragmatism is that it simply isn’t the case that only true beliefs “work” and false beliefs don’t. It’s entirely possible for the success to be the consequence of something other than your belief. For example, a doctor could prescribe a medication to a sick person and watch an illness disappear — but that doesn’t automatically mean that the medication was the cause of the improvement. Maybe it was a change in the patient’s diet, or perhaps the patient’s immune system finally won out.

In addition to false beliefs appearing to “work,” true beliefs can also appear to fail. Once again, factors which lie outside your knowledge and control can intervene to cause a project which should succeed to ultimately fail. This happens less often, especially in carefully controlled studies, and as a result this sort of Negative Pragmatism (failed tests point to false ideas) is a bit stronger. Nevertheless, it really only works after rigorous and repeated testing; a single failed test is often not enough to give up on an idea.

The problem here is that the world around us is much more complex than we tend to realize on a conscious level. No matter what we are doing, there are far more factors involved than we usually think of — many of which we just take for granted, like natural laws or our own memories. Some things (like natural laws) are indeed reliable, but others (like the human memory) are not nearly so reliable as we assume.

Because of this, it can be very difficult to tell whether or not something is “working” at all, much less why. When we attribute something that works to a single belief which we then conclude is true, we are often simplifying matters incredibly. Sometimes this isn’t a problem — and we do often have to simplify because, quite frankly, life and nature are just too complex to take in all at once.

However necessary simplification may be, it still introduces a level of uncertainty into our calculations and increases the chances of error. As a consequence, even though pragmatism can be a very practical and useful test for truth, it is still one which needs to be used with caution.

Monday, January 5, 2009

Philosophical Understanding of the World Through Naturalism and Idealism

by Giancarlo Massaro

In order to recognize the difference between Naturalism and Idealism, one must understand how each fits into a philosophical understanding of the world. Naturalism is defined as "the belief that reality consists of the natural world." Idealism, on the other hand, is defined as "a set of beliefs which are a rigid system of the way life is 'supposed to be' or 'should be.'" Both of these terms held significant value for two of the most important figures in philosophy, Aristotle and Plato.
Naturalism involves everything seen by the human eye. It is based upon space and time; nothing exists outside of them. Aristotle supported this view and further explained that motion is dominant and that everything is formed from matter. He is often called the father of science and believed in form and matter. He fought against Plato's beliefs involving Idealism because he wanted proof. Naturalism has to do with reality, perception, truth, and material; these four play a major role in the true knowledge of naturalism. Everything in naturalism is driven by the laws of nature. For example, water illustrates naturalism: it can be ice or carbonated water, but no matter what, it is still water in the eye of a human. Nature always acts with purpose, and the only way to grasp this is by discovering a true understanding of its essential purpose. Having a clear understanding of nature as it is can help determine a human's behavior.
Idealism involves everything that is real to a human: something that is felt. It is also a belief that one takes seriously and holds onto. Belief is a main point of idealism; it is what drives humans to faith and hope. The senses also play a major role in idealism; they are what identify natural things for a human being. Plato supported this view and added that nature has no morality, but that everything relies on touch, smell, and anything else involving the senses. For example, justice, courage, and honesty cannot be felt with the hands but rather with feelings. Idealism was also a "source for all things," and faith was a huge factor in it.

Plato disagreed with Aristotle because Plato refused to accept Aristotle's explanation of naturalism. Their beliefs were dissimilar, and each felt that he was right about what he had studied and come to believe.

The similarity between them is that both have to do with a human's ability to form conclusions about the things they observe. Idealism and Naturalism both rest on strong beliefs and on accounts of why they work the way they do. Plato and Aristotle were alike in the sense that both fought to prove a point about their beliefs. Both have major significance in philosophy and are studied by college students to further our knowledge of philosophy. Also, in both there are forms. For Plato there are the immaterial, proof, form, nature, potential, and faith. Aristotle had six major ones: nature/science, form, earth, water, air, and fire. All have some effect on our world today.
The difference between the two is that one (idealism) involves the senses of touch and smell, whereas the other (naturalism) includes purely vision, what is seen by the eye. Both are key factors in philosophy but are complete opposites when identifying with the outer world. They have different jobs and aim at different things involved in either nature or the world. The other difference is that it is belief (idealism) versus what is actually in front of you (naturalism). When you believe in something, you hold it sacred, whereas with what is right in front of you, there is no need for belief.
Works Cited:
1) http://www.wsu.edu/~campbelld/amlit/natural.htm

2) http://www.allaboutphilosophy.org/naturalism.htm

3) http://www.ditext.com/moore/refute.html

4) http://www.iep.utm.edu/g/germidea.htm

Sunday, January 4, 2009

What is Pragmatism?

What is Pragmatism?:
Pragmatism is an American philosophy from the early 20th century. According to Pragmatism, the truth or meaning of an idea or a proposition lies in its observable practical consequences rather than anything metaphysical. It can be summarized by the phrase “whatever works, is likely true.” Because reality changes, “whatever works” will also change — thus, truth must also be changeable and no one can claim to possess any final or ultimate truth.

Important Books on Pragmatism:
Pragmatism, by William James
The Meaning of Truth, by William James
Logic: The Theory of Inquiry, by John Dewey
Human Nature and Conduct, by John Dewey
The Philosophy of the Act, by George H. Mead
Mind and the World Order, by C.I. Lewis

Important Philosophers of Pragmatism:
William James
C. S. (Charles Sanders) Peirce
George H. Mead
John Dewey
W.V. Quine
C.I. Lewis

Pragmatism and Natural Science:
Pragmatism became popular with American philosophers and even the American public in the early 20th century because of its close association with modern natural and social sciences. The scientific worldview was growing in both influence and authority; pragmatism, in turn, was regarded as a philosophical sibling or cousin which was believed to be capable of producing the same progress with inquiry into subjects like morals and the meaning of life.

C.S. Peirce on Pragmatism:
C.S. Peirce, who coined the term Pragmatism, saw it as more a technique to help us find solutions than a philosophy or a solution to problems. Peirce used it as a means for developing linguistic and conceptual clarity (and thereby facilitating communication) in intellectual problems. He wrote:

    “Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then our conception of these effects is the whole of our conception of the object.”

William James on Pragmatism:
William James is the most famous philosopher of Pragmatism and he’s the one who made Pragmatism itself famous. For James, Pragmatism was about value and morality: the purpose of philosophy was to understand what had value to us and why. James argued that ideas and beliefs have value to us only when they work.

James wrote on Pragmatism:

    “Ideas become true just so far as they help us to get into satisfactory relations with other parts of our experience.”

John Dewey on Pragmatism:
In a philosophy he called Instrumentalism, John Dewey attempted to combine both Peirce's and James's philosophies of Pragmatism. It was thus about logical concepts as well as ethical analysis. Instrumentalism describes Dewey's ideas about the conditions under which reasoning and inquiry occur. On the one hand, inquiry should be controlled by logical constraints; on the other hand, it is directed at producing goods and valued satisfactions.

Cognitivism vs. Skepticism in Philosophy

by Farzin Mojtabai

Differing Views in Philosophy

The claim that, because people change their minds about right and wrong over time, there is nothing to figure out is not a logically valid inference. The fact that people change their minds simply demonstrates a constant desire to correct the wrong and move over to the right. The change in people's minds indicates that such a person has identified a wrong and is striving toward a better way. If people really had everything figured out and there was nothing left to know about right and wrong, then why would any change transpire at all? According to the skeptic making such an argument, there would already be a clear understanding, and change would not need to take place. That "x" changes his mind about right and wrong does not necessarily mean that right and wrong are simply matters of opinion with nothing to figure out; rather, it can mean that opinions and feelings about right and wrong vary among people and can change with time and knowledge.
A counterexample to the skeptical view would be the sacrifice of humans in some ancient cultures to please the sun gods and ensure the sun's proper rising. With modern science and knowledge of the precise orbit and pattern of the sun, it was realized that such sacrifice of human life serves no purpose. Before obtaining this knowledge of the sun's orbit, no culture could claim right or wrong, because not enough was evident and proven, meaning some validity could be ascribed to such a practice. However, after acquiring the correct knowledge of the sun, the practice was halted and many lives were ultimately spared. This clearly demonstrates that there is a great deal to know, and clearly a right and wrong in certain situations; many times, seeing change in people indicates an acknowledgement of the right and an end to the wrong.

A cognitivist is one who believes that there is a clear right and wrong to be figured out, but who does not claim to always know what that right and wrong is. Just because there is a right and wrong does not necessarily mean that it is known; rather, we are striving to figure it out. Looking at the history of mankind, it is through experience, many times painful, that one's understanding of right and wrong evolves. It is through seeing the pain and hardship of slavery and the Holocaust that we identify slave labor and genocide as being wrong. Whether we act to stop every instance of wrong is one thing, but we still figured out that they are wrong from the mental and physical abuse orchestrated upon people by other people. How could we even identify what slavery is if we didn't see and experience it? Under the skeptical view, knowledge is simply what someone thinks, and that assumption is off point. A cognitivist could claim that people did not identify the wrong in acts such as slave labor early in human history because not enough of it had occurred to develop a proper feeling that this action is wrong because it inflicts harm on people. The idea is to learn from history and formulate a better and morally acceptable way of living based on the mistakes of the past. Early humans had nothing to judge their actions against. They also lacked the clear knowledge indicating these things were wrong, which the cognitivist holds is there but simply needs to be discovered. For instance, human sacrifice, as mentioned above, came in many forms and at many times, not with the intention of harming but under misguided beliefs, such as that the sun would not otherwise rise. Once people were more privy to things like the sun's rising, or the lack of any need to sacrifice all widowed women and thereby dwindle the population, they escaped their wrongful mentality. Or people could say that as long as it benefits certain people and gets things done, it is okay to use a certain population, who are unfortunate or different, for slave labor. However, as such customs grew and spread to various lands, more and more people were suffering, and more empathy was obviously taking shape. People began to draw on experience, learn, and change their ways.
A good reason to study and form moral assessments of other cultures is to compare them against your own standards, gaining a more objective view of your own cultural practices from a different perspective. How can you ever know whether your own society is morally in the right if it is all you know? Lack of knowledge can always lead to wrong practices and injustice. To witness injustice from the view of an outsider, and simply observe immorality, gives one a new look at one's own culture that one had never thought about before. One can also morally assess other cultures in order to attempt to correct the moral wrongs one perceives subsisting in those societies. For instance, the Sudan genocide, in which 400,000 people have been slaughtered and millions displaced, can be assessed from a foreign perspective as morally reprehensible and can spur action to correct such injustice. Were it not for foreign intervention, the Rwanda genocide could still be occurring today. Above all the associations we give ourselves, we are first and foremost human beings, and objectively assessing other cultures can teach you and them a lot and advance a more universal right and wrong.
One might say that we should never, or only cautiously, assess the practices of another culture, for fear that critiquing another's culture could antagonize that culture and give the impression that you are condescending toward its practices. The cultural relativist idea that what is right is simply what a culture accepts as right aims to slam the door shut on any moral intervention among cultures. The memory of years of colonialism leaves people much more cautious when dealing and interacting with foreign cultures. They fear that a traditional cultural practice, like trying out one's sword, although appearing brutal, might be accepted, and that looking down on that culture for such actions is beyond our realm.

Saturday, January 3, 2009

What is Idealism? History of Idealism, Idealist Philosophy, Idealists

What is Idealism?:
Idealism refers to any philosophy that argues that reality is somehow dependent upon the mind rather than independent of it. More extreme versions will deny that the “world” even exists outside of our minds. Narrow versions argue that our understanding of reality reflects the workings of our mind first and foremost — that the properties of objects have no standing independent of minds perceiving them.
Important Books on Idealism:
The World and the Individual, by Josiah Royce
Principles of Human Knowledge, by George Berkeley
Phenomenology of Spirit, by G.W.F. Hegel
Important Philosophers of Idealism:
Plato
Gottfried Wilhelm Leibniz
Georg Wilhelm Friedrich Hegel
Immanuel Kant
George Berkeley
Josiah Royce
What is the “Mind” in Idealism?:
The nature and identity of the “mind” upon which reality is dependent is one issue that has divided idealists of various sorts. Some argue that there is some objective mind outside of nature, some argue that it is simply the common power of reason or rationality, some argue that it is the collective mental faculties of society, and some focus simply on the minds of individual human beings.
Platonic Idealism:
According to Platonic Idealism, there exists a perfect realm of Forms and Ideas, and our world merely contains shadows of that realm.
Subjective Idealism:
According to Subjective Idealism, only ideas can be known or have any reality (it is sometimes identified with solipsism).
Transcendental Idealism:
Transcendental Idealism, developed by Kant, argues that all knowledge originates in perceived phenomena which have been organized by categories.
Absolute Idealism:
According to Absolute Idealism, all objects are identical with some idea, and ideal knowledge is itself the system of ideas. It is also known as Objective Idealism and is the sort of idealism promoted by Hegel. Unlike the other forms of idealism, it is monistic — there is only one mind in which reality is created.

Friday, January 2, 2009

Subjective Idealism

by: Braedon Betzner

According to the Encyclopedia Britannica, Berkeley believed that "nothing exists except minds and spirits and their perceptions or ideas" (subjective). He suggested that if an idea is not perceived by a mind, then the idea does not exist. I beg to differ: if no one perceives a thing or idea, that does not mean it does not exist. It can still exist without any individual perceiving it. Take, for instance, the idea of whether God truly exists. Even if one individual does not perceive that He exists, the idea of God can still be shown to exist by others. It can be shown by unexplained miracles and the multiple chances given to undeserving people. Lisa Downing states that Berkeley argues for the existence of God by stating, "Ideas which depend on our own finite human wills are not (constituents of) real things."
In order for an idea to be perceived, it must be perceived through an individual's sensations. What if an individual lacks one or more of the senses needed to perceive an idea? Suppose, for example, that someone has been blind and deaf since birth: they cannot perceive an idea visually or by hearing it. If there were a door in this blind and deaf individual's way, could he perceive it? If he could not see it in his way or hear it open and close, would he perceive it? As mentioned in the Encyclopedia Britannica, the "reality of the outside world is contingent on a knower" (subjective). Therefore, if the blind and deaf individual cannot see or hear the door, then the idea of the door does not exist in his reality. However, that does not mean it does not exist in another individual's reality. If I saw this blind and deaf man about to run into the door, I would be aware of the situation through my senses: I would be able to see that he was about to hit the door, and I would be able to warn him about it. So just because someone cannot perceive an idea does not mean it does not exist. It may simply not exist in their reality, while to others it does.
In regard to Berkeley's esse est percipi proposition, I still do not think it is valid. If we do not perceive an idea, then someone else might perceive it for us and make it exist.

Downing, Lisa. "George Berkeley." Stanford Encyclopedia of Philosophy. 10 Sep. 2004. Stanford University. 07 Aug. 2008.
"Subjective Idealism." Encyclopedia Britannica. 2008. Encyclopedia Britannica Online. 06 Aug. 2008.

Skepticism

Skepticism is the Western philosophical tradition that maintains that human beings can never arrive at any kind of certain knowledge. Originating in Greece in the middle of the fourth century BC, skepticism and its derivatives are based on the following principles:

  • There is no such thing as certainty in human knowledge.
  • All human knowledge is only probably true, that is, true most of the time, or else not true at all.

Several non-Western cultures have skeptical traditions, particularly Buddhist philosophy, but properly speaking, skepticism refers only to a Greek philosophical tradition and its Greek, Roman, and European derivatives.

The school of Skeptic philosophers was called the "Skeptikoi" in Greece. The word is derived from the Greek verb "skeptomai," which means "to look carefully, to reflect." The hallmark of the Skeptikoi was caution; they refused to be caught in assertions that could be proven false. In fact, the point of the entire system of skeptic philosophy was to present all knowledge as opinion only, that is, to assert nothing as true.

In this, they were firmly planted in a tradition started a century earlier by Socrates. Socrates claimed that he knew one and only one thing: that he knew nothing. So he would never go about making any assertions or opinions whatsoever. Instead, he set about questioning people who claimed to have knowledge, ostensibly for the purpose of learning from them, using a judicial cross-examination called elenchus. If someone made an assertion, such as "Virtue means acting in accordance with public morality," he would keep questioning the speaker until he had forced him into a contradiction. As in a court of law, this contradiction proved that the speaker was wrong in some way; in this case, that the speaker did not really know what he claimed to know. If an assertion can be worked into a contradiction, the original assertion was wrong. While Socrates never claimed that knowledge is impossible, still, at his death, he never claimed to have discovered any piece of knowledge whatsoever.

After its introduction into Greek culture at the end of the fourth century BC, skepticism influenced nearly all other Greek philosophies. Both Hellenistic and Roman philosophies took it as a given that certain knowledge was impossible; the focus of Greek and Roman philosophy, then, turned to probable knowledge, that is, knowledge that is true most of the time.
Christianity, however, introduced a dilemma into Greek and Roman philosophies that were primarily based on skeptical principles. In many ways, the philosophy of Christianity, which insisted on an absolute knowledge of the divine and of ethics, did not fit the Greek and Roman skeptical emphasis on probable knowledge. Paul of Tarsus, one of the original founders of Christianity, answered this dilemma simply: the knowledge of the Romans and Greeks, that is, human knowledge, is the knowledge of fools. Knowledge that rejects human reasoning, which, after all, leads to skepticism, is the knowledge of the wise. Christianity at its inception, then, had a strong anti-rational perspective. This did not, however, make the skeptical problem go away. Much of the history of early Christian philosophy is an attempt to graft Greek and Roman philosophical methods and questions onto the new religion; the first thing that had to go was the insistence on skepticism and probable knowledge. So early Christian thinkers such as Augustine and Boethius took on the epistemological traditions of Greece and Rome to demonstrate that one could arrive at certain knowledge in matters of Christian religion. Augustine devoted an entire book, "Against the Academics," to proving that human beings can indeed arrive at certain knowledge.
Skepticism, however, was radically reintroduced into Western culture in the sixteenth and seventeenth centuries. The break-up of the Christian church and the bloodshed that followed it led people to seriously question religious and philosophical traditions that had long gone unquestioned. Thinkers such as Montaigne in France and Francis Bacon in England took as their starting point the idea that they knew nothing for certain, particularly religious truth. Montaigne would invent an entirely new literary format which he called the essai, or "attempt, trial"; this is the origin of the modern-day essay. The "essay" took as its starting point the idea that the writer doesn't really know what he's talking about. Montaigne would propose an issue, walk around the issue for a while and consider various alternatives, and then end pretty much where he started: uncertain what conclusion to draw. This is why he called his writings "attempts" (essais in French), for they were attempts at drawing conclusions rather than finished products.

The most radical introduction of Greek skeptical traditions back into the Western tradition occurred in the works of Blaise Pascal and René Descartes. Both thinkers refused to accept any piece of knowledge whatsoever as true, and both tried to rebuild a Christian faith based on this radical questioning of truth.

Descartes set about reinventing Western epistemology with a radical perspective: what if nothing were true? How, if you doubted everything, could you find something—anything—that was true? His conclusion, of course, was the famous cogito: Cogito ergo sum, or, "I think, therefore I am." From this base he built up a series of other true propositions, including the existence of God. In many ways, Descartes was trying to accomplish the same thing that Augustine, Boethius, and other early Christian thinkers had attempted: how do you address the possibility, firmly entrenched in the Western tradition, that there may be no such thing as certain knowledge? How do you reconcile that with religious faith? For that was Descartes' ultimate goal: to prove the existence of God and the validity of the Christian religion.
Although he saw himself as answering old and vexing questions in the Christian tradition, he actually created a radically new way of approaching the world: systematic doubt. The hallmark of Cartesianism is setting up a formal system of doubt, that is, of questioning all propositions and conclusions using a formal system. Once one has arrived at a certain piece of knowledge, that piece of knowledge then becomes the basis for clearing up other doubts. Descartes' systematic doubt became the basis of the Enlightenment and the modern scientific tradition. One begins with a proposition, or hypothesis, that is in doubt and then tests that proposition until one arrives, more or less, at a certain conclusion. That does not, however, end the story. When confronted by the conclusions of others, one's job is to doubt those conclusions and redo the tests. Once a hypothesis has been tested and retested, one can conclude that one has arrived at a "scientific truth." That, of course, doesn't end it, for all scientific truths can be doubted sometime in the future. In other words, although scientists speak about certainty and truth all the time, the foundational epistemology is skeptical: doubt anything and everything.

Skepticism

by Gregory Alter

Skepticism is a key tool in philosophy. Socrates used it to uncover untruths, false beliefs, opinion and speculation. It is an attitude of doubt or incredulity towards knowledge or assertions. It is closely connected with understanding the limitations of knowledge.

When Descartes arrived at his famous philosophical statement “Cogito ergo sum” (Latin: “I think, therefore I am”), he was himself skeptical about how we could know anything. How can we know what the truth is? How can we know that we really, truly know anything? All might be an illusion. Descartes asked himself, “How can I know that I exist?” In answer, he discovered that since he was able to question his own existence, he had to exist in order to do so.

In a way, Descartes' discovery was the ultimate in skepticism, and indeed it led directly to an ultimate answer: no longer can we doubt our own existence. But that does not help us in determining the existence of everything else. What if the whole universe is an illusion? Alas, the answer to that one is not so forthcoming.

Still, it is important in this modern world to question everything we are told, especially anything that involves common sense. This may seem a paradox, as many people believe philosophy to be 'common sense,' but in reality common sense is ingrained in us by the society in which we live and is connected with culture and belief systems. The danger of common sense is that one can be swept into believing that common sense answers everything and tells us all we need to know about reality.

How do we know that we 'need' to work, for example? Those who do not work may in fact be happy not working (assuming they have social benefits or some other means to sustain themselves), but they will find themselves subject to a large amount of pressure from society. Helpful friends and relatives will hint, help, and pester them to work, and try to find them a 'job'.

Our ancestors were hunters, and it is now believed that hunter societies had a large amount of free time for themselves and for social activities. Would that not be ideal? Free time to do what you like? To enjoy life? These days the possibility of returning to a hunter-gatherer society is nonexistent, but the realization that there are other ways of structuring society is nevertheless key to the process of overcoming fixed beliefs. In actuality, growing food requires a large amount of labor and time in comparison to hunting.

It was the transition to settled farming communities that created the need for laborers. In early communities this was a shared task, but as communities grew larger and society more stratified, the burden of the work fell to the lowest and poorest. This state of affairs was amplified throughout human history, culminating in the serfs of the Middle Ages.

The consumerism of modern times, one might think, differs from these earlier and more barbaric times, but a close look at the distribution of wealth shows that the system is different yet we are still ruled by an elite few. We do not notice because we have choices, a myriad of Orwellian controls, and a myriad of Bradburyian distractions (video walls are almost upon us), but that is the way things are. Capitalism takes raw materials and labor and combines them to create a product that can be sold. The greater good in capitalism is profit, and for the creation of that profit a poorer class of people is necessary to provide cheap labor (to keep costs down and profits up). It also needs a healthy, mobile workforce in order to function (hence we are better off than the serfs of earlier systems). Our whole society is crafted to create the appearance of choice; it is a 'you can do anything you want to' Walt Disney society, which at the same time ignores the deep social problems that guarantee people cannot have that freedom at all. If there were no poor, then who would make the goods?

The educational system ingrains in us the sets of beliefs that society wants us to have; these are convenient for society, but not necessarily for the individual. Those who do not go along with society are classed as deviants, and society tries to help them; if they cannot be helped, it shuns them. To return to the example of someone who doesn't work: it is assumed that he or she would be terribly bored and needs direction and structure in life. Even if that individual had their own sense of purpose, and plenty with which to fill the day, others would not be able to recognize it, because society has taught them a system of beliefs in which the only purpose an individual should have is to work.

The danger is that most people believe their common sense is an innate sense of reality. Common sense represents common reality, the consensus of the mob; it is a form of mass hallucination.

From this one example we have seen the power of skepticism to unravel commonly held beliefs. We may not necessarily be able to do anything about them; we may even have to unravel our own beliefs, and then our beliefs about our beliefs. But at least we know that it is an illusion, and we can draw comfort from the wisdom that being skeptical grants us. Skepticism scrapes away at everything like an archaeologist brushing fine layers of soil off a piece of pottery. It is an important and useful tool in the philosopher's armory.

Rationalism Vs. Empiricism

A rundown of epistemology in modern philosophy.
Europe's 16th century was a time of great religious upheaval and disorder. Ardent opposition to modifying medieval customs resulted in the Thirty Years War, strict limitations on learning, and significant resistance to new ideas. Monumental discoveries, like those of Copernicus, Kepler, and Galileo, were swept under the table because they delivered a crushing blow to the geocentric view--the view preferred by the church. The curiosity of such men, however, must have been contagious--for emerging from the flames of the Reformation was the individual: eager to learn and willing to doubt. The philosophy of the 17th-century Frenchman René Descartes, which culminated with his momentous cogito, ergo sum, embodied the pervasive intellectual transformation of the era. For Descartes, everything was open to doubt except conscious experience, meaning the common denominator for all of our knowledge is ourselves. Re-emphasizing the importance of the individual hearkened back to Socrates, who claimed we should know ourselves before tackling loftier metaphysical issues--and this is exactly the turn philosophy then took, which is why Descartes is often considered the "father of modern philosophy."
In trying to better understand the self, it seemed natural for Descartes and succeeding philosophers to study the nature of knowledge in an attempt to understand how it is we know what we know. This branch of philosophy is called epistemology, and during this time it replaced metaphysics and ontology as the most fervently pondered branch of philosophy. The history of epistemology has since seen some of its most prominent characters divided into two camps: the rationalists and the empiricists.
In accepting our existence as our founding axiom, Descartes argued that we are able to syllogistically build more advanced truths by deducing from already established truths--that all possible knowledge of the world is inherently available, regardless of our relationship with the world itself. This is essentially what rationalists believe: that knowledge is inborn and is simply waiting for us to seek it out. Descartes, along with Spinoza and Leibniz, is typically considered one of the "Continental Rationalists." Plato's universals and particulars are an example of classical rationalism. For Plato we are able to conceive the perfected version of something, a triangle for instance, despite never encountering one in the external world--while our senses may fool us into believing we are experiencing a perfect triangle, there is likely always an out-of-place molecule or some interference making it imperfect. The reason we are able to conceptualize a perfect triangle without ever experiencing one with our senses is that we have mentally tapped into the realm of "universals," which holds the perfect form of all things. 'Particulars' are the mere imperfect adaptations of universals which we experience in the material world. Plato explains these ideas beautifully in his "allegory of the cave" in The Republic.
It follows then that empiricism, the seemingly diametric opposite of rationalism, holds that it is sensory data rather than inborn mental faculties that create knowledge. The spirit of the doctrine is expressed in the claim associated with Aristotle: nihil in intellectu quod prius non fuerit in sensu--there is nothing in the intellect that was not first in the senses. Where Plato was a romantic, Aristotle was a scientist; he felt it imperative that we trust our senses, for what else have we to trust? It is this sort of ideology that permeates both Aristotle's philosophy and empiricism. It was the English philosopher John Locke in the 17th century, however, who spearheaded the empiricist movement in modern philosophy. There is a significant correlation between Aristotle's and Locke's exposure to science and their empirical views. Aristotle, unlike Plato, was extremely involved with science and with studying things of the world; Locke was living at the crest of the Scientific Revolution. Locke's An Essay Concerning Human Understanding was a blatant rejection of Descartes's rationalist views. It was in this essay that Locke declared that in its natural state the mind is a tabula rasa, or blank slate. This is the fundamental empiricist view, that all of our knowledge is derived exclusively from experience, or a posteriori; although there is much debate among empiricists as to how exactly our perception of an object is transformed into knowledge.
Other notable empiricists following Locke were Bishop George Berkeley and David Hume. Berkeley was a subjective idealist, stating we can only know our own mind and ideas. He believed that objects exist only insofar as someone is able to perceive them; and that God was the eternal perceiver--therefore assuring us that objects still exist when humans are not perceiving them, and that trees do in fact make noise when no one is around to hear them fall.
David Hume, often dubbed "the ultimate skeptic," argued that since we can only know things inductively, we are unable to detect the effect in the cause. While we have habits, for Hume, we aren't actually able to know anything. Locke, Berkeley, and Hume became known as the "British Empiricists."
Hume's proclamations were so drastic that they disturbed the "dogmatic slumbers" of a frail, gentle man who also later became recognized as having one of the greatest minds of all time. Immanuel Kant's newfound inspiration led to the completion of his Critique of Pure Reason which, among other things, bridged the gap between rationalism and empiricism. Kant believed that knowledge begins with experience, but that experience awakens our innate mental faculties. Our senses and our mind are both active agents in creating knowledge: "Thoughts without content are empty; intuitions without concepts are blind."
I adhere, as I assume most post-Kant thinkers do, to Kant's merging of the two views. I know it is possible for my senses to be tricked, by optical illusions or what have you, but I still believe that can be accounted for and that we should trust them; we just shouldn't deem them the sole basis of all knowledge. With no raw data, I do not think we should participate in speculative metaphysics--we cannot rationally prove something like the existence of God without empirical evidence--a belief in God requires a leap of faith and has no business involving itself with science or logic (not to say that science and logic don't require a leap of faith themselves). Today there is a general respect for science that would make an empirical view easier to hold with confidence; but the mind is far too mysterious and its power too riveting. When I look at a painting, I'm looking at millions of tiny droplets of color, yet my mind is able to decipher these into discernible figures and shapes. Mathematics is a good way of showing how rationalism and empiricism are both important components of knowledge. Some rationalists believe that we are able to understand something like the number three as being prime and greater than two a priori. This is something I disagree with; numbers are not things in themselves. I believe they are man-made constructs that we use to more conveniently operate on and organize our empirical perceptions. Once we are able to fathom this established concept of numbers, however, our minds are then able to rationally create and understand numerical patterns--using these abstractions, which we know to already be accepted as true, we are able to use deduction the way Descartes proposed to create new, unique truths. It is in this manner that Sir Isaac Newton had the ability to sit down and invent calculus: he didn't find it buried under a rock; he created calculus by combining his previous knowledge with his innate mental faculties.
It is too far-fetched to believe the mind is an inactive participant in the production of knowledge, and to be honest, I probably would have had to live in the 17th century to understand why this conclusion took so long to manifest itself through Kant. In football, say, when a quarterback throws a touchdown to his wide receiver, there is no debate as to which player was the active participant; it is simply accepted that both players are responsible for the successful maneuver. While my example may be a bit lousy, if you ask me, the rationalist/empiricist dichotomy is no dichotomy at all: the senses and the mind are both undoubtedly necessary.

Rationalists Vs. Empiricists: The Question of Innate Ideas

By Matthew Ryan

Two major schools of thought in philosophy are empiricism and rationalism. Adherents of empiricism include the famous philosophers John Locke and David Hume, and to a lesser extent George Berkeley. Rationalism, on the other hand, was supported by such men as René Descartes, Spinoza, and Leibniz.
The central tenet of empiricism is that what an individual knows must be grounded in sensory experience. Color, shape, sound, and number are all derived from sensory perceptions. And, according to empiricists, that is all we can ever hope to have knowledge of or study. The intent of empiricism is quite benign: it arose as a reaction to Cartesian dualism, a theory that posited a separate external world about which we could have no knowledge. Empiricists cut off this mysterious external world from study. Similarly, they did away with Plato's Theory of Forms and other such extraneous concepts. For an empiricist, the famous question "How many angels can fit on the head of a pin?" is a useless inquiry without an answer. So too are inquiries which are not based strictly on sensory experience.

Rationalism vs. Empiricism

Theories of knowledge divide naturally, theoretically and historically, into the two rival schools of rationalism and empiricism. Neither rationalism nor empiricism disregards the primary tool of the other school entirely. The issue revolves around beliefs about necessary knowledge and empirical knowledge.

1. Rationalism

Rationalism holds that some ideas or concepts are independent of experience and that some truths are known by reason alone.

a. a priori

This is necessary knowledge, neither given in nor dependent upon experience; it is necessarily true by definition. For instance, "Black cats are black." This is an analytic statement and, broadly, a tautology; its denial would be self-contradictory.

2. Empiricism

Empiricism holds that no ideas or concepts are independent of experience and that truth must be established by reference to experience alone.

b. a posteriori

This is knowledge that comes after, or is dependent upon, experience. For instance, "Desks are brown" is a synthetic statement. Unlike the analytic statement "Black cats are black," the synthetic statement "Desks are brown" is not necessarily true unless all desks are by definition brown, and to deny it would not be self-contradictory. We would probably refer the matter to experience.

Since knowledge depends primarily on synthetic statements -- statements that may be true or may be false -- their nature and status are crucial to theories of knowledge. The controversial issue is the possibility of synthetic necessary knowledge -- that is, the possibility of having genuine knowledge of the world without the need to rely on experience. Consider these statements:

1) The sum of the angles of a triangle is 180 degrees.

2) Parallel lines never meet.

3) A whole is the sum of all its parts.

Rationalism may hold these to be synthetic necessary statements, universally true, and genuine knowledge; i.e., they are not merely empty like analytic or tautologous statements ("Black cats are black") and are not dependent on experience for their truth value.

Empiricism denies that these statements are synthetic and necessary. Strict empiricism asserts that all such statements only appear to be necessary or a priori. Actually, they derive from experience.

Logical empiricism admits that these statements are necessary, but only because they are not really synthetic statements but analytic statements, which are true by definition alone and do not give us genuine knowledge of the world.

GENUINE KNOWLEDGE

Rationalism includes in genuine knowledge synthetic necessary statements (or, if this term is rejected, then those analytic necessary statements that "reveal reality" in terms of universally necessary truth; e.g., "An entity is what it is and not something else.")

Empiricism limits genuine knowledge to empirical statements. Necessary statements are empty (that is, they tell us nothing of the world).

Logical empiricism admits as genuine knowledge only analytic necessary statements ("Black cats are black") or synthetic empirical statements ("Desks are brown"). But the analytic necessary statements, or laws of logic and mathematics, derive from arbitrary rules of usage, definitions, and the like, and therefore reveal nothing about reality. (This is the antimetaphysical point of view.)

Nominalism / Realism

Nominalism and realism are theories related to epistemology (the study of knowledge). Both positions are anchored in their approaches to the concept of universals.

Realism is the view that there are universals that are related to, but exist apart from, thoughts and individual objects in our world. Consider two white horses, for example. Realism asserts that there is a universal concept of "whiteness" which the two white horses share. Plato's 'theory of forms' is the classic representation of realism. Plato argued that for every object in the physical world, there is a more perfect 'form' or 'idea' that exists in another realm. Plato never did identify where these forms or ideas existed. The fifth-century theologian Augustine, however, modified Plato's realism by stating that universals exist in the mind of God.

Nominalism, on the other hand, asserts that reality only exists in particular objects. Universals, therefore, have no reality apart from objects. Thus, there are no concepts like “whiteness” that exist in another dimension. Two objects like horses or rocks may share “whiteness” but this whiteness is located in particular objects and not in some independent concept of “whiteness” that exists somewhere else. Forms of nominalism can be found in the views of William of Ockham, George Berkeley, and David Hume.

The concepts of nominalism and realism were an important part of medieval philosophy and theology. In general, the time period of 1200–1350 was a period in which realism was held. The time period of 1350–1500 was dominated by nominalism.


Thursday, January 1, 2009

Philosophy

Philosophy is the study of general problems concerning matters such as existence, knowledge, truth, beauty, justice, validity, mind, and language. Philosophy is distinguished from other ways of addressing these questions (such as mysticism or mythology) by its critical, generally systematic approach and its reliance on reasoned argument. The word philosophy is of Ancient Greek origin: φιλοσοφία (philosophía), meaning "love of wisdom."

  • 1 Branches of philosophy
  • 2 Western philosophy
    • 2.1 History
      • 2.1.1 Ancient philosophy (c. VI B.C.–c. IV A.D.)
      • 2.1.2 Medieval philosophy (c. V A.D.–c. 1400)
      • 2.1.3 Renaissance (c. 1400–c. 1600)
      • 2.1.4 Early modern philosophy (c. 1600 – c. 1800)
      • 2.1.5 Nineteenth century philosophy
      • 2.1.6 Contemporary philosophy (c. 1900 – present)
    • 2.2 Main Theories
      • 2.2.1 Realism and nominalism
      • 2.2.2 Rationalism and empiricism
      • 2.2.3 Skepticism
      • 2.2.4 Idealism
      • 2.2.5 Pragmatism
      • 2.2.6 Phenomenology
      • 2.2.7 Existentialism
      • 2.2.8 Structuralism and post-structuralism
      • 2.2.9 The analytic tradition
      • 2.2.10 Moral and political philosophy
        • 2.2.10.1 Human nature and political legitimacy
        • 2.2.10.2 Consequentialism, deontology, and the aretaic turn


    • 2.3 Applied philosophy
  • 3 Eastern philosophy
    • 3.1 Babylonian philosophy
    • 3.2 Chinese philosophy
    • 3.3 Indian philosophy
    • 3.4 Persian philosophy

See more details at Wikipedia.