Monday, December 31, 2007

Sufficient vs Necessary

"A is sufficient to make B true" means as long as A, then B, regardless of any other contingency.
"A is necessary for B to be true" means for B to be true, then A. Other contingencies could still make B untrue, however.

A condition can be necessary AND sufficient, which makes such conditions of primary importance.

For example:
You're going to the fair. It costs 50 cents to get in.

It is SUFFICIENT to give the ticket taker 50 cents to get in. I suppose we're ignoring cases where you're on America's Most Wanted and he recognizes you, thus preventing you from entering... so technically speaking, this is not a perfect example.

Having three sides is sufficient for a closed shape to be a triangle. It is also necessary.

Hmm... I want a perfect example of something that is just sufficient, however...
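The distinction above can be sketched in code. This is just a toy illustration of my own (the function names and the list of shapes are invented for the example, not part of any standard terminology): "A is sufficient for B" is the implication A implies B, and "A is necessary for B" is the converse, B implies A.

```python
# Toy sketch: sufficient and necessary conditions as logical implications.

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def is_sufficient(cases, a, b):
    """A is sufficient for B if every case satisfies a(case) -> b(case)."""
    return all(implies(a(c), b(c)) for c in cases)

def is_necessary(cases, a, b):
    """A is necessary for B if every case satisfies b(case) -> a(case)."""
    return all(implies(b(c), a(c)) for c in cases)

# Closed shapes described as (name, number_of_sides, is_triangle).
shapes = [
    ("triangle", 3, True),
    ("square", 4, False),
    ("pentagon", 5, False),
]

three_sides = lambda s: s[1] == 3
triangle = lambda s: s[2]

# Having three sides is both sufficient and necessary for being a triangle.
print(is_sufficient(shapes, three_sides, triangle))  # True
print(is_necessary(shapes, three_sides, triangle))   # True
```

The fair example would be sufficient but not necessary only if there were some other way in (a back door, say), which is why the triangle makes the cleaner case.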

Sunday, December 30, 2007

Occam's razor: is it valid?

Occam's razor is applied to a situation in which two competing theories both account for all observable data; it eliminates the more complex one in favor of the simpler. For example, if dualism and monism can both fully account for existence, dualism must be discarded in favor of monism.

Ontologically, this tool is a nightmare. It is a kind of tacit acceptance that there IS NO WAY to know whether one theory is more true than the other.

Occam's razor is also an exercise in pragmatism. Recall that pragmatism is a method that examines the practical consequences of assuming an idea to be true. Since ontologically there can never be certainty, we should consider the fact that people need a systematic way to understand the world. Therefore it would practically be of use to them if we were to eliminate needlessly complicated theories.

Unfortunately it is a tool that can only be applied when applicable. That's always a question of debate, however: when is it applicable? When two or more theories account for ALL observable data. If one theory accounts for the observable data BETTER than another, then that theory is obviously preferable; Occam's razor need not be applied.

I was wondering: what if two theories each successfully account for only 90% of the data? Obviously something's off, and you should probably make more observations until one theory gets really close to 100%, or change one of the theories so that it accounts for the remaining 10%.

This tool is both ontological and epistemological, because it can apply to theories about the process of attaining knowledge as well as to theories about the world as we believe it to exist.

I think I ought to give an example of using Occam's razor in an epistemological situation, just to cement the title of my blog a little bit more.

The two main theories for how knowledge is attained are:
1) empiricism (knowledge derived from sense experience, associated with "Blank Slate" and Locke)
2) innate ideas (ideas present at birth, associated with idealism and Plato)

Unfortunately I don't think these ideas are mutually exclusive (nature and nurture, anyone?). Furthermore, it seems that there are instances for both that problematize them. The two theories must satisfyingly account for all observable data before Occam's razor can be applied.

Darn, that didn't work out. Oh well, at least I tried.

epistemology, part 2

I think I actually am a hypocrite, with respect to the title of my blog. If you go back to my original post, you will see the significance of it in relation to my beliefs, in the sense that all things that are not as certain as that one thing Descartes showed (Cogito ergo sum) ought to be put into a different category for consideration and debate.

Allow me to clarify:
ONTOLOGY that we can be ABSOLUTELY CERTAIN about must be restricted to that one conclusion Descartes made.
ONTOLOGY actually covers more than that, however. I can imagine the world as composed of existence in itself, giving rise to my sensations, etc. This is an a posteriori conclusion that comments upon ontology, but it assumes that EMPIRICISM is a valid tool for drawing conclusions. The conclusion Descartes made does not require this empiricism, and so is the stronger conclusion.

I was tempted to say that because empirical conclusions are less certain, a healthy amount of skepticism could never hurt, and so I would be justified in thinking about empirical conclusions in the larger frame of EPISTEMOLOGY. Ontological conclusions outside of Descartes's cogito ergo sum are not certain, but at least I could investigate WHY I feel so certain of some empirical conclusions and not of others.

So you see, I divided the universe into absolute ontology and epistemology, with epistemology encompassing all empirical thought. I am a hypocrite because sometimes I made an ontological query and used empirical logic without considering the larger epistemological framework I wished to apply in the beginning.

From now on, I will do my best to rectify my errors and not be a hypocrite.

Mind-brain identity theory

This is a somewhat ontological topic, so in one respect it shouldn't belong in my queries.
However, I have successfully snuck around being a hypocrite because I suggested that in order to be able to answer ontological questions, I could assume an empirical method. Otherwise I'm just stuck at Hume's roadblock.

The mind-brain identity theory suggests that minds and brains are NOT two different things; but rather, that all references to these are really to brains and brain states.

This theory stands in favor of a monistic materialism and in opposition to dualism. Can it be empirically argued for? Not really, since to prove this would be to disprove dualism, which can't be done since spiritual reality cannot be empirically observed.

I can use Occam's razor to get rid of the spirit world, though, but I haven't thought about Occam's razor in depth. So what else can I talk about here?

It is related to eliminative materialism, which predicts that at some point in the future, language will change to more accurately represent what is going on in the brain. Instead of saying "I am happy" you might say "I just had a bunch of endorphins released in sectors A4, B95..."
(This is kind of an extreme case, but the idea is that as science moves forward, the gap between sensation and corresponding reality will become smaller.) If the eliminative materialists are right, dualism would eventually become a less popular perspective, less intuitive, and be de facto eliminated.

However, if I were to approach this subject from the same level of skepticism as I have approached, say, the existence of God, then I would be forced to conclude that as this is merely a theory, there will never come a point where we can for certain say one system is true over another.

Saturday, December 29, 2007

Pragmaticism and Pragmatism

I was reading a thematic overview of philosophy by Donald Palmer and came across a really nice, friendly definition of pragmaticism/pragmatism I thought I should share.

PRAGMATISM was coined by Charles Peirce. It is a method used to clarify thought processes by tracing out the practical consequences of beliefs in various ideas. For example, postulating the existence of a loosely defined ultimate cause is an unprovable theory that will have no effect on your actions in daily existence. No practical consequence; so dismiss the question. Hey! We just used Peirce's pragmatism!

However, there's a little story about this word, which is important because its meaning changes. Stories are so nice because they help you remember better, too! It goes like this:

A man named William James used this word to make a new argument in favor of religious belief. Peirce was pretty upset about how James used it, especially since he wanted to use it to void questions concerning religion, rather than reinforce a particular stance in the controversy. So Peirce changed his word to "PRAGMATICISM," which he thought was "ugly enough to be safe from kidnappers." To this day pragmatism is associated with James's use of it, and pragmaticism with Peirce's.

I'll just quote James who says: "On pragmatic principles, if the hypothesis of God works satisfactorily in the widest sense of the word, it is true." As long as it "will combine satisfactorily with all the other working truths" this hypothesis seems to "work" pretty well. Subjectively true, then; not rigorously so, but conveniently so.

I'm still a bit confused about the distinction between Peirce's pragmaticism and James's pragmatism, however. As I see it, James successfully used Peirce's method to show something Peirce didn't like. That would imply that Peirce's new term is incompatible with the conclusion James made in the preceding paragraph.

Here's a quote from wiki, which suggests the terms are actually interchangeable:
"Whether one chooses to call it "pragmatism" or "pragmaticism", and Peirce himself was not always consistent about it even after the notorious renaming, his conception of pragmatic philosophy is based on one or another version of the so-called "pragmatic maxim"

To conclude, I should also mention that I initially decided to discuss these two words because I thought they were related to pragmatics; THIS IS NOT TRUE.
Pragmatics is not related to pragmatism/pragmaticism, except for the fact that they share a common root.

Here's that root, courtesy of thefreedictionary.com:

[Latin prāgmaticus, skilled in business, from Greek prāgmatikos, from prāgma, prāgmat-, deed, from prāssein, prāg-, to do.]

If you're interested in my take on pragmatics, please refer back to my previous post on that. As I said before, it's been giving me a headache, so if anyone would care to take a shot at explaining it, I'd appreciate it.

Friday, December 28, 2007

Agnosticism vs. Atheism

I have had difficulty deciding whether I should label myself as an agnostic or an atheist. The problem, of course, lies in definition. Technically speaking, I am an agnostic. I see it as the only rational choice.

Here's the religious story of my life. I was raised in a nonreligious environment and found no reason to believe in God. Science seemed to do a good job of answering my questions and I didn't really see a reason to delve any further. As I learned more science, my beliefs were cemented even further. At this stage of my life, I was an atheist (I would have denied the existence of God).

Then I transitioned to agnosticism when I learned that scientifically speaking, everyone should be an agnostic (there is no absolute proof for or against God's existence). Even if there were scientific evidence for God, his existence would still just be a THEORY, just like the theory of evolution... which is still called a theory, even though the evidence has reached a point where, if you round a little, it's pretty much fact. My point being, stating that the existence of God is a theory would imply that there is SOME level of doubt, however small.

Then a wavy bit came. I started to consider that if I pretended to believe in a Christian God, my life would be more complete; I'd be happier. This is the closest I came to believing. This is a bit of a cheat because I was divvying myself up into an "emotional" bit and a "rational" bit, a tactic that I have become increasingly aware of as I become older. At this stage I would have denied any affiliation whatsoever; I was still trying to make up my mind.

Anyways, the rational bit won after a bit of a struggle. What ended the conflict was reading Dawkins's The God Delusion and Life of Pi. Life of Pi suggests that one should believe whatever one prefers (and thus supported my belief that being Christian just to be happier is a good enough reason), but in the end Dawkins's firm statements had a more profound effect on me. Furthermore, I think Life of Pi was promoting existentialism; religion wasn't being taken too seriously at all.

However, one statement in Life of Pi had a serious effect on me. It was along the lines of "choosing agnosticism as a way of life is like choosing immobility as a means of transportation." It stuck, and I felt compelled to choose between atheism and theism. Dawkins also expressed annoyance at agnostics, so reading The God Delusion didn't help me there, either.

So I decided that I was an atheist. But after thinking a lot about philosophical arguments, and definitions, I finally changed one more time, back to agnosticism, and I sincerely doubt that I will change again in my lifetime. I've simply thought too hard about this, and believe I have come to the most sound, rational, conclusion. So let's see... here are the terms.

ATHEIST: somebody who does not believe in God or deities
AGNOSTIC: somebody who believes that it is impossible to know whether or not God exists

Next, let's define God. I'm talking about ultimate cause God, not necessarily the Christian one. An eternal being that created/started everything, or is the totality of reality, etc.

Postulating the existence of such an eternal being is not even a theory, it's a guess. Anything eternal cannot be measured; human beings are limited, mortal beings. There's no scientific evidence that can point either way... infinity is simply too far off from the limited length of time we can infer actually happened, which is just the age of the universe (about 13.7 billion years). If we found a way to go back to the big bang and hit a time wall, it wouldn't prove that an ultimate cause started it. Heck, it might've been us hitting the time wall!

My point being, this doesn't even approach what we can conceive as being an experimentally provable theory. Empirically, it's nonsense (as Hume would have said). I see the problem of God having to do with the extensive connotation of the word, which has deep roots in the history of moral conduct. It is hard to reject this thing completely, because by being nice and decent human beings in the U.S., we necessarily participate in a Christian moral scheme.

Anyways, to wrap up this post... a loosely defined eternal being might or might not exist; there is simply no way to prove it either way unless you define it a bit better with respect to time and such. With respect to this kind of God, I am a true agnostic.

With respect to a Christian God, it is different. Consider the following two theories:
"The God that is described in the New Testament does not exist."
"Evolution results in the creation of new species from common ancestors."

Personally, I think an empirical thinker should come to the conclusion that a lot of evidence supports both of these theories. The fact that they're theories does mean a little bit of doubt is involved, however. This is a lot less doubt than is involved for the loosely defined God, as mentioned earlier.

So that settles it. I am an agnostic. Apologies if I offended anyone... but really, Dawkins is much harsher than I am...

Panpsychism

(FYI, quotes taken from http://www.iep.utm.edu/p/panpsych.htm)

I used to think about an idea I would call "global consciousness" and then found that there's a term called "panpsychism" which is similar in a number of ways.

First: What is panpsychism?
"panpsychism may be defined as the view that all things possess mind, or some mind-like quality"

It stands in opposition to the belief that only a certain restricted class of beings can possess mind. Also, "Either mind was present in things from the very beginning or it appeared (emerged) at some point in the history of evolution." Believing the former makes you a panpsychist; believing the latter makes you an emergentist. Defending emergentism is generally more problematic because one would have to argue for a line at which mind could be rigorously defined.

If you believe in Darwinian evolution, you would probably agree with me that arguing for such a line is not very useful and that it is clearly a better idea to just think of things in terms of always having possessed some simpler mind-like quality.

Second: What is mind?
"it is clearly debatable what one means by “mind.” Panpsychists have employed a variety of descriptive terms to articulate the mental quality that all things share: sentience, experience, feeling, inner life, subjectivity, qualia, will, perception"

To address the question of mind, I will appeal to structuralism. Having a mind is equivalent to having a specific set of atoms organized in a specific structure. This explains the continuum of varying complexity of minds that human experience seems to encounter (including cats, dogs, cnidarians, toddlers, infants, etc.). Different structures mean different experiences.

Third: Why is it important?
"Panpsychism, with its long list of advocates and sympathizers, is a robust and respectable approach to mind. It offers a naturalistic escape from Cartesian dualism and Christian theology"

"Panpsychism thus offers a kind of resolution to the problem of emergence, and is supported by several other arguments as well. The viability of panpsychism is no longer really in question. At issue is the specific form it might take, and what its implications are. Panpsychism suggests a radically different worldview, one that is fundamentally at odds with the dominant mechanistic conception of the universe. Arguably, it is precisely this mechanistic view—which sees the universe and everything in it as a kind of giant machine—that lies at the root of many of our philosophical, sociological, and environmental problems. Panpsychism, by challenging this worldview at its root, potentially offers new solutions to some very old problems."

Thursday, December 27, 2007

Hume's critique of causation

This is what makes a theory a theory. Just because we observe event B happening EVERY time after event A happens, doesn't mean that event B has to happen.

Example: if billiard ball A hits billiard ball B, billiard ball B will move.
The only reason we expect billiard ball B to move is precedent. We don't KNOW for sure that it will continue to do so in the future.

This is where I draw the line and say statistics & the scientific method can be used, just for the sake of having material to build arguments with (practical ones, for that matter).
Construct a hypothesis: if A, then B.
Test the hypothesis: let A happen in 100 independently run experiments.
If B follows in at least 95% of them, take that as good evidence for the hypothesis. (Strictly speaking, "statistical significance" refers to ruling out chance at a threshold like p < 0.05, not to a 95% success rate, but the spirit is similar.) If it happens 100% of the time, then you're even better off.
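The testing procedure above can be sketched in code. This is a toy simulation of my own devising: the billiard "experiment" is just a stand-in random event, and the names and the 97% figure are invented for illustration.

```python
import random

def run_trials(experiment, n=100):
    """Run the experiment n times independently and return the fraction
    of trials in which the predicted consequence B followed event A."""
    successes = sum(1 for _ in range(n) if experiment())
    return successes / n

# Stand-in "experiment": suppose ball B moves 97% of the time after ball A hits it.
random.seed(0)  # fixed seed so the run is repeatable
def billiard_experiment():
    return random.random() < 0.97  # did ball B move?

rate = run_trials(billiard_experiment)
print(f"B followed A in {rate:.0%} of trials")
```

Of course, even a 100% success rate over 100 trials would not answer Hume: it only extends the precedent, it never guarantees the next trial.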

Note that this is NOT a refutation of Hume; it just posits a different method for drawing conclusions. Of course, some would argue that this departs from philosophy and enters the realm of science. After all, how do you construct 100 independently run experiments concerning the nature of reality?

I think there might be a way. Someone just has to be creative.

This sciencing up of philosophy is called "logical positivism," and is by no means original.

Wednesday, December 26, 2007

Genealogy of concepts

Genealogy may be defined as "a patient tracing of the descent of authoritative discursive practices that structure the application of power to bodies and subjects."

What does this mean? It means that a genealogy functions as a "denuding, unmasking, stripping away pretensions of universality and merely self-serving claims to spirituality"

Example: Nietzsche's On the Genealogy of Morality demonstrates that morality is nothing more than the "development of a special set of particularly pragmatic 'prejudices' of an unusually downtrodden lot."

Genealogies allow us to realize that morals shouldn't be ultimately JUSTIFIED, but merely ACCOUNTED FOR. They are just social phenomena. Such a view is demystifying, in a sense, because if you ultimately try to justify something, it doesn't get you anywhere. There is no solid absolute principle that underlies everything... at least not that we've discovered yet. (I'm assuming we're all secular, by the way.)

I would love a more scientific approach of ACCOUNTING FOR morality; more scientific than Nietzsche's historical and social psychology, anyways. But morality is too big to control. You can't just start a set of 50 societies over from year 2500 BC and see what would happen. History and social psychology are as close as we can come.

It seems to me the main thing a genealogy needs is an argument to debunk-- specifically, an argument in favor of a universal concept. For example, I could focus on the idea of existentialism and how it's changed from say Kierkegaard to Beckett. Or Nietzsche to Beckett.

Once the concept is found, a difference is pointed out (with historical data). Then the difference is described; Nietzsche did this with social psychology and etymology.

Accounting for the morality of today is the ambitious task that one would pursue if one should wish to extend the Genealogy of Morality. Nietzsche suggested only one break of note in his genealogy, so I would be required to identify a second break; one that may have happened after him. The break of existentialism, perhaps? It is, after all, something that appeared as a reaction to Nietzsche. Such a break could also be analyzed in terms of Nietzsche's conclusions at the end of Genealogy of Morality.

That's a decent thesis, then. Reading Beckett's Trilogy and critics' interpretations of it as evidence for existentialism surfacing in response to a break that occurred at some point in history AFTER the good/evil and good/bad break.

Tuesday, December 25, 2007

Deterritorialization, Reterritorialization

Territory is a geographic term implying distinct boundaries, and it also suggests an element of control. Deterritorialization was originally brought up in Deleuze and Guattari's Kafka: Toward a Minor Literature as a way to describe how Kafka's works influenced the German language as a whole (since he was not part of the mainstream, he worked against it), which brings into play the "de".

To be more specific, deterritorialization is the erasing of boundaries that rigidified German language. Minor literature deterritorialized the German language.

"Re" territorialization seems to infer an opposing process that would rigidify an identity. Mainstream German language might "reterritorialize" itself through the censorship of minor literature and emphasis of another avenue of literature that is more mainstream. However, it gets confusing because these two processes go hand in hand. What was once "deterritorializing" leads inevitably to the rigidifying of a new identity. Deterritorialization is followed by re-territorialization, followed by more deterritorialization.

Why is this important?
The point is that there are moments of rigidity, and moments of becoming. Moments of becoming are naturally more exciting, and are what Deleuze and Guattari want to focus on. Delimit thought and such. Does that make sense?

Schizoanalysis

Coined originally by Guattari and Deleuze, who spliced the front off of "schizophrenia" and stuck it in front of "analysis." Defined as "the analysis of the incidence of Dispositions [agencements] of enunciation upon semiotic and subjective productions, in a given problematic context."

I get the feeling that Guattari and Deleuze are not that precise, because a lot of the terms and language they employ are all just variations of the same point. They are trying to transform convention... to make a thing that is very possibly ugly and monstrous, and yet can perform a function. I think it's only ugly and monstrous because the thought and logic used in its construction are so alien to convention. After all, they are combined at the bidding of a random thought.

We have to train ourselves to not be afraid of such things... because we really don't KNOW that they're bad until we've tried them.

"The point is that a rhizome or multiplicity never allows itself to be overcoded, never has available a supplementary dimension over and above its number of lines" -- a rhizome is the opposite of a reduction, it is an expansion without limit and does not restrict its lens to any specific window. The rhizome is a tangled mass of connections that extends infinitely in all directions.

No limits!

Pragmatics

So I came across this term often when I read about theory, and it has always given me a headache. I recall looking it up again and again, and yet I can never quite get it to stick in my head. If you want to try, check this link for wikipedia:
http://en.wikipedia.org/wiki/Pragmatics
or for a more succinct summary of its denotation:
http://www.thefreedictionary.com/pragmatics

Huh! And here's a quote from wiki that justifies my confusion with the term:
"Pragmatics is regarded as one of the most challenging aspects for language learners to grasp, and can only truly be learned with experience."

This reminds me of words such as "deconstruction" and "Derrida"... not to mention "Deleuze" and his "Rhizomes" (I was just reading A Thousand Plateaus when this term came up).

I am a bit annoyed by all the confusion these words create. These words are enigmas, riddles demanding to be solved. How do you succinctly summarize them? It can't be done. They are nuanced, distant, foreign. The world's most gifted intellectuals are driven to study these things until their brains hurt, and it requires a suspension of common sense (or rather, conventional sense). When they can talk about it to each other and agree, then it seems that indeed some kind of understanding was achieved. But of what practical use was it?

The problem is that not enough people obtained this understanding. It is at odds with most conventional forms of understanding. In essence, these intellectuals have worked very hard to break down all the assumptions that made their lives functional with others, and created a new system that is only compatible with a very small number of people. Less can be achieved as a result. This is the same thing that made Marxist philosophy so powerful: it could be dumbed down to reach a HUGE number of people. It changed the face of the world. Very PRACTICAL idea, communism, even if it didn't actually work. We at least know that it doesn't work because it had an EFFECT.

Anyways, this is just an expression of my annoyance. Let me examine the term pragmatics itself, so that I don't forget it again.

It's an "ology" really. Just an other "study of" something. More specifically, it studies connotations-- but it is more encompassing than the connotation of one word. It examines the meaning that language can convey under varying circumstances.

Kind of silly, in a way, when you consider that connotations can change significantly from person to person, from one time period to the next, depending on whether you're holding a gun or not, etc. Of course it's important when you go into a room filled with people belonging to a culture completely foreign to your own. (If the Fremen of Dune spit at you, it's good.)

So it might seem that I've done a good job of cementing in the basic meaning of the word. Of course, it has roots deep in philosophy, linguistics, and sociology, so I've really just scratched the surface. Just like a short description of history as "records of the past" doesn't quite do justice to what's being done in that vast field.

This world is getting bigger and bigger, after all.

Epistemology

This is my first time doing a blog, and since there's a lot of negative connotation surrounding these things, I should add that I am a bit hesitant, reluctant; call it what you like.
But connotations are silly things, that, while useful at times, should not limit the boundaries of practice. So here I am, practicing.

This first entry will explore the choice behind the title of this blog... epistemological queries. Why do I think an epistemological query is more useful than an ontological query?

I guess I have to backtrack and think about Descartes for this one. Descartes is that famous French philosopher who "cleared all the rubble" of previous philosophies and examined what he could be sure of based on his own experiences, more than anything else. Pretty much, he sorted out statements that could be said about the universe into distinct piles of NECESSARILY TRUE and only the seemingly true.

He said "Cogito ergo sum" which means I think therefore I am. This is the only completely foolproof statement that fits neatly into the necessarily true pile. Everything else we think is true about the universe could actually just be an illusion. Now he made some conclusions after that (many of which I disagree with), but this is the only statement which I wish to examine at the moment.

This statement is the only one that fits in the first pile. All we can be sure that exists is thought. Sensory perceptions, too, but I think those are types of thoughts. After all, your brain can make them up while you dream. Thus ontology, or the study of the world as it is, stops here. The rest of the world is in the other pile, of which we can only suppose these things to be true.

So epistemology is the study of how we know what we know-- and here, I am talking about relative knowingness, because we can't know the things that sit in the second pile to the same extent that we do of things in the first pile. Statements in the second pile do not have the absolute certainty that characterizes "Cogito Ergo Sum." These statements vary from person to person.

This is an interesting thing to think about... and this brings me back to the title of this entry, and the blog. Epistemology is more useful than ontology. With regards to what actually exists, all I can say for certain is that thought does; to be specific, my thought. Therefore my philosophical meanderings shall focus on the epistemological "truths," which are just processes by which my thoughts form.