Saturday, February 28, 2009
1. In any consistent axiomatic system (formal system of mathematics) strong enough to allow one to do basic arithmetic, one can construct a statement about the natural numbers that can be neither proved nor disproved within that system.
2. No sufficiently strong consistent system can prove its own consistency.
Gödel's theorems are theorems in first-order logic, and must ultimately be understood in that context.
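The two theorems can be stated compactly. Here is a minimal sketch in standard notation (the symbols T, G_T, and Con(T) are the conventional ones, not taken from this post):

```latex
% T: any consistent, recursively axiomatizable theory containing basic arithmetic.
% (1) First incompleteness theorem: there is a sentence G_T about the
%     natural numbers such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T .
\]
% (2) Second incompleteness theorem: the arithmetized consistency
%     statement is itself one such unprovable sentence:
\[
  T \nvdash \mathrm{Con}(T).
\]
```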
The existence of an incomplete system is in itself not particularly surprising. For example, if you take Euclidean geometry and you drop the parallel postulate, you get an incomplete system. An incomplete system can mean simply that you haven't discovered all the necessary axioms.
What I mean is that if you start from incomplete information, you are surely going to arrive at incomplete results. Gödel grounds his argument in first-order logic, which is certainly not enough to prove that even a simple mathematical system is complete.
While there is still debate among attention scientists, most now conclude that there are three types of attention. The first is orienting — the flashlight of your mind. In the case of visual attention, it involves parts of the brain including the parietal lobe, a brain area related to sensory processing. To orient to new stimuli, two parts of the parietal lobe work with brain sections related to the frontal eye fields. This is what develops in an infant's brain, allowing the baby to focus on something new in its environment.
The second type of attention spans the spectrum of response states, from sleepiness to complete alertness. The third type is executive attention: planning, judgment, resolving conflicting information. The heart of this is the anterior cingulate — an ancient, tiny part of the brain that is now at the heart of our higher-order skills. It's executive attention that lets us move beyond our impulsive selves, to plan for the future and understand abstraction. We are programmed to be interrupted. We get an adrenaline jolt when orienting to new stimuli: Our body actually rewards us for paying attention to the new. So in this very fast-paced world, it's easy and tempting to always react to the new thing. But when we live in a reactive way, we minimize our capacity to pursue goals.
Our society right now is filled with lovely distractions — we have so much portable escapism and mediated fantasy — but that's just one issue. The other is interruption — multitasking, the fragmentation of thought and time. We're living in highly interrupted ways.
Studies show that information workers now switch tasks an average of every three minutes throughout the day. Of course that's what we have to do to live in this complicated world. When your thread of thought is lost frequently, it's hard to go deeply into problem-solving, into relating, into thinking. These are the problems of attention that have worsened in our new world. Gadgets and technologies have given us extraordinary opportunities, like the potential to connect and to learn. But they can also be detrimental and contribute to undermining our power of attention.
We could, if we chose, use our technologies to liberate us from distractions and task-switching, and use them to maximize our ability to reduce stress, concentrate, pay attention, and use our creative skills. Put the phone on Do Not Disturb, let voicemail take messages, and reply to your calls and email in batches a couple of times a day rather than as soon as they come in.
Thursday, February 26, 2009
But after so many years of brain research showing that most of our everyday cognitions result from a complex but observable interaction of proteins and neurons and other mostly uncontrolled cellular activity, how can so many otherwise rational people think dreams should be taken seriously? After all, brain activity isn't mystical but rather highly predictable.
Human beings are irrational about dreams the same way they are irrational about a lot of things. We make dumb choices all the time on the basis of silly information like racial bias or a misunderstanding of statistics — or dreams.
Dumb choices aren't necessarily bad ones. A final finding from the study: When people have dreams about good things happening to their good friends, they are more likely to say those dreams are meaningful than when they have dreams about bad things happening to their friends. Similarly, we invest more meaning in dreams in which our enemies are punished and less meaning in dreams in which our enemies emerge victorious. In short, our interpretation of dreams may say a lot less about some quixotic search for hidden truth than it does about another enduring human quality: optimistic thinking.
But just because an industry is socially worthy, it doesn't follow that it is commercially viable. Today, besides newspapers, three other media are thrashing over their futures in a networked world, and as with newspapers, the rhetoric is mostly of the nonproductive "But I like it!" and "It's good for society!" variety, with not enough thought given to whether these media are commercially viable in the Internet age.
The imminent collapse of the American newspaper industry has spawned entire gazetteers' worth of high-minded handwringing about the social value of newspapers and the social harm that their disappearance will unleash. It's probably all true. Newspapers are fundamentally an advertising-supported medium. Advertisers place ads in newspapers because they believe these ads will sell more products for them. The price of an ad is set by factors such as: How many people will see the ad? Who is likely to see the ad? Are they the sort of people who are likely to want to buy what the ad is selling? What happened to newspapers is easy to understand: There are more and better ways for an advertiser to deliver ads of similar quality to the "spendiest" newspaper readers, most of them on the Internet.
Big-budget movies require a lot of capital and rely on studios controlling the rate and nature of distribution of the finished product. If you're going to recoup your $300 million box-office turd, you need to move lots of DVDs, TV licenses, and assorted "secondary" revenues.
Let's be realistic here: Nothing anyone does is going to make it harder to get movies when you want them, where you want them, and at whatever price you feel you should pay for them. And the harder you crack down on Internet movie downloading, the more attractive you make buying pirated DVDs -- a virtually zero-risk transaction that directly displaces DVD purchases.
Now, maybe film studios can do what Magnolia Pictures is doing -- distributing day-and-date releases to satellite, pay-per-view, cinema, DVD, and foreign markets -- and recapture a lot of the money. But if that's not enough, commercially motivated big-budget movies might simply die. Besides, big-budget movies demand a technological reality that has ceased to exist -- just enough technology to distribute the films, but not so much technology that the audience gets to overrule your distribution decisions.
Making stuff for the Net just doesn't cost as much as the audiovisual material we're used to seeing. It may not be as pretty, but it's very cheap.
The problem was that the record industry was built on per-unit income from CDs (and records and tapes and so on). The economics of this stink. Besides, labels produce a great number of failures for each success they make. Whatever profitability there is in the system is seriously jeopardized by the music-listening public's ability to get any song they want, at any time, for any price (including free). And, just as with movies, it's never going to get harder to copy music without permission. Now the good news: The more your music gets copied, the more people there are who will pay to see you perform it live. This may not support a record label with offices on five continents, but it can probably put a comparable (if not larger) amount of money into the pockets of a comparable (if not larger) number of artists. So as a category, the future's looking good for recorded music and the musicians who make it.
This one's more of a mixed bag. On the one hand, Internet copying of printed matter is impossible to prevent. On the other hand, for many kinds of books -- long-form narratives, for instance -- reading off a screen is a poor substitute for the printed page. The bad news for books is twofold: First, the quantity and variety of titles carried outside of bookstores has radically declined, thanks to the rise of national big-box chain stores, which do all their ordering from a centralized database. The other problem is that we're increasingly conditioned to read short blocks of text -- more text than ever, but in radically different form than you generally find between covers. Combine this with the sheer amount of read-for-pleasure text available at one click's distance on the Net, and even those of us who worship books find ourselves reading fewer of them. Now for the good news: It doesn't cost much to write a novel, nor much to produce one.
Wednesday, February 25, 2009
Why are celebrities not heroes, and often not respected – but still famous?
Eckhart Tolle talks about fame in his book A New Earth. He puts it into the context of individual and collective ego-enhancing behavior. Both the fan and the celebrity are enhancing their sense of self. They are both being selfish for their own reasons. The fan uses the famous person to enhance their sense of self by association and/or by gossiping. Egos are attracted to bigger egos. Likewise, the famous person wants people to acknowledge and boost their own ego, verifying to them what they tell themselves is their identity. It is a process of mind identification which brings this about. The ego, the thinking mind, is comparative. The reason the celebrity is famous is that we compare him or her to other people who might be more or less famous, or not at all famous.
The hidden function of the existence of celebrities would be to secure ordinary people's consent to the unequal distribution of rewards, in an unfair absence of genuine equality of opportunity. Basically, you accept an unfair, arbitrary system if you think it's nonetheless open, almost random. Becoming famous is a question of luck – and you just wish you'll be one of the lucky few in the limo. In this scenario there should be a constant renewal of living celebrities – by contrast with a system of more perennial stars, or (dead, mythical) legendary figures: it shows that there is actually room for newcomers, and sustains everybody's dream of getting his moment of fame.
But then, if being famous is about being noticed and admired, you must know that you won't be admired for any quality of yours – but just for being lucky, which anyone could have been.
Tuesday, February 24, 2009
1. Scientific integrity has become a live issue in public culture.
2. Academia and industry as scientific work environments have converged in all sorts of ways.
Before the 20th century, doing science was typically more of an avocation than a job. In the 17th century, the great chemist Robert Boyle not only financed his science out of his own pocket but also shared the common view that doing science as a "trade" was demeaning: anyone who accepted money to pursue knowledge would compromise their integrity. Newton, as professor of mathematics at Cambridge, was not paid to do physical or mathematical research but to teach. The 19th century's most famous scientist, Charles Darwin, was never paid to do science. Einstein's three great papers of 1905 were not part of his job specifications. True, over the course of history, many scientific researchers were in academic employment, but with few exceptions, before the 20th century, the job of a science professor was not to produce new knowledge but to transmit and safeguard existing knowledge.
The transformation of science from a calling to a job happened during the course of the past century. Indeed, science is arguably the world's youngest profession: The routinization of the paid role is less than a hundred years old; the word "scientist," coined in 1840, was not in standard usage until the early 20th century. Actually, almost no one now agrees with Boyle that "taking money to do science will compromise its integrity."
The "engineers" and the enterprising scientists whose discoveries can be turned to cures, power, and, of course, profit have become the most prestigious sort of practitioners, the contemporary culture heroes.
The dissolution of boundaries between academia and industry has given enormous strength to modern science: resources to do what scientists want to do, time to do it, and the reputation that comes from aligning science with concrete goods — better communications, better health, and more energy-efficient products. And if the scientists inhabiting such institutions can now make a good living, that too augments the value that our sort of society grants to science.
As we enter the 21st century, new institutional configurations for doing science emerge, together with new scientific agendas and new conceptions of what it is to be a scientist. Some participants and observers of the scene celebrate these changes; others are seriously worried about them. We can be sure of only one thing: The identity of the modern scientist is, in every possible sense, a work in progress.
Monday, February 23, 2009
There are two major divisions of mathematics: pure and applied. Pure mathematics investigates the subject solely for its theoretical interest. Applied mathematics develops tools and techniques for solving specific problems of business and engineering or for highly theoretical applications in the sciences.
Mathematics is pervasive throughout modern life. Baking a cake or building a house involves the use of numbers, geometry, measures, and space. The design of precision instruments, the development of new technologies, and advanced computers all use more technical mathematics.
Mathematics is the science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects. It deals with logical reasoning and quantitative calculation, and its development has involved an increasing degree of idealization and abstraction of its subject matter.
Mathematics first arose from the practical need to measure time and to count. Thus, the history of mathematics begins with the origins of numbers and recognition of the dimensions and properties of space and time. The earliest continuous records of mathematical activity that have survived in written form are from the 2nd millennium BC. The Egyptian pyramids reveal evidence of a fundamental knowledge of surveying and geometry as early as 2900 BC.
The Greeks were the first to develop a truly mathematical spirit. They were interested not only in the applications of mathematics but in its philosophical significance, which was especially appreciated by Plato. They developed the idea of using mathematical formulas to prove the validity of a proposition. Aristotle engaged in the theoretical study of logic, the analysis of correct reasoning. No previous mathematics had dealt with abstract entities or the idea of a mathematical proof.
Indian mathematicians were especially skilled in arithmetic, methods of calculation, algebra, and trigonometry. Aryabhata calculated pi to the very accurate value of 3.1416. Because Indian mathematicians were not concerned with such theoretical problems as irrational numbers, they were able to make great strides in algebra. Their decimal place-value number system, including zero, was especially suited to easy calculation. Indian mathematicians, however, had little interest in proof: most of their results were presented simply as useful techniques. One of the greatest scientific minds of Islam, al-Khwarizmi, wrote a treatise on al-jabr, from which the word "algebra" derives. And because the Indian number system reached Europe through Arab scholars, the numbers familiar to most people are still referred to as Arabic numerals.
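Aryabhata's value can be checked with a few lines. His rule, as traditionally reported from the Aryabhatiya ("add four to 100, multiply by eight, and add 62,000; that is approximately the circumference of a circle of diameter 20,000"), is an assumption here, but the arithmetic itself is easy to verify:

```python
import math

# Aryabhata's rule (as traditionally read): circumference of a
# circle of diameter 20,000 is approximately (100 + 4) * 8 + 62,000.
circumference = (100 + 4) * 8 + 62_000   # 62,832
diameter = 20_000

aryabhata_pi = circumference / diameter  # 62,832 / 20,000 = 3.1416
print(aryabhata_pi)                      # 3.1416
print(abs(aryabhata_pi - math.pi))       # error is under 0.0001
```

Note that 3.1416 is accurate to four decimal places, a remarkable result for the 5th century CE.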
Saturday, February 21, 2009
What I do not agree with here is that social networks necessarily diminish face-to-face relations; they could be another communication medium, like the cellphone, which did not replace physical interaction but, on the contrary, generally facilitates it. I suspect that the factor that makes people more reluctant to meet a friend in person is not as simple as this doctor suggests; a much more complex phenomenon must be taking place in today's society.
Probably most people who mainly use the internet to relate to others had previous social problems. This communication medium allows them to relate to others, in a limited way, and also helps them feel less lonely. It can also help them overcome their social problems, by giving them a place to practice and learn interpersonal skills.
A decade ago, a detailed classic study of 73 families who used the internet for communication, The Internet Paradox, concluded that greater use of the internet was associated with declines in communication between family members in the house, declines in the size of their social circle, and increases in their levels of depression and loneliness. The authors went on to report "both social disengagement and worsening of mood... and limited face-to-face social interaction... poor quality of life and diminished physical and psychological health" (Kraut et al., 1998). This study was indeed a classic. It was so important that the same research team followed up the same participants several years later and published their results in a study called Internet Paradox Revisited. What they found was that the negative effects reported in the first study had disappeared, and that internet use was now associated with a better social life.
Wednesday, February 18, 2009
Alexithymia's symptomatology includes dissociative experiences, low self-esteem, obsessive-compulsive disorder, and also a tendency to develop Internet addiction. The difficulty in identifying emotions is associated in a significant way with an elevated risk of developing Internet addiction. On the other side, an emotionally impoverished inner world goes along with Internet addiction (or addiction to any other gadget or medium). Addiction to technologies and emotional impoverishment are, in my opinion, codependent. If it is true that alexithymia promotes addiction, it is also true that excessive use of technologies leads us, in its turn, to a "second-hand" emotional life and a disconnection from the place where emotions get activated, recognized, and mature.
An incapacity to identify emotions means a major risk of addiction. This makes me observe that lack of awareness of our own emotions (and our inner life in general) leads us to act mechanically and become servomechanisms of technology. If we do not understand what we feel and do not listen to ourselves, we will consequently never get to know ourselves, and our lives will depend on external stimuli, which will trap us repeatedly. Our identities, therefore, will depend on external inputs, since we will have no other identity apart from the one mirrored in the Net.
An alexithymic’s limited introspective life is a condition which is replicated in everybody who lives in a continuous flow of information, paying attention only to the inputs which come from the outside. The internal life and capacities for introspection become more and more impoverished, and it becomes more difficult to transfer attention from the external to the internal. The awareness of our feelings is an embodied process as much as a mental one, while using the Internet limits us to a mental sphere which takes us away from the connection with the body, taking the awareness of feelings even farther away.
Awareness of our own emotions prevents the dissociative experiences typical of alexithymics because it keeps our feet on the ground and anchors us to the body. The best way, since time immemorial, to gain insight into our emotions – and not just that – is meditation, where the flow of information is only witnessed, made conscious, and not acted upon. Fundamentally, not clicked.
Saturday, February 14, 2009
Our perception of how mentally sharp we are has more to do with how we're feeling emotionally than how our cognitive functions are actually working.
In other words when someone says, 'I think my memory has become much worse recently', research suggests that this tells us almost nothing about how their memory is working, but reliably indicates that their mood has been low.
It's quite amazing to think that we have such poor insight into the functioning of our own minds that we 'mistake' low mood for a bad memory, poor concentration or impaired problem solving.
It seems that our ability to have insight into our own mental functioning is not very trustworthy.
Thursday, February 12, 2009
A few years ago, when I was first learning about memory, the example probably would have gone more like “your short term memory holds small amounts of information, like a phone number, while you rehearse it in your head until you have it memorized”.
The main difference between the examples is that the iPhone has replaced our own biological memory storage as the final resting place of long term memories. I think this points toward a more general trend, in which technology is taking over many of the functions that our brains carried out before. Why memorize a phone number when you can, at any time, just retrieve it on a screen with a few swipes of your finger? Why commit the times table to your memory when a calculator is always close at hand?
Storing memories outside of our brains is nothing new. Scrawling something on paper is much the same. However, the ease with which we can store and retrieve these external memory banks is improving at an exponential rate. Today, a lot of the human race’s collective store of knowledge can be searched in fractions of a second with a few keystrokes in a search engine. Maybe tomorrow, our fingers won’t even be an intermediary step; a direct link between our minds and databases need not be science fiction. Google may become the major influence in how human beings think, behave and learn.
As we continue to improve our access to information outside of our heads, I think there will be less emphasis on teaching people raw information, and more emphasis on teaching what to do with information. Scientific research into topics like human creativity (which computers don't seem to have mastered yet) and cognitive psychology will become increasingly important, along with other disciplines that deal with how to manipulate information into something useful.
Wednesday, February 11, 2009
There's no doubt that technological innovations have delivered many of the comforts of modern life: penicillin, air travel, the Internet, movies on the big screen, and much, much more. After all, who would want to give any of these up?
Has innovation become the hype of the moment? Is it always good for us? Are we falling into the trap of pursuing technological change without considering the consequences?
People are attracted to new technology, and after a lapse of time they get an almost irresistible urge towards its adoption. Cities are then structured around it. So it's a combination of individual choice and commercial pressures where the black arts of advertising and so on say to people this is the only way to go. Sadly the fun image that advertising has traditionally enjoyed is now giving way to a much darker picture of advertising as mental pollution.
Innovation is doing new things, or old things in a different way, but what is important is understanding the interdependence among all the different innovations, so that the whole turns out beneficial rather than a dangerous bunch of ideas randomly interacting with one another. The secret to leading technology where we want it to go is understanding the whole interconnected picture.
Monday, February 9, 2009
The term 'health' is a non-exact term used loosely in everyday speech. Equally, 'mental health', 'mental illness', and 'mental disorder' are used with a comparable lack of precision, and the latter two most often interchangeably. In addition, psychiatric health/illness/disorder are used synonymously with mental health/illness/disorder. A further problem with this concept is that there is no clear cut-off point between mental disorder and mental health; indeed, one person's mental health aid might be another's mental disorder inducer.
The obsession Americans have with classifying and naming every single human behavior, body dysfunction, and other classifiable entities does not help the process of healing. But the most harmful misconception is the belief that classifications of illness do not overlap or interact with each other. This obsolete concept is the main cause of treatment failures.
Sunday, February 8, 2009
I continue to struggle with the notion of philosophy's usefulness. What is philosophy supposed to do? Why does it matter? Even college philosophy majors are often at a loss when asked these questions. To many, philosophy appears to be "precise knowledge about useless stuff". I am suspicious of philosophy, unsure of its power and my own, even when I naturally feel drawn to the "big" questions.
On the other hand, philosophy is taking the time, and finding the courage and patience, to examine and organize our most basic assumptions and ways of thinking. Organized thought, in turn, makes for coherent and effective communication and action. This philosophy is something we all can do and naturally want to do. Young adults are especially hungry for it.
Philosophical questions are not asked for the sake of questioning. These questions are asked because the condition of our lives is intimately connected to them. Plato's Republic demonstrates how asking questions can lead to learning how to ask better questions, how to evaluate answers, how to improvise answers, how to determine how far questions can go, and how to have fun doing it. Philosophy can lead an attentive mind to explore how to think, speak, and live effectively.
Thursday, February 5, 2009
We always want more. We want to find what we are looking for, again and again. When the search fails, difficulties in continuing arise; our tolerance for frustration is minimal. We are overwhelmed, and this keeps us from continuing our task, from moving forward. From achieving more.
But sometimes, only sometimes, when the encounter happens, we discover that kind of melody that makes us feel better in our sadness. Or perhaps, for a moment, we truly manage to recognize ourselves, or feel at one with the search. But always ahead of ourselves, we seem a threat: because of the permanent sensation of movement, and the lack sometimes of grace, sometimes of faith. We pursue with questions to obtain answers, and that is the best excuse never to stop searching. Experiences lead us to new questions. Always. This means that the search is complex, and the encounter or discovery all the more valuable or definitive. And we understand that finding and losing are the two parts that make up the whole.
Tuesday, February 3, 2009
Is it good or bad?
People who feel better after crying tend to share certain commonalities that may make their experiences therapeutic: For example, they are more apt to have been comforted by someone after crying; they're not crying alone. They are also more likely to see whatever made them cry as helping them improve their quality of life by understanding their problems. And they are less likely to have been embarrassed or shamed by the experience.
According to scientific research, people who are confused about the sources of their own emotions report little benefit from a burst of tears. The purpose of crying may be to block thinking, to effectively seal off the flood of unanswerable questions that come after any major loss, the better to clarify those that are most important or most practical. If this psychological system is already clunky, a fire shower of tears is not likely to improve it.
If relief, and/or a magical solution to the cause of the grief, is the only thing expected from crying, it is clear that nothing will improve. People who suffer a great loss suddenly and unexpectedly tend to have these kinds of ideas. In situations where they cannot undo the cause of their grief, they may realize that they can achieve a gradual acceptance of the loss while grieving (which may involve crying), and so find the path toward healing.
Crying is all right in its way while it lasts. But you have to stop sooner or later, and then you still have to decide what to do.