Wednesday, 23 January 2008

Slovenia's 'erased' do not preside over the EU

22.01.2008
DANIEL REBOREDO

El Correo


The old Kingdom of Serbs, Croats and Slovenes, born of the collapse of the Austro-Hungarian monarchy (1918), later the Kingdom of Yugoslavia (1929) and later still the Socialist Federal Republic of Yugoslavia, of all of which the Slovenes formed part, achieved independence on 25 June 1991 after an armed confrontation with the Yugoslav federal armed forces in a brief conflict known as the 'Ten-Day War' or 'Slovenian War'. That clash was the prelude to the bloody disintegration of Yugoslavia, which began with the traumatic wars in Croatia and Bosnia-Herzegovina and continued with the peaceful independence of Macedonia (1992) and Montenegro (2006). From that 25 June onwards there existed the parliamentary Republic of Slovenia (presided over since November 2007 by Danilo Türk). The most prosperous of the six Yugoslav republics saw that prosperity cut short by the prolonged war in Bosnia-Herzegovina, which badly damaged its economy, industrial output, trade and tourism. Slovenian society underwent a profound structural change before the country's accession to the EU fostered its democratisation, the transition to a market economy, the creation of new institutions and progressive Europeanisation. The most developed ex-Yugoslav republic benefited from a favourable starting position that allowed the small Central European country to carry out the Europeanisation process (economic and social reforms, prudent macroeconomic policies) without major complications.

The loss of the markets of the former Yugoslavia, and of the defunct USSR, in the early 1990s brought mass lay-offs, above all of unskilled labour; company bankruptcies; a shrinking active population and rising pension spending; the appearance of structural unemployment and, little by little, growing long-term unemployment; pockets of poverty, and so on. The transformation of the business sector began after independence, with the transfer of socially owned property, the nationalisation of the large companies (infrastructure, energy, steel, telecommunications...) and of the country's three main banks. The structure of the Slovenian economy also underwent major changes that favoured the service sector.

Today, Slovenia has poverty rates lower than those of the EU; its social benefits, close to the European average, exceed those of Spain, Ireland, Italy, Greece and Portugal; its GDP is higher than Greece's and almost equals Portugal's; unemployment stands at 9% and its rate of book-reading is among the highest in Europe. For all these reasons, the small Balkan republic has become a model for the countries that joined the European project on 1 May 2004. Lights are always accompanied by shadows, however, and here they centre on the need to reform a slow judicial system, boost low foreign investment, complete privatisation and reduce inflation. The fear of this small nation of two million inhabitants of losing its cultural identity, and of rising unemployment and possible company failures once its firms must compete with those of the Union's most powerful countries, is a reality present throughout its society.

Since 1 January 2008, Slovenia, one of the smallest states of the EU, has been at the helm of Community policy, having bound itself to the West by joining NATO (29 March 2004), the EU (1 May 2004), the euro zone (1 January 2007) and the Schengen area (21 December 2007). The government in Ljubljana has taken on the great responsibility of leading the EU in the first half of 2008 and of confronting various problems that require Community involvement. From Brdo castle, once the holiday residence of Josip Broz 'Tito' and reputedly the last place where Slobodan Milosevic, Franjo Tudjman and Milan Kucan tried in vain to reach a compromise to halt the bloody Balkan war, the Union's policy will be steered through dozens of conferences to be held in its recently opened congress centre.

That policy will face five tasks on which consensus will be hard to reach: managing the new institutional treaty that replaced the Constitution and requires only parliamentary ratification (except for the risky Irish referendum); attention to energy and environmental issues (the internal energy market, reliability of supply and the creation of an energy community in south-eastern Europe); the demographic and economic question (promoting investment in research, knowledge and innovation; developing a competitive business environment; reforming the labour market and responding to demographic challenges); forging ties between the countries of the western Balkans and the Union; and, finally, promoting intercultural dialogue among the nations of Europe. The main stumbling block for the last two points is the resolution of the Kosovo question, for which the Slovenian foreign minister, Dimitrij Rupel, advocates the lamentable Ahtisaari Plan.

The gleaming, picturesque Slovenia described so far, with its idyllic panorama of life in the new Europe, its efforts at improvement and its Community spirit, is the one Slovenes wish to project to the rest of us Europeans. What they do not want us to see is one of their recent ghosts, the 'administrative ethnic cleansing of the erased': that is, one of those situations so common in states born overnight in which, as in Slovenia, ethnic homogeneity is clear-cut, and which shows itself in the repudiation, marginalisation and expulsion of minority groups. Tito's death and the predatory conduct of some European countries, together with the US, made Yugoslavia disappear. Yet after his death the country carried on without major conflict, and federal law provided that every Yugoslav citizen also hold a second citizenship, that of the republic in which he or she was born, as well as a third document certifying residence in one of the six republics.

In June 1991 Slovenia proclaimed its independence from Belgrade, and shortly afterwards, on 26 February 1992, the new state decided to delete from the register of residents, through a secret procedure and without informing those concerned, everyone who had not applied for Slovenian citizenship within six months of independence. Among the thousands of 'erased', who would often discover their new status by chance, were above all Serbs, Croats, Bosnians, Macedonians and Roma, but also Slovenes born abroad, or born in Slovenia with Yugoslav nationality and unregularised papers. The main consequence of their removal from the register was that they became illegal residents in the country; expulsion orders were issued against many of them, forcing them to leave, even for war zones such as Croatia or Bosnia. Regarded as foreigners or stateless persons with no right of permanent residence in Slovenia, the 'erased' could not legally hold a job, lost their pensions, went without medical care and, in no few cases, lost the right to secondary education.

Pressure from the UN and Amnesty International led Slovenia's Constitutional Court to condemn the measure twice as illegal and unconstitutional, and in 2003 to impose on the government, by judgment, the obligation to restore the citizenship and rights of those affected and to compensate them. But successive governments in Ljubljana have taken no notice, and to this day not only has nothing been done: faced with press criticism, the previous government in 2006 dismissed and replaced 80% of the press's editors, changing the law in order to control them. Last October some six hundred journalists signed a declaration accusing the then prime minister, Janez Jansa, of imposing censorship and curtailing freedom of expression. The 'erased' have no part in today's Slovenia, and they have none because people with names and surnames decided it should be so. The unfinished business of the 'erased' and the lack of progress on human rights are also part of the Slovenia that presides over the Union in this first half of 2008.


http://www.elcorreodigital.com/vizcaya/prensa/20080122/opinion/borrados-eslovenia-presiden-daniel-20080122.html

Monday, 14 January 2008

Blind Faiths


By AYAAN HIRSI ALI
The New York Times. Published: January 6, 2008


Several authors have published books on radical Islam’s threat to the West since that shocking morning in September six years ago. With “The Suicide of Reason,” Lee Harris joins their ranks. But he distinguishes himself by going further than most of his counterparts: he considers the very worst possibility — the destruction of the West by radical Islam. There is a sense of urgency in his writing, a desire to shake awake the leaders of the West, to confront them with their failure to understand that they are engaged in a war with an adversary who fights by the law of the jungle.

Harris, the author of “Civilization and Its Enemies: The Next Stage of History,” devotes most of his book to identifying and distinguishing between two kinds of fanaticism. The first is Islamic fanaticism, a formidable enemy in the struggle for cultural survival. In Harris’s view, this fanaticism has acted as a “defense mechanism,” shielding Islam from the pressures of the changing world around it and allowing it to expand into territories and cultures where it had previously been unknown.

With few exceptions, Harris sees Islamic expansion as permanent. Although this point is arguable, he bravely attempts to make the case that the entry of Islam into another culture produces changes on every level, from political to personal: “Wherever Islam has spread, there has occurred a total and revolutionary transformation in the culture of those conquered or converted.”

In describing the imperialist nature of Islam, Harris suggests that it is distinct from the Roman, British and French empires. He views Islamic imperialism as a single-minded expansion of the religion itself; the empire that it envisions is governed by Allah. In this sense, the idea of jihad is less about the inner struggle for peace and justice and more about a grand mission of conversion. It should be said, however, that Harris’s argument is incomplete, since he does not address the spread of Christianity in the Roman, British and French empires.

The expansion of Islam is perhaps more potent than the expansion of the Christian empires (including Rome after Constantine) because the concept of separating the sacred from the profane has never been acceptable in Islam the way it has been in Christianity. The Romans, the British and the French went about annexing large parts of the world more for earthly or material gain than for spiritual dominance. Under these empires, the clergy was allowed to propagate its faith as long as it did not jeopardize imperial interests.

Harris goes on to argue that the Muslim world, since it is governed by the law of the jungle, makes group survival paramount. This explains in part the willingness of Muslims to become martyrs for the larger community, the umma — uniting peoples separated by geographical boundaries, with different cultures, heritages and languages. According to Harris, this sense of solidarity is sustainable only with the weapon of fanaticism, which obligates each member of the umma to convert infidels and to threaten those who attempt to leave with death. That is, the aim of Muslim culture, so different from that of the West, is both to preserve and to convert, and this is what enables it to spread across the globe.

The second fanaticism that Harris identifies is one he views as infecting Western societies; he calls it a “fanaticism of reason.” Reason, he says, contains within itself a potential fatality because it blinds Western leaders to the true nature of Islamic-influenced cultures. Westerners see these cultures merely as different versions of the world they know, with dominant values similar to those espoused in their own culture. But this, Harris argues, is a fatal mistake. It implies that the West fails to appreciate both its history and the true nature of its opposition.

Nor, he points out, is the failure linked to a particular political outlook. Liberals and conservatives alike share this misperception. Noam Chomsky and Paul Wolfowitz agreed, Harris writes, “that you couldn’t really blame the terrorists, since they were merely the victims of an evil system — for Chomsky, American imperialism, for Wolfowitz, the corrupt and despotic regimes of the Middle East.” That is to say, while left and right may disagree on the causes and the remedies, they both overlook the fanaticism inherent in Islam itself. Driven by their blind faith in reason, they interpret the problem in a way that is familiar to them, in order to find a solution that fits within their doctrine of reason. The same is true for such prominent intellectuals as Samuel Huntington and Francis Fukuyama.

Harris does not regard Islamic fanaticism as a deviancy or a madness that affects a few Muslims and terrifies many. Instead he argues that fanaticism is the basic principle in Islam. “The Muslims are, from an early age, indoctrinated into a shaming code that demands a fanatical rejection of anything that threatens to subvert the supremacy of Islam,” he writes. During the years that this shaming code is instilled into children, the collective is emphasized above the individual and his freedoms. A good Muslim must forsake all: his property, family, children, even life for the sake of Islam. Boys in particular are taught to be dominating and merciless, which has the effect of creating a society of holy warriors.

By contrast, the West has cultivated an ethos of individualism, reason and tolerance, and an elaborate system in which every actor, from the individual to the nation-state, seeks to resolve conflict through words. The entire system is built on the idea of self-interest. This ethos rejects fanaticism. The alpha male is pacified and groomed to study hard, find a good job and plan prudently for retirement: “While we in America are drugging our alpha boys with Ritalin,” Harris writes, “the Muslims are doing everything in their power to encourage their alpha boys to be tough, aggressive and ruthless.”


The West has variously tried to convert, to assimilate and to seduce Muslims into modernity, but, Harris says, none of these approaches have succeeded. Meanwhile, our worship of reason is making us easy prey for a ruthless, unscrupulous and extremely aggressive predator and may be contributing to a slow cultural “suicide.”

Harris’s book is so engaging that it is difficult to put down, and its haunting assessments make it difficult for a reader to sleep at night. He deserves praise for raising serious questions. But his arguments are not entirely sound.

I disagree, for instance, that the way to rescue Western civilization from a path of suicide is to challenge its tradition of reason. Indeed, for all his understanding of the rise of fanaticism in general and its Islamic manifestation in particular, Harris’s use of the term “reason” is faulty.

Enlightenment thinkers, preoccupied with both individual freedom and secular and limited government, argued that human reason is fallible. They understood that reason is more than just rational thought; it is also a process of trial and error, the ability to learn from past mistakes. The Enlightenment cannot be fully appreciated without a strong awareness of just how frail human reason is. That is why concepts like doubt and reflection are central to any form of decision-making based on reason.

Harris is pessimistic in a way that the Enlightenment thinkers were not. He takes a Darwinian view of the struggle between clashing cultures, criticizing the West for an ethos of selfishness, and he follows Hegel in asserting that where the interest of the individual collides with that of the state, it is the state that should prevail. This is why he attributes such strength to Islamic fanaticism. The collectivity of the umma elevates the communal interest above that of the individual believer. Each Muslim is a slave, first of God, then of the caliphate. Although Harris does not condone this extreme subversion of the self, still a note of admiration seems to creep into his descriptions of Islam’s fierce solidarity, its adherence to tradition and the willingness of individual Muslims to sacrifice themselves for the sake of the greater good.

In addition, Harris extols American exceptionalism together with Hegel as if there were no contradiction between the two. But what makes America unique, especially in contrast to Europe, is its resistance to the philosophy of Hegel with its concept of a unifying world spirit. It is the individual that matters most in the United States. And more generally, it is individuals who make cultures and who break them. Social and cultural evolution has always relied on individuals — to reform, persuade, cajole or force. Culture is formed by the collective agreement of individuals. At the same time, it is crucial that we not fall into the trap of assuming that the survival tactics of individuals living in tribal societies — like lying, hypocrisy, secrecy, violence, intimidation, and so forth — are in the interest of the modern individual or his culture.

I was not born in the West. I was raised with the code of Islam, and from birth I was indoctrinated into a tribal mind-set. Yet I have changed, I have adopted the values of the Enlightenment, and as a result I have to live with the rejection of my native clan as well as the Islamic tribe. Why have I done so? Because in a tribal society, life is cruel and terrible. And I am not alone. Muslims have been migrating to the West in droves for decades now. They are in search of a better life. Yet their tribal and cultural constraints have traveled with them. And the multiculturalism and moral relativism that reign in the West have accommodated this.

Harris is correct, I believe, that many Western leaders are terribly confused about the Islamic world. They are woefully uninformed and often unwilling to confront the tribal nature of Islam. The problem, however, is not too much reason but too little. Harris also fails to address the enemies of reason within the West: religion and the Romantic movement. It is out of rejection of religion that the Enlightenment emerged; Romanticism was a revolt against reason.


Both the Romantic movement and organized religion have contributed a great deal to the arts and to the spirituality of the Western mind, but they share a hostility to modernity. Moral and cultural relativism (and their popular manifestation, multiculturalism) are the hallmarks of the Romantics. To argue that reason is the mother of the current mess the West is in is to miss the major impact this movement has had, first in the West and perhaps even more profoundly outside the West, particularly in Muslim lands.

Thus, it is not reason that accommodates and encourages the persistent segregation and tribalism of immigrant Muslim populations in the West. It is Romanticism. Multiculturalism and moral relativism promote an idealization of tribal life and have shown themselves to be impervious to empirical criticism. My reasons for reproaching today’s Western leaders are different from Harris’s. I see them squandering a great and vital opportunity to compete with the agents of radical Islam for the minds of Muslims, especially those within their borders. But to do so, they must allow reason to prevail over sentiment.

To argue, as Harris seems to do, that children born and bred in superstitious cultures that value fanaticism and create phalanxes of alpha males are doomed — and will doom others — to an existence governed by the law of the jungle is to ignore the lessons of the West’s own past. There have been periods when the West was less than noble, when it engaged in crusades, inquisitions, witch-burnings and genocides. Many of the Westerners who were born into the law of the jungle, with its alpha males and submissive females, have since become acquainted with the culture of reason and have adopted it. They are even — and this should surely relieve Harris of some of his pessimism — willing to die for it, perhaps with the same fanaticism as the jihadists willing to die for their tribe. In short, while this conflict is undeniably a deadly struggle between cultures, it is individuals who will determine the outcome.


Ayaan Hirsi Ali, a resident fellow at the American Enterprise Institute in Washington, is the author of “Infidel.”

The unacceptable

14/01/2008
FERNANDO SAVATER

El País


A quarter of a century ago, among the practical cases that abounded in applied-ethics textbooks (above all Anglo-Saxon ones), the one about the terrorist never failed to appear: he has planted a bomb in one of the city's thirty schools, set to go off in fifteen minutes. Should the police torture him so that he confesses which school is threatened and the children can be saved? I always answered that I, put in such a fix, would probably have disembowelled the criminal with my own hands to get the truth out of him (and then, warming to the task, the inquisitor who had put the wretched question to me). But then, immediately afterwards, I would have turned myself in to the judge and gone proudly to prison to serve the sentence I deserved. What I was in no way prepared to accept was that the law punishing torture as a serious crime should be abolished or qualified with a 'depending on the circumstances', since then justifications for torturing could always be found. And never, never, never can torture be justifiable or legal.

Under no circumstances can torture be justifiable or legal

It cannot be accepted that accusing others of torture, or committing it, should be a matter of strategy


The trouble is that this abominable practice, like other similar ones, is not only cruel or repellent but frequently extremely useful... at least in the short term. And of course, when utility enters the picture, morality (poor thing!) struggles to make itself heard. To repudiate the death penalty, some find no better argument than its failure to deter murder. Who would dare oppose it if it really were effective in eradicating crime! Or consider most condemnations of terrorist violence: it is said to be 'blind', to 'serve no purpose', or to 'contribute nothing to the liberation of the Basque people (or whichever people it may be)'. The implication seems to be that if it yielded results it would no longer be so easy to reject. Hence the interest in presenting terrorists as mere murderous madmen, rather in the style of the chainsaw man in The Texas Chain Saw Massacre. That way it is easier for sensible people, who would never do anything of the sort (at least not gratuitously), to repudiate their behaviour. Sometimes this attitude leads to well-intentioned misunderstandings: the Basque socialists have succeeded in having the mention of the 'political motives' of terrorism removed from the Basque Autonomous Community's education-for-peace plan, since for them such violence is 'terrorism, full stop'. As if alluding to ETA's political motivation (plainly evident, and something that makes its crimes all the graver in a democracy) could excuse it even a little by the instrumental route...

When it is not the mere venting of brutal or sadistic instincts, torture can also achieve estimable results: perhaps it saves some innocent lives, uncovers conspiracies or secures the conviction of especially hardened murderers. Very well, so what? Does it therefore offend any less those who value human dignity, and the basic decency that must serve as the moral pedestal of a democratic society? Is what is gained in the short term worth more than what is lost for ever? Anyone wishing to know the result of excusing certain practices in the name of lofty motives need only read Vasili Grossman's splendid novel Life and Fate, today unexpectedly and happily in fashion in our country.

Naturally, I am writing in the wake of the alarm caused by the injuries of the recently arrested ETA member Igor Portu. Some of us need no reminding that the Guardia Civil's presumption of innocence must be respected. We remember the time when they were among the few who stood between the ETA mafia and a Basque society cowed by its threats. The PNV gathered the nuts from the shaken tree (it still feeds on them), the country's leftists fought against capital and looked kindly on the abertzales for their anti-system potential, the rest went about their business, pitying us from time to time: only the Guardia Civil and very few others defended us. When it was so easy to stand aside or to fail, when so many of us failed, they did their duty. And they remain at their posts, so that the debt some of us feel as our own keeps growing. They deserve the presumption of innocence due to any citizen, but with a luxury supplement, no doubt. Nor is this a matter of falling into angelism and supposing that ETA members can be arrested through dialogue, as the ineffable regional minister Azkarraga seems to believe when he complains that they were 'arrested by force': apparently he would have them sent a summons to report to the police station as soon as possible, arms and explosives in hand, so that the appropriate report can be drawn up.

And yet, with all due respect and without forgetting these considerations, when there are well-founded suspicions of ill-treatment (as in this case, given the medical reports, contradictory testimony and so on) there is no alternative but to investigate thoroughly and without subterfuge. Torture, which has demonstrably existed in the Basque Country, is today by no means a generalised or habitual practice, but it is not impossible that it occurs in isolated cases. And it is as unacceptable as it has always been. We know that ETA's manuals instruct its arrested militants always to claim to have been tortured, which suggests there will be many false complaints of this kind. But that does not mean they all are, and it is disturbing that practically never, at least in the last decade, have cases been uncovered and those responsible punished. This situation certainly does not leave me easy: on one side, some say that all of them are tortured, always; on the other, the rest assure us that no one is ever tortured. Each side believes its own, and everyone rests easy. Long live the good conscience... of the sectarian!

Down this road we have arrived at an atrocious trivialisation of torture, which for some is just another banner against the state and for the rest an unreal ghost or, worse still, something secretly excusable. To my mind, taking the fight against this practice seriously means investigating every complaint with the utmost rigour: if it proves false, the slanderous complainants must be punished under criminal law; if it has substance, the responsibilities of the guilty officials must be rigorously established, for the good of the force to which they belong and of the rest of society. Anything but glossing the matter over and shelving the alleged crime. It cannot be accepted that accusing others of torture, or committing it, should be a mere matter of strategy.

Yes, we know that ETA's accomplices seize on such occasions for their sinister propaganda in favour of the gang's criminal designs. They are not interested in eradicating torture but in favouring their own. More than a quarter of a century ago I published in this very newspaper an article entitled 'Los rentistas de la tortura' ('The rentiers of torture'), which today, with few changes, would still be to the point. In it I distinguished between those who denounce ill-treatment out of ideological affinity with those who suffer it, and those of us who reject it for what it is and what it means, yet without any political sympathy for its victims. I illustrated this attitude, a sign of the times, by saying that if the tortured were Tejero or Miláns del Bosch our indignation would be no less. The next day I found the article pinned to the door of my office at the Faculty in Zorroaga, underlined in red and annotated with 'fascist!'. Well, as you see, I remain unrepentant, before those critics and before those who warn me not to 'play into ETA's hands'. Back then I would remind myself, a devotion of my lost youth, of Kipling's lines from his much-thumbed poem 'If': 'If you can bear to hear the truth you've spoken / Twisted by knaves to make a trap for fools...'. Even so, let us not be silent: we must not be silent.

Fernando Savater is Professor of Philosophy at the Universidad Complutense de Madrid.

http://www.elpais.com/articulo/opinion/inaceptable/elpepuopi/20080114elpepiopi_4/Tes

The Moral Instinct


By STEVEN PINKER
The New York Times. Published: January 13, 2008


Which of the following people would you say is the most admirable: Mother Teresa, Bill Gates or Norman Borlaug? And which do you think is the least admirable? For most people, it’s an easy question. Mother Teresa, famous for ministering to the poor in Calcutta, has been beatified by the Vatican, awarded the Nobel Peace Prize and ranked in an American poll as the most admired person of the 20th century. Bill Gates, infamous for giving us the Microsoft dancing paper clip and the blue screen of death, has been decapitated in effigy in “I Hate Gates” Web sites and hit with a pie in the face. As for Norman Borlaug . . . who the heck is Norman Borlaug?

Yet a deeper look might lead you to rethink your answers. Borlaug, father of the “Green Revolution” that used agricultural science to reduce world hunger, has been credited with saving a billion lives, more than anyone else in history. Gates, in deciding what to do with his fortune, crunched the numbers and determined that he could alleviate the most misery by fighting everyday scourges in the developing world like malaria, diarrhea and parasites. Mother Teresa, for her part, extolled the virtue of suffering and ran her well-financed missions accordingly: their sick patrons were offered plenty of prayer but harsh conditions, few analgesics and dangerously primitive medical care.

It’s not hard to see why the moral reputations of this trio should be so out of line with the good they have done. Mother Teresa was the very embodiment of saintliness: white-clad, sad-eyed, ascetic and often photographed with the wretched of the earth. Gates is a nerd’s nerd and the world’s richest man, as likely to enter heaven as the proverbial camel squeezing through the needle’s eye. And Borlaug, now 93, is an agronomist who has spent his life in labs and nonprofits, seldom walking onto the media stage, and hence into our consciousness, at all.

I doubt these examples will persuade anyone to favor Bill Gates over Mother Teresa for sainthood. But they show that our heads can be turned by an aura of sanctity, distracting us from a more objective reckoning of the actions that make people suffer or flourish. It seems we may all be vulnerable to moral illusions, the ethical equivalent of the bending lines that trick the eye on cereal boxes and in psychology textbooks. Illusions are a favorite tool of perception scientists for exposing the workings of the five senses, and of philosophers for shaking people out of the naïve belief that our minds give us a transparent window onto the world (since if our eyes can be fooled by an illusion, why should we trust them at other times?). Today, a new field is using illusions to unmask a sixth sense, the moral sense. Moral intuitions are being drawn out of people in the lab, on Web sites and in brain scanners, and are being explained with tools from game theory, neuroscience and evolutionary biology.

“Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them,” wrote Immanuel Kant, “the starry heavens above and the moral law within.” These days, the moral law within is being viewed with increasing awe, if not always admiration. The human moral sense turns out to be an organ of considerable complexity, with quirks that reflect its evolutionary history and its neurobiological foundations.

These quirks are bound to have implications for the human predicament. Morality is not just any old topic in psychology but close to our conception of the meaning of life. Moral goodness is what gives each of us the sense that we are worthy human beings. We seek it in our friends and mates, nurture it in our children, advance it in our politics and justify it with our religions. A disrespect for morality is blamed for everyday sins and history’s worst atrocities. To carry this weight, the concept of morality would have to be bigger than any of us and outside all of us.

So dissecting moral intuitions is no small matter. If morality is a mere trick of the brain, some may fear, our very grounds for being moral could be eroded. Yet as we shall see, the science of the moral sense can instead be seen as a way to strengthen those grounds, by clarifying what morality is and how it should steer our actions.

The Moralization Switch

The starting point for appreciating that there is a distinctive part of our psychology for morality is seeing how moral judgments differ from other kinds of opinions we have on how people ought to behave. Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).

The first hallmark of moralization is that the rules it invokes are felt to be universal. Prohibitions of rape and murder, for example, are felt not to be matters of local custom but to be universally and objectively warranted. One can easily say, “I don’t like brussels sprouts, but I don’t care if you eat them,” but no one would say, “I don’t like killing, but I don’t care if you murder someone.”

The other hallmark is that people feel that those who commit immoral acts deserve to be punished. Not only is it allowable to inflict pain on a person who has broken a moral rule; it is wrong not to, to “let them get away with it.” People are thus untroubled in inviting divine retribution or the power of the state to harm other people they deem immoral. Bertrand Russell wrote, “The infliction of cruelty with a good conscience is a delight to moralists — that is why they invented hell.”


(Page 2 of 8)

We all know what it feels like when the moralization switch flips inside us — the righteous glow, the burning dudgeon, the drive to recruit others to the cause. The psychologist Paul Rozin has studied the toggle switch by comparing two kinds of people who engage in the same behavior but with different switch settings. Health vegetarians avoid meat for practical reasons, like lowering cholesterol and avoiding toxins. Moral vegetarians avoid meat for ethical reasons: to avoid complicity in the suffering of animals. By investigating their feelings about meat-eating, Rozin showed that the moral motive sets off a cascade of opinions. Moral vegetarians are more likely to treat meat as a contaminant — they refuse, for example, to eat a bowl of soup into which a drop of beef broth has fallen. They are more likely to think that other people ought to be vegetarians, and are more likely to imbue their dietary habits with other virtues, like believing that meat avoidance makes people less aggressive and bestial.

Much of our recent social history, including the culture wars between liberals and conservatives, consists of the moralization or amoralization of particular kinds of behavior. Even when people agree that an outcome is desirable, they may disagree on whether it should be treated as a matter of preference and prudence or as a matter of sin and virtue. Rozin notes, for example, that smoking has lately been moralized. Until recently, it was understood that some people didn’t enjoy smoking or avoided it because it was hazardous to their health. But with the discovery of the harmful effects of secondhand smoke, smoking is now treated as immoral. Smokers are ostracized; images of people smoking are censored; and entities touched by smoke are felt to be contaminated (so hotels have not only nonsmoking rooms but nonsmoking floors). The desire for retribution has been visited on tobacco companies, who have been slapped with staggering “punitive damages.”

At the same time, many behaviors have been amoralized, switched from moral failings to lifestyle choices. They include divorce, illegitimacy, being a working mother, marijuana use and homosexuality. Many afflictions have been reassigned from payback for bad choices to unlucky misfortunes. There used to be people called “bums” and “tramps”; today they are “homeless.” Drug addiction is a “disease”; syphilis was rebranded from the price of wanton behavior to a “sexually transmitted disease” and more recently a “sexually transmitted infection.”

This wave of amoralization has led the cultural right to lament that morality itself is under assault, as we see in the group that anointed itself the Moral Majority. In fact there seems to be a Law of Conservation of Moralization, so that as old behaviors are taken out of the moralized column, new ones are added to it. Dozens of things that past generations treated as practical matters are now ethical battlegrounds, including disposable diapers, I.Q. tests, poultry farms, Barbie dolls and research on breast cancer. Food alone has become a minefield, with critics sermonizing about the size of sodas, the chemistry of fat, the freedom of chickens, the price of coffee beans, the species of fish and now the distance the food has traveled from farm to plate.

Many of these moralizations, like the assault on smoking, may be understood as practical tactics to reduce some recently identified harm. But whether an activity flips our mental switches to the “moral” setting isn’t just a matter of how much harm it does. We don’t show contempt to the man who fails to change the batteries in his smoke alarms or takes his family on a driving vacation, both of which multiply the risk they will die in an accident. Driving a gas-guzzling Hummer is reprehensible, but driving a gas-guzzling old Volvo is not; eating a Big Mac is unconscionable, but not imported cheese or crème brûlée. The reason for these double standards is obvious: people tend to align their moralization with their own lifestyles.

Reasoning and Rationalizing

It’s not just the content of our moral judgments that is often questionable, but the way we arrive at them. We like to think that when we have a conviction, there are good reasons that drove us to adopt it. That is why an older approach to moral psychology, led by Jean Piaget and Lawrence Kohlberg, tried to document the lines of reasoning that guided people to moral conclusions. But consider these situations, originally devised by the psychologist Jonathan Haidt:

Julie is traveling in France on summer vacation from college with her brother Mark. One night they decide that it would be interesting and fun if they tried making love. Julie was already taking birth-control pills, but Mark uses a condom, too, just to be safe. They both enjoy the sex but decide not to do it again. They keep the night as a special secret, which makes them feel closer to each other. What do you think about that — was it O.K. for them to make love?

A woman is cleaning out her closet and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.

A family’s dog is killed by a car in front of their house. They heard that dog meat was delicious, so they cut up the dog’s body and cook it and eat it for dinner.

Most people immediately declare that these acts are wrong and then grope to justify why they are wrong. It’s not so easy. In the case of Julie and Mark, people raise the possibility of children with birth defects, but they are reminded that the couple were diligent about contraception. They suggest that the siblings will be emotionally hurt, but the story makes it clear that they weren’t. They submit that the act would offend the community, but then recall that it was kept a secret. Eventually many people admit, “I don’t know, I can’t explain it, I just know it’s wrong.” People don’t generally engage in moral reasoning, Haidt argues, but moral rationalization: they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.

The gap between people’s convictions and their justifications is also on display in the favorite new sandbox for moral psychologists, a thought experiment devised by the philosophers Philippa Foot and Judith Jarvis Thomson called the Trolley Problem. On your morning walk, you see a trolley car hurtling down the track, the conductor slumped over the controls. In the path of the trolley are five men working on the track, oblivious to the danger. You are standing at a fork in the track and can pull a lever that will divert the trolley onto a spur, saving the five men. Unfortunately, the trolley would then run over a single worker who is laboring on the spur. Is it permissible to throw the switch, killing one man to save five? Almost everyone says “yes.”

Consider now a different scene. You are on a bridge overlooking the tracks and have spotted the runaway trolley bearing down on the five workers. Now the only way to stop the trolley is to throw a heavy object in its path. And the only heavy object within reach is a fat man standing next to you. Should you throw the man off the bridge? Both dilemmas present you with the option of sacrificing one life to save five, and so, by the utilitarian standard of what would result in the greatest good for the greatest number, the two dilemmas are morally equivalent. But most people don’t see it that way: though they would pull the switch in the first dilemma, they would not heave the fat man in the second. When pressed for a reason, they can’t come up with anything coherent, though moral philosophers haven’t had an easy time coming up with a relevant difference, either.


(Page 3 of 8)

When psychologists say “most people” they usually mean “most of the two dozen sophomores who filled out a questionnaire for beer money.” But in this case it means most of the 200,000 people from a hundred countries who shared their intuitions on a Web-based experiment conducted by the psychologists Fiery Cushman and Liane Young and the biologist Marc Hauser. A difference between the acceptability of switch-pulling and man-heaving, and an inability to justify the choice, was found in respondents from Europe, Asia and North and South America; among men and women, blacks and whites, teenagers and octogenarians, Hindus, Muslims, Buddhists, Christians, Jews and atheists; people with elementary-school educations and people with Ph.D.’s.

Joshua Greene, a philosopher and cognitive neuroscientist, suggests that evolution equipped people with a revulsion to manhandling an innocent person. This instinct, he suggests, tends to overwhelm any utilitarian calculus that would tot up the lives saved and lost. The impulse against roughing up a fellow human would explain other examples in which people abjure killing one to save many, like euthanizing a hospital patient to harvest his organs and save five dying patients in need of transplants, or throwing someone out of a crowded lifeboat to keep it afloat.

By itself this would be no more than a plausible story, but Greene teamed up with the cognitive neuroscientist Jonathan Cohen and several Princeton colleagues to peer into people’s brains using functional M.R.I. They sought to find signs of a conflict between brain areas associated with emotion (the ones that recoil from harming someone) and areas dedicated to rational analysis (the ones that calculate lives lost and saved).

When people pondered the dilemmas that required killing someone with their bare hands, several networks in their brains lighted up. One, which included the medial (inward-facing) parts of the frontal lobes, has been implicated in emotions about other people. A second, the dorsolateral (upper and outer-facing) surface of the frontal lobes, has been implicated in ongoing mental computation (including nonmoral reasoning, like deciding whether to get somewhere by plane or train). And a third region, the anterior cingulate cortex (an evolutionarily ancient strip lying at the base of the inner surface of each cerebral hemisphere), registers a conflict between an urge coming from one part of the brain and an advisory coming from another.

But when the people were pondering a hands-off dilemma, like switching the trolley onto the spur with the single worker, the brain reacted differently: only the area involved in rational calculation stood out. Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge. Together, the findings corroborate Greene’s theory that our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.

A Universal Morality?

The findings of trolleyology — complex, instinctive and worldwide moral intuitions — led Hauser and John Mikhail (a legal scholar) to revive an analogy from the philosopher John Rawls between the moral sense and language. According to Noam Chomsky, we are born with a “universal grammar” that forces us to analyze speech in terms of its grammatical structure, with no conscious awareness of the rules in play. By analogy, we are born with a universal moral grammar that forces us to analyze human action in terms of its moral structure, with just as little awareness.

The idea that the moral sense is an innate part of human nature is not far-fetched. A list of human universals collected by the anthropologist Donald E. Brown includes many moral concepts and emotions, including a distinction between right and wrong; empathy; fairness; admiration of generosity; rights and obligations; proscription of murder, rape and other forms of violence; redress of wrongs; sanctions for wrongs against the community; shame; and taboos.

The stirrings of morality emerge early in childhood. Toddlers spontaneously offer toys and help to others and try to comfort people they see in distress. And according to the psychologists Elliot Turiel and Judith Smetana, preschoolers have an inkling of the difference between societal conventions and moral principles. Four-year-olds say that it is not O.K. to wear pajamas to school (a convention) and also not O.K. to hit a little girl for no reason (a moral principle). But when asked whether these actions would be O.K. if the teacher allowed them, most of the children said that wearing pajamas would now be fine but that hitting a little girl would still not be.


(Page 4 of 8)

Though no one has identified genes for morality, there is circumstantial evidence they exist. The character traits called “conscientiousness” and “agreeableness” are far more correlated in identical twins separated at birth (who share their genes but not their environment) than in adoptive siblings raised together (who share their environment but not their genes). People given diagnoses of “antisocial personality disorder” or “psychopathy” show signs of morality blindness from the time they are children. They bully younger children, torture animals, habitually lie and seem incapable of empathy or remorse, often despite normal family backgrounds. Some of these children grow up into the monsters who bilk elderly people out of their savings, rape a succession of women or shoot convenience-store clerks lying on the floor during a robbery.

Though psychopathy probably comes from a genetic predisposition, a milder version can be caused by damage to frontal regions of the brain (including the areas that inhibit intact people from throwing the hypothetical fat man off the bridge). The neuroscientists Hanna and Antonio Damasio and their colleagues found that some children who sustain severe injuries to their frontal lobes can grow up into callous and irresponsible adults, despite normal intelligence. They lie, steal, ignore punishment, endanger their own children and can’t think through even the simplest moral dilemmas, like what two people should do if they disagreed on which TV channel to watch or whether a man ought to steal a drug to save his dying wife.

The moral sense, then, may be rooted in the design of the normal human brain. Yet for all the awe that may fill our minds when we reflect on an innate moral law within, the idea is at best incomplete. Consider this moral dilemma: A runaway trolley is about to kill a schoolteacher. You can divert the trolley onto a sidetrack, but the trolley would trip a switch sending a signal to a class of 6-year-olds, giving them permission to name a teddy bear Muhammad. Is it permissible to pull the lever?

This is no joke. Last month a British woman teaching in a private school in Sudan allowed her class to name a teddy bear after the most popular boy in the class, who bore the name of the founder of Islam. She was jailed for blasphemy and threatened with a public flogging, while a mob outside the prison demanded her death. To the protesters, the woman’s life clearly had less value than maximizing the dignity of their religion, and their judgment on whether it is right to divert the hypothetical trolley would have differed from ours. Whatever grammar guides people’s moral judgments can’t be all that universal. Anyone who stayed awake through Anthropology 101 can offer many other examples.

Of course, languages vary, too. In Chomsky’s theory, languages conform to an abstract blueprint, like having phrases built out of verbs and objects, while the details vary, like whether the verb or the object comes first. Could we be wired with an abstract spec sheet that embraces all the strange ideas that people in different cultures moralize?

The Varieties of Moral Experience

When anthropologists like Richard Shweder and Alan Fiske survey moral concerns across the globe, they find that a few themes keep popping up from amid the diversity. People everywhere, at least in some circumstances and with certain other folks in mind, think it’s bad to harm others and good to help them. They have a sense of fairness: that one should reciprocate favors, reward benefactors and punish cheaters. They value loyalty to a group, sharing and solidarity among its members and conformity to its norms. They believe that it is right to defer to legitimate authorities and to respect people with high status. And they exalt purity, cleanliness and sanctity while loathing defilement, contamination and carnality.

The exact number of themes depends on whether you’re a lumper or a splitter, but Haidt counts five — harm, fairness, community (or group loyalty), authority and purity — and suggests that they are the primary colors of our moral sense. Not only do they keep reappearing in cross-cultural surveys, but each one tugs on the moral intuitions of people in our own culture. Haidt asks us to consider how much money someone would have to pay us to do hypothetical acts like the following:

Stick a pin into your palm.

Stick a pin into the palm of a child you don’t know. (Harm.)

Accept a wide-screen TV from a friend who received it at no charge because of a computer error.

Accept a wide-screen TV from a friend who received it from a thief who had stolen it from a wealthy family. (Fairness.)

Say something bad about your nation (which you don’t believe) on a talk-radio show in your nation.

Say something bad about your nation (which you don’t believe) on a talk-radio show in a foreign nation. (Community.)

Slap a friend in the face, with his permission, as part of a comedy skit.

Slap your minister in the face, with his permission, as part of a comedy skit. (Authority.)

Attend a performance-art piece in which the actors act like idiots for 30 minutes, including flubbing simple problems and falling down on stage.

Attend a performance-art piece in which the actors act like animals for 30 minutes, including crawling around naked and urinating on stage. (Purity.)

In each pair, the second action feels far more repugnant. Most of the moral illusions we have visited come from an unwarranted intrusion of one of the moral spheres into our judgments. A violation of community led people to frown on using an old flag to clean a bathroom. Violations of purity repelled the people who judged the morality of consensual incest and prevented the moral vegetarians and nonsmokers from tolerating the slightest trace of a vile contaminant. At the other end of the scale, displays of extreme purity lead people to venerate religious leaders who dress in white and affect an aura of chastity and asceticism.

The Genealogy of Morals

(Page 5 of 8)

The five spheres are good candidates for a periodic table of the moral sense not only because they are ubiquitous but also because they appear to have deep evolutionary roots. The impulse to avoid harm, which gives trolley ponderers the willies when they consider throwing a man off a bridge, can also be found in rhesus monkeys, who go hungry rather than pull a chain that delivers food to them and a shock to another monkey. Respect for authority is clearly related to the pecking orders of dominance and appeasement that are widespread in the animal kingdom. The purity-defilement contrast taps the emotion of disgust that is triggered by potential disease vectors like bodily effluvia, decaying flesh and unconventional forms of meat, and by risky sexual practices like incest.

The other two moralized spheres match up with the classic examples of how altruism can evolve that were worked out by sociobiologists in the 1960s and 1970s and made famous by Richard Dawkins in his book “The Selfish Gene.” Fairness is very close to what scientists call reciprocal altruism, where a willingness to be nice to others can evolve as long as the favor helps the recipient more than it costs the giver and the recipient returns the favor when fortunes reverse. The analysis makes it sound as if reciprocal altruism comes out of a robotlike calculation, but in fact Robert Trivers, the biologist who devised the theory, argued that it is implemented in the brain as a suite of moral emotions. Sympathy prompts a person to offer the first favor, particularly to someone in need for whom it would go the furthest. Anger protects a person against cheaters who accept a favor without reciprocating, by impelling him to punish the ingrate or sever the relationship. Gratitude impels a beneficiary to reward those who helped him in the past. Guilt prompts a cheater in danger of being found out to repair the relationship by redressing the misdeed and advertising that he will behave better in the future (consistent with Mencken’s definition of conscience as “the inner voice which warns us that someone might be looking”). Many experiments on who helps whom, who likes whom, who punishes whom and who feels guilty about what have confirmed these predictions.

Community, the very different emotion that prompts people to share and sacrifice without an expectation of payback, may be rooted in nepotistic altruism, the empathy and solidarity we feel toward our relatives (and which evolved because any gene that pushed an organism to aid a relative would have helped copies of itself sitting inside that relative). In humans, of course, communal feelings can be lavished on nonrelatives as well. Sometimes it pays people (in an evolutionary sense) to love their companions because their interests are yoked, like spouses with common children, in-laws with common relatives, friends with common tastes or allies with common enemies. And sometimes it doesn’t pay them at all, but their kinship-detectors have been tricked into treating their groupmates as if they were relatives by tactics like kinship metaphors (blood brothers, fraternities, the fatherland), origin myths, communal meals and other bonding rituals.

Juggling the Spheres

All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life — sex, government, commerce, religion, diet and so on — depends on the culture. Many of the flabbergasting practices in faraway places become more intelligible when you recognize that the same moralizing impulse that Western elites channel toward violations of harm and fairness (our moral obsessions) is channeled elsewhere to violations in the other spheres. Think of the Japanese fear of nonconformity (community), the holy ablutions and dietary restrictions of Hindus and Orthodox Jews (purity), the outrage at insulting the Prophet among Muslims (authority). In the West, we believe that in business and government, fairness should trump community and try to root out nepotism and cronyism. In other parts of the world this is incomprehensible — what heartless creep would favor a perfect stranger over his own brother?

The ranking and placement of moral spheres also divides the cultures of liberals and conservatives in the United States. Many bones of contention, like homosexuality, atheism and one-parent families from the right, or racial imbalances, sweatshops and executive pay from the left, reflect different weightings of the spheres. In a large Web survey, Haidt found that liberals put a lopsided moral weight on harm and fairness while playing down group loyalty, authority and purity. Conservatives instead place a moderately high weight on all five. It’s not surprising that each side thinks it is driven by lofty ethical values and that the other side is base and unprincipled.

Reassigning an activity to a different sphere, or taking it out of the moral spheres altogether, isn’t easy. People think that a behavior belongs in its sphere as a matter of sacred necessity and that the very act of questioning an assignment is a moral outrage. The psychologist Philip Tetlock has shown that the mentality of taboo — a conviction that some thoughts are sinful to think — is not just a superstition of Polynesians but a mind-set that can easily be triggered in college-educated Americans. Just ask them to think about applying the sphere of reciprocity to relationships customarily governed by community or authority. When Tetlock asked subjects for their opinions on whether adoption agencies should place children with the couples willing to pay the most, whether people should have the right to sell their organs and whether they should be able to buy their way out of jury duty, the subjects not only disagreed but felt personally insulted and were outraged that anyone would raise the question.

The institutions of modernity often question and experiment with the way activities are assigned to moral spheres. Market economies tend to put everything up for sale. Science amoralizes the world by seeking to understand phenomena rather than pass judgment on them. Secular philosophy is in the business of scrutinizing all beliefs, including those entrenched by authority and tradition. It’s not surprising that these institutions are often seen to be morally corrosive.

Is Nothing Sacred?


(Page 6 of 8)

And “morally corrosive” is exactly the term that some critics would apply to the new science of the moral sense. The attempt to dissect our moral intuitions can look like an attempt to debunk them. Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes. The explanation of how different cultures appeal to different spheres could lead to a spineless relativism, in which we would never have grounds to criticize the practice of another culture, no matter how barbaric, because “we have our kind of morality and they have theirs.” And the whole enterprise seems to be dragging us to an amoral nihilism, in which morality itself would be demoted from a transcendent principle to a figment of our neural circuitry.

In reality, none of these fears are warranted, and it’s important to see why not. The first misunderstanding involves the logic of evolutionary explanations. Evolutionary biologists sometimes anthropomorphize DNA for the same reason that science teachers find it useful to have their students imagine the world from the viewpoint of a molecule or a beam of light. One shortcut to understanding the theory of selection without working through the math is to imagine that the genes are little agents that try to make copies of themselves.

Unfortunately, the meme of the selfish gene escaped from popular biology books and mutated into the idea that organisms (including people) are ruthlessly self-serving. And this doesn’t follow. Genes are not a reservoir of our dark unconscious wishes. “Selfish” genes are perfectly compatible with selfless organisms, because a gene’s metaphorical goal of selfishly replicating itself can be implemented by wiring up the brain of the organism to do unselfish things, like being nice to relatives or doing good deeds for needy strangers. When a mother stays up all night comforting a sick child, the genes that endowed her with that tenderness were “selfish” in a metaphorical sense, but by no stretch of the imagination is she being selfish.

Nor does reciprocal altruism — the evolutionary rationale behind fairness — imply that people do good deeds in the cynical expectation of repayment down the line. We all know of unrequited good deeds, like tipping a waitress in a city you will never visit again and falling on a grenade to save platoonmates. These bursts of goodness are not as anomalous to a biologist as they might appear.

In his classic 1971 article, Trivers, the biologist, showed how natural selection could push in the direction of true selflessness. The emergence of tit-for-tat reciprocity, which lets organisms trade favors without being cheated, is just a first step. A favor-giver not only has to avoid blatant cheaters (those who would accept a favor but not return it) but also prefer generous reciprocators (those who return the biggest favor they can afford) over stingy ones (those who return the smallest favor they can get away with). Since it’s good to be chosen as a recipient of favors, a competition arises to be the most generous partner around. More accurately, a competition arises to appear to be the most generous partner around, since the favor-giver can’t literally read minds or see into the future. A reputation for fairness and generosity becomes an asset.

Now this just sets up a competition for potential beneficiaries to inflate their reputations without making the sacrifices to back them up. But it also pressures the favor-giver to develop ever-more-sensitive radar to distinguish the genuinely generous partners from the hypocrites. This arms race will eventually reach a logical conclusion. The most effective way to seem generous and fair, under harsh scrutiny, is to be generous and fair. In the long run, then, reputation can be secured only by commitment. At least some agents evolve to be genuinely high-minded and self-sacrificing — they are moral not because of what it brings them but because that’s the kind of people they are.

Of course, a theory that predicted that everyone always sacrificed themselves for another’s good would be as preposterous as a theory that predicted that no one ever did. Alongside the niches for saints there are niches for more grudging reciprocators, who attract fewer and poorer partners but don’t make the sacrifices necessary for a sterling reputation. And both may coexist with outright cheaters, who exploit the unwary in one-shot encounters. An ecosystem of niches, each with a distinct strategy, can evolve when the payoff of each strategy depends on how many players are playing the other strategies. The human social environment does have its share of generous, grudging and crooked characters, and the genetic variation in personality seems to bear the fingerprints of this evolutionary process.

Is Morality a Figment?

So a biological understanding of the moral sense does not entail that people are calculating maximizers of their genes or self-interest. But where does it leave the concept of morality itself?

Here is the worry. The scientific outlook has taught us that some parts of our subjective experience are products of our biological makeup and have no objective counterpart in the world. The qualitative difference between red and green, the tastiness of fruit and foulness of carrion, the scariness of heights and prettiness of flowers are design features of our common nervous system, and if our species had evolved in a different ecosystem or if we were missing a few genes, our reactions could go the other way. Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?


Putting God in charge of morality is one way to solve the problem, of course, but Plato made short work of it 2,400 years ago. Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist? And if, on the other hand, God was forced by moral reasons to issue some dictates and not others — if a command to torture a child was never an option — then why not appeal to those reasons directly?

This throws us back to wondering where those reasons could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.

Moral realism, as this idea is called, is too rich for many philosophers’ blood. Yet a diluted version of the idea — if not a list of cosmically inscribed Thou-Shalts, then at least a few If-Thens — is not crazy. Two features of reality point any rational, self-preserving social agent in a moral direction. And they could provide a benchmark for determining when the judgments of our moral sense are aligned with morality itself.

One is the prevalence of nonzero-sum games. In many arenas of life, two parties are objectively better off if they both act in a nonselfish way than if each of them acts selfishly. You and I are both better off if we share our surpluses, rescue each other’s children in danger and refrain from shooting at each other, compared with hoarding our surpluses while they rot, letting the other’s child drown while we file our nails or feuding like the Hatfields and McCoys. Granted, I might be a bit better off if I acted selfishly at your expense and you played the sucker, but the same is true for you with me, so if each of us tried for these advantages, we’d both end up worse off. Any neutral observer, and you and I if we could talk it over rationally, would have to conclude that the state we should aim for is the one in which we both are unselfish. These spreadsheet projections are not quirks of brain wiring, nor are they dictated by a supernatural power; they are in the nature of things.
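The "spreadsheet projection" in the paragraph above can be made concrete with a toy payoff table. The numbers are illustrative assumptions, not figures from the essay; what matters is their ordering: each party is tempted to exploit the other, yet both exploiting leaves both worse off than both sharing.

```python
# Toy payoff table for the nonzero-sum situation described in the text.
# Entries are (my payoff, your payoff); the specific numbers are
# illustrative assumptions, only their ordering matters.

payoffs = {
    ("share", "share"): (3, 3),  # both share surpluses: both well off
    ("share", "hoard"): (0, 5),  # I play the sucker, you exploit me
    ("hoard", "share"): (5, 0),  # I exploit you at your expense
    ("hoard", "hoard"): (1, 1),  # both hoard while the surpluses rot
}

mutual_good = payoffs[("share", "share")]
mutual_selfish = payoffs[("hoard", "hoard")]

# Each player is individually tempted to hoard (5 > 3)...
assert payoffs[("hoard", "share")][0] > mutual_good[0]
# ...but if both give in to the temptation, both end up worse off.
assert all(g > s for g, s in zip(mutual_good, mutual_selfish))

print(mutual_good, mutual_selfish)  # (3, 3) (1, 1)
```

Any neutral observer reading this table reaches the same conclusion the text draws: the state both parties should aim for is mutual unselfishness, and that conclusion follows from the structure of the payoffs, not from anyone's brain wiring.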

The other external support for morality is a feature of rationality itself: that it cannot depend on the egocentric vantage point of the reasoner. If I appeal to you to do anything that affects me — to get off my foot, or tell me the time or not run me over with your car — then I can’t do it in a way that privileges my interests over yours (say, retaining my right to run you over with my car) if I want you to take me seriously. Unless I am Galactic Overlord, I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.

Not coincidentally, the core of this idea — the interchangeability of perspectives — keeps reappearing in history’s best-thought-through moral philosophies, including the Golden Rule (itself discovered many times); Spinoza’s Viewpoint of Eternity; the Social Contract of Hobbes, Rousseau and Locke; Kant’s Categorical Imperative; and Rawls’s Veil of Ignorance. It also underlies Peter Singer’s theory of the Expanding Circle — the optimistic proposal that our moral sense, though shaped by evolution to overvalue self, kin and clan, can propel us on a path of moral progress, as our reasoning forces us to generalize it to larger and larger circles of sentient beings.

Doing Better by Knowing Ourselves

Morality, then, is still something larger than our inherited moral sense, and the new science of the moral sense does not make moral reasoning and conviction obsolete. At the same time, its implications for our moral universe are profound.

At the very least, the science tells us that even when our adversaries’ agenda is most baffling, they may not be amoral psychopaths but in the throes of a moral mind-set that appears to them to be every bit as mandatory and universal as ours does to us. Of course, some adversaries really are psychopaths, and others are so poisoned by a punitive moralization that they are beyond the pale of reason. (The actor Will Smith had many historians on his side when he recently speculated to the press that Hitler thought he was acting morally.) But in any conflict in which a meeting of the minds is not completely hopeless, a recognition that the other guy is acting from moral rather than venal reasons can be a first patch of common ground. One side can acknowledge the other’s concern for community or stability or fairness or dignity, even while arguing that some other value should trump it in that instance. With affirmative action, for example, the opponents can be seen as arguing from a sense of fairness, not racism, and the defenders can be seen as acting from a concern with community, not bureaucratic power. Liberals can ratify conservatives’ concern with families while noting that gay marriage is perfectly consistent with that concern.

The science of the moral sense also alerts us to ways in which our psychological makeup can get in the way of our arriving at the most defensible moral conclusions. The moral sense, we are learning, is as vulnerable to illusions as the other senses. It is apt to confuse morality per se with purity, status and conformity. It tends to reframe practical problems as moral crusades and thus see their solution in punitive aggression. It imposes taboos that make certain ideas indiscussible. And it has the nasty habit of always putting the self on the side of the angels.

Though wise people have long reflected on how we can be blinded by our own sanctimony, our public discourse still fails to discount it appropriately. In the worst cases, the thoughtlessness of our brute intuitions can be celebrated as a virtue. In his influential essay “The Wisdom of Repugnance,” Leon Kass, former chair of the President’s Council on Bioethics, argued that we should disregard reason when it comes to cloning and other biomedical technologies and go with our gut: “We are repelled by the prospect of cloning human beings . . . because we intuit and feel, immediately and without argument, the violation of things that we rightfully hold dear. . . . In this age in which everything is held to be permissible so long as it is freely done . . . repugnance may be the only voice left that speaks up to defend the central core of our humanity. Shallow are the souls that have forgotten how to shudder.”

There are, of course, good reasons to regulate human cloning, but the shudder test is not one of them. People have shuddered at all kinds of morally irrelevant violations of purity in their culture: touching an untouchable, drinking from the same water fountain as a Negro, allowing Jewish blood to mix with Aryan blood, tolerating sodomy between consenting men. And if our ancestors’ repugnance had carried the day, we never would have had autopsies, vaccinations, blood transfusions, artificial insemination, organ transplants and in vitro fertilization, all of which were denounced as immoral when they were new.

There are many other issues for which we are too quick to hit the moralization button and look for villains rather than bug fixes. What should we do when a hospital patient is killed by a nurse who administers the wrong drug in a patient’s intravenous line? Should we make it easier to sue the hospital for damages? Or should we redesign the IV fittings so that it’s physically impossible to connect the wrong bottle to the line?

And nowhere is moralization more of a hazard than in our greatest global challenge. The threat of human-induced climate change has become the occasion for a moralistic revival meeting. In many discussions, the cause of climate change is overindulgence (too many S.U.V.’s) and defilement (sullying the atmosphere), and the solution is temperance (conservation) and expiation (buying carbon offset coupons). Yet the experts agree that these numbers don’t add up: even if every last American became conscientious about his or her carbon emissions, the effects on climate change would be trifling, if for no other reason than that two billion Indians and Chinese are unlikely to copy our born-again abstemiousness. Though voluntary conservation may be one wedge in an effective carbon-reduction pie, the other wedges will have to be morally boring, like a carbon tax and new energy technologies, or even taboo, like nuclear power and deliberate manipulation of the ocean and atmosphere. Our habit of moralizing problems, merging them with intuitions of purity and contamination, and resting content when we feel the right feelings, can get in the way of doing the right thing.

Far from debunking morality, then, the science of the moral sense can advance it, by allowing us to see through the illusions that evolution and culture have saddled us with and to focus on goals we can share and defend. As Anton Chekhov wrote, “Man will become better when you show him what he is like.”



Steven Pinker is the Johnstone Family Professor of Psychology at Harvard University and the author of “The Language Instinct” and “The Stuff of Thought: Language as a Window Into Human Nature.”



http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html?_r=1&oref=slogin

Thursday, January 10, 2008

A Non-Political Terrorism?

08.01.08
AURELIO ARTETA

El Correo


When it comes to delegitimizing terrorism, we already know that the nationalist parties are not going to cooperate with anything resembling enthusiasm. Ambiguity and lukewarmness will be their obligatory stance. The Socialist Group in the Basque Parliament was therefore entirely right when it recently denounced the shortcomings of the text of the 'Plan for Peace Education'. But that same group errs pitifully when it declares that it «cannot accept that terrorist violence be called 'violence of political motivation'» and that «we flatly refuse to allow terrorist violence to be considered political in character. We Socialists are not going to lend political legitimacy to ETA's terrorism. Terrorism is terrorism, period». We have made little progress in understanding our tragedy if all we can come up with now amounts to so much confusion and such a silly tautology...

If what is meant by this is that terrorism, whatever political cause it invokes, is always inhuman and unjustifiable, we agree. But could its violence really be defined without naming its political character and motivation? Terrorism is any public violence that, starting from certain ideological premises and thanks to the fear it instills in the population, seeks to extract some political achievement from the government. Recognizing these objective elements in our terrorism neither grants it any legitimacy nor diminishes one iota the intensity of its condemnation. On the contrary, it can strip it of that legitimacy even further and, above all, it allows us to understand it much better and to measure our responsibility in the face of it.

Something similar happens with those who still grow indignant on hearing our terrorist gang described as a political group, or the ETA members in prison as political prisoners. Of course they are, and plainly so. 'But they are merely criminals...'. Better to say political criminals, because that criminality does not alter their primarily political nature. On the contrary: what matters most to them are their goals and their justifications; what is secondary (though it is their distinctive and most infamous trait) are their instruments, that is, the deadly attacks. They are criminals for political reasons, and that, the public cause for which they go on killing, makes their crimes even more horrendous and themselves far more despicable. Why? Because that public cause is illegitimate and democratically objectionable, even if it were defended by peaceful means. The adjective 'political' that fits these murderers and their murders should therefore be understood not as an excuse (which is doubtless what their nationalist companions are after), but as an aggravating factor.

To begin with, that political character marks the specific difference between their offenses and common crimes. Do we still not perceive the unbridgeable differences between the crime of the spurned lover and the crime of the ETA terrorist? While the former is committed in the name and for the exclusive benefit of the criminal, the latter is carried out in our own name as Basques and with a public objective in view: coercing the Government into granting political secession. That is why the ideal of private crimes is secrecy, whereas what is proper to public ones is to demand maximum publicity. It never occurs to someone who kills to seize what belongs to another to invoke the public reasons that the terrorist, by contrast, wields in his justification. Nor does ordinary crime call on the help of the neighbors, or usually arouse anything but general revulsion; yet our criminals have counted for nearly forty years on the sympathy and collaboration of a sizable part of Basque society (and of a certain Spanish and European 'progressivism'). Active complicity from quite a few, passive and silent complicity from many more. Does its political character really not matter?

So, in contrast with private murder, the public or terrorist kind does not affect only those in our society who suffer it in the flesh (the primary victims and their family circle), but everyone. Those of us who do not side with the murderer are already his indirect victims, if only because we suffer his political effects. From this class of crimes, then, we have no right to excuse ourselves. They have been committed with a view to establishing a new political unit that already counts us among its future members. Insisting that terrorism has a political inspiration obliges the citizen to take a stand on that inspiration. That is, not only to repudiate it, but also to judge the justice of the political cause it serves, and the greater or lesser foundation of the legitimacy it claims. Of course, asking about the fairness of the ends, in addition to immediately condemning the terrorist means, has uncomfortable ramifications. So uncomfortable that we prefer to spare ourselves the questions so as not to incur the wrath of 'moderate' nationalism.

For if the very purpose underlying terrorism already seemed iniquitous in the eyes of public reason, given that it would entail splitting a society in two and subjecting one part to the other; or if it lacks any defensible democratic foundation, being grounded in ethnicist premises contrary to common citizenship..., then the gravity of the terrorist crime would be greater still, if that is possible. To the evil of the means one would then have to add the perversion of the premises that underpin them and of the goals toward which they are directed. The courts criminally convict these criminals for their crimes, but it falls to all of us to dare to judge and condemn as well the doctrine and objectives that inspire those crimes.

Setting that character aside would doubtless make the so-called 'peace process' much simpler, and likewise 'education for peace'. But at the price of denaturing Basque terrorism, ignoring the profound immorality of its pretensions and closing our eyes to the collective responsibility that falls to us for it. If the belief takes hold that the evil lies in nothing more than spilling blood, only a few would be guilty: the criminals and, at most, their immediate accomplices; everyone else, holy innocents. Sheer willingness to deceive and be deceived, which the PSE should not tolerate. That is, if it truly wants, as it says, the delegitimization of terrorism to be «ethical, social and political» all at once.


http://www.elcorreodigital.com/vizcaya/20080108/opinion/terrorismo-politico-aurelio-arteta-20080108.html

Pakistan, a Military Democracy

08.01.08
PEDRO BAÑOS BAJO

El Correo


Benazir Bhutto's violent death has added fuel to an already highly flammable Pakistan. No power can stay out of the geopolitical game currently being played in this complex corner of the planet, where to its historic position as a crossroads is now added a privileged place in the flow of energy resources. Pakistan is, moreover, the cradle of an increasingly virulent religious extremism, with potential access to nuclear weapons, which keeps all eyes fixed on every event that happens there. And to get right the decisions to be taken, one must reckon with an exceptional player: the Pakistani Armed Forces.

For some, Pakistan is a pseudo-democracy; for others, an outright military dictatorship; and some regard it as a democracy under (Army) tutelage. In theory, there are free elections and a party system, separation of powers exists, an effort has been made to distinguish the head of state from the head of government, and the bicameral Parliament is a reality. But the fact is that the military have exerted, since independence in 1947, a decisive influence on domestic politics. The justification lies in the perception of permanent threat that Pakistan has harbored since its birth, coming especially from India but also from Afghanistan because of its troubled border, which arbitrarily divides long-standing ethnic groups and tribes.

The elite of the military establishment has gradually taken control of much of the country's politics and economy, to the point where today no decision of consequence sees the light without the generals' approval. The Armed Forces undoubtedly constitute an economic empire, running an industrial, economic and financial operation of great magnitude. The value of their assets is estimated at more than 30 billion euros. Their network accounts for a third of the country's heavy industry, and they also own banks, insurance companies, real-estate firms, land, transport companies and even an airline.

The military oligarchy boasts an education that need envy nothing of the best Western armies. Their advanced training in other countries, together with their traditional membership in the most privileged classes (mostly Punjabis), gives them the education, culture and style of earlier eras. That training, as a rule, keeps religious extremism from taking root in their ranks. And their main intention is to hold on to their privileged position in a society that, by and large, accepts their strength as the guarantor of national stability and security, and within which they feel comfortable, being as feared as they are respected. Which means they will defend themselves like a cornered animal against any attempt to curtail their almost unlimited power.

The crucial dilemma facing the Army is being forced to fight religious extremists, the very people who not so long ago were trained, financed and supported by the military themselves. Most of Pakistan's population is Sunni Muslim (80%), influenced by the Hanafi school of jurisprudence, and until thirty years ago it had shown itself to be tolerant. It was precisely the support for the extremists opposing the Soviet presence in Afghanistan that fostered its radicalism, with clear Wahhabi tendencies of Saudi origin becoming ever more influential and its actions more virulent, especially among the Pashtun community, made up of 41 million people on both sides of the Pakistani-Afghan border.

One must understand the strategy with which Pakistan would defend itself against a hypothetical invasion of its territory by India. With the use of nuclear weapons ruled out, and facing the overwhelming superiority of India's army, Pakistan would react by withdrawing toward the border with Afghanistan and even crossing into its territory. From there it would launch a counteroffensive based on asymmetric actions, at which they are true masters, supported precisely by religious extremists ready for anything, including suicide attacks, to expel the infidel invader. This has a greater deterrent effect on India than the nuclear arsenal itself.

For most Pakistani Muslims, religious ties take precedence over political ones. This creates a delicate situation that could have serious repercussions if the Army continues to be forced, as has happened recently, to confront the civilian population, something that could turn against the Government itself and pave the way for another military man's rise to power. To forestall this hypothetical situation, Musharraf handed over command of the Armed Forces to General Kiyani, whose appointment was driven by two fundamental considerations: his supposed loyalty to the president and his background in the all-powerful intelligence service (ISI), of which he was director general.

This background guarantees the continuity of the Army's most traditional line, firmly backed by the omnipresent secret service, so Musharraf could consider himself safe from surprises. Though it should not be forgotten that he would not be the first president of Pakistan to be 'resigned' in expeditious fashion. Everything depends on how his policies affect military interests. To preserve its political supremacy, the Army will also play its geostrategic cards. In principle, its preference seems to lean toward its traditional ally, the United States. But should it feel pressured by demands to accelerate democratization, it is not implausible that it would take very seriously offers from other powers, even from the Shanghai Cooperation Organisation itself (led by China and Russia), which many already call the 'anti-NATO'. Powers that would be delighted to see the United States' prospects diminished in this delicate part of the world.


http://www.elcorreodigital.com/vizcaya/20080108/opinion/pakistan-democracia-militar-pedro-20080108.html