Friday, October 01, 2004

The circulation of ideas (continued)

Continuing this column (continued here), a few new entries.

Something our editors don't know much about is said by Lindsay Waters (executive editor for the humanities at Harvard University Press) here.

A few interesting ideas about modern science, and how to remedy its shortcomings, appear in the Wired article below, entitled Scientific Method Man (which seems to answer the many questions raised here), from which I have selected the following:

This "expertise gap" is rife in academia, but few recognize it, let alone know how to correct for it. It starts with the best of intentions. Institutions want top-notch people, so they offer incentives to attract and groom experts. Young grad students learn early that if they want to carve out a niche, they must confine their interests to a narrow field. It's not enough to work in spinal cord regeneration; it must be stem cell-based solutions to the problem. That's great if a researcher just happens to stumble on a perfect stem cell cure. But as specialists get further from their core expertise, the possible solutions - what's been tried, what hasn't, what was never properly examined, what ought to be tried again - get even more elusive.
With the verifier approach, Rugg begins by asking experts to draw a mental map of their field. From there, he stitches together many maps to form an atlas of the universe of knowledge on the subject. "You look for an area of overlap that doesn't contain much detail," he says. "If it turns out there's an adjoining area which everyone thinks is someone else's territory, then that's a potential gap."
His approach is built on the observation, noted as far back as the 1970s, that experts tend to cut to the chase. In their zeal to get to an answer, they make many little mistakes. (A recent study of work published in Nature and British Medical Journal, for example, found that 11 percent of papers had serious statistical errors.) Experts unknowingly fudge logic to square data with their hypotheses. Or they develop blind spots after years of working in isolation. They lose their ability to take a broader view. If all this is true, he says, think of how much big science is based on flawed intuition.
His work bridges a number of specialties. One of the tools in the toolkit, as he says, is a field called judgment and decisionmaking. Psychological studies suggest that experts, defined as someone with 10 years in a discipline, don't have any more reasoning power than the rest of us. What they have is tons of experience. An old doctor, for instance, has seen so many cases of the mumps that he no longer follows methodical reasoning to arrive at a diagnosis. He instead uses a shortcut called pattern-matching: face red and swollen - mumps. Next!

Call it what you like - a hunch, an opinion - pattern-matching is iffy. "Sequential reasoning is formal, almost mathematical," Rugg says as we settle down in the campus cafeteria over mochas. "If this, then that. Pattern-matching is fast and efficient. The doctor knows what's wrong with you before you do. That's fine if he's right, but he can be miscued."
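The miscue Rugg describes can be sketched in a toy Python example. The symptoms, conditions, and rules below are all hypothetical placeholders invented for illustration, not medical fact: the point is only the contrast between slow sequential if-then reasoning and a fast pattern-matching shortcut that fires on a familiar surface feature.

```python
# Toy illustration (hypothetical symptoms and conditions): the same patient
# diagnosed by explicit sequential rules vs. a pattern-matching shortcut.

def sequential_diagnosis(symptoms):
    """Walk through explicit if-then rules, checking every condition."""
    if "swollen face" in symptoms and "fever" in symptoms:
        if "tender jaw" in symptoms:        # distinguishes mumps from an allergy
            return "mumps"
        return "allergic reaction"
    return "unknown"

def pattern_match_diagnosis(symptoms):
    """Jump straight to the familiar pattern: face swollen -> mumps. Next!"""
    if "swollen face" in symptoms:
        return "mumps"                      # fast, usually right, can be miscued
    return "unknown"

patient = {"swollen face", "fever"}         # no tender jaw: not mumps here
print(sequential_diagnosis(patient))        # -> allergic reaction
print(pattern_match_diagnosis(patient))     # -> mumps (the miscue)
```

The shortcut is cheaper and usually agrees with the full rule set; it fails exactly when a familiar surface feature belongs to an unfamiliar case.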

Besides pattern-matching incorrectly, experts sometimes misunderstand critical terms. People in similar but specialized fields will find it hard to communicate. A hydrogeologist and a petroleum engineer both took Geology 101, yet their visceral understanding of the word rock may be wildly divergent. Luckily, psychologists have developed a battery of methods - called elicitation techniques - to draw out and define what experts know.

As a journeyman researcher in expert reasoning, Rugg ferreted out errors and unseen problems in various industrial and office management cases. It was good work for a human-error psychologist, but he wanted to tackle bigger issues. If experts were making mistakes in doctor's offices and factories, he reasoned, they were making them in labs, too. "My gut feeling is that a lot of research involves pattern-matching," Rugg says. "It guides what is investigated and then the design of the study."

Sometime in 1996, while having lunch with colleague Joanne Hyde, it occurred to Rugg that he could pull together all the tools psychologists use - elicitation techniques, the vast literature on human error, decisionmaking models, formal logic and reasoning - to create a novel form of problem-solving: a scientific method to verify the methods of science.

The verifier method boils down to seven steps: 1) amass knowledge of a discipline through interviews and reading; 2) determine whether critical expertise has yet to be applied in the field; 3) look for bias and mistakenly held assumptions in the research; 4) analyze jargon to uncover differing definitions of key terms; 5) check for classic mistakes using human-error tools; 6) follow the errors as they ripple through underlying assumptions; 7) suggest new avenues for research that emerge from steps one through six.

Experts want to believe that their domain is unique, requiring specialized tools, approaches, and thinking. Rugg was saying no, you could use one kit to solve a million problems, in many fields. Many experts would object to such a theory, but computer scientists were intrigued. "In computing, we're interested in understanding knowledge," says Bashar Nuseibeh, a UK researcher whose work was influential in the development of the Unified Modeling Language. "Gordon is asking, Can you look at the commonality between two domains of research and solve problems within them with a single approach? I don't know. It's a theory."

In New Scientist, a dialogue with Jamie Whyte, a philosopher angry at his contemporaries' uncritical attitude toward issues that are extremely topical in Romania as well.
From the contents:

How long have you been angry about bad thinking?
I've always been obsessed with truth. I did my PhD on truth. It has always driven me mad to see people saying things that are well known to be rubbish. And I've never understood how they can bear it. But at the same time I can see that it doesn't affect their lives materially so they can't understand why I get so hysterical.

...the authority fallacy, which these days appears in a variety of perverse forms. One of the worst is trusting someone simply because they have acquired a measure of celebrity, which might include publishing a book.

What do you mean?

First, there is nothing wrong with deferring to genuine expertise. You need to defer to some people because you simply can't do all the research on everything yourself. But to whom should you defer? The basis on which you defer to people should be that they are reliable, and by reliable I don't mean nice or good or that they have their hearts in the right places. I mean that if they say that P... it is very likely that P... Now what makes somebody reliable is the way that they acquire their beliefs - ultimately it all comes back to the correct methods for acquiring beliefs. So you should identify people who are doing it the right way and defer to them. In the end, it is just a division of labour.

But how does the authority thing work?

The really big mistake comes when you treat people as authority figures when they are not expert but simply well known. There is a terrible tendency to treat people as reliable sources of fact when in fact they are simply "important" people or people who happen to be in the news. It is doubly perverse when you consider who gets counted as "important". For example, the victims of train accidents appear on television as authorities on rail policy and celebrities endorse presidential campaigns as though they are expert on politics. It's sheer insanity.

How widespread is this tendency to seek unnecessary explanations?

It is well known that when gamblers go wrong they find an excuse, and as soon as things go right they immediately assign it to their own brilliance and insight rather than finding an accidental reason. It's rather similar in the financial industry. Even the bosses buy into this kind of reasoning. They will say "of course I understand why that one went wrong" when they lose millions, and then when it goes well they will say "well done". Everybody systematically overestimates their skill in games of chance. From what research I have seen, financial trading is not much more than a game of chance. There are funds that simply track the market according to a set of simple rules, and others that are very actively managed. But the actively managed ones do not perform better on average. Some will do well in any year but that's what you expect by chance.
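Whyte's last point, that some funds will do well in any year purely by chance, is easy to see in a small simulation. The numbers below are purely illustrative assumptions (a 5% market return, 10% random noise, 1000 funds), not data about real funds.

```python
import random

# Illustrative simulation: if "active" fund returns are the market return
# plus pure noise, many funds still beat the tracker in a given year,
# even though the average active return is no better than the tracker's.
random.seed(42)

N_FUNDS = 1000
market_return = 0.05                        # the tracker simply earns this

# Each active fund: same expected return as the market, plus random noise.
active_returns = [market_return + random.gauss(0, 0.10) for _ in range(N_FUNDS)]

winners = sum(1 for r in active_returns if r > market_return)
average = sum(active_returns) / N_FUNDS

print(f"funds beating the tracker this year: {winners} of {N_FUNDS}")
print(f"average active return: {average:.3f} vs tracker {market_return:.3f}")
```

Roughly half the funds beat the tracker in any given year by construction, with no skill involved; the average active return stays near the market's.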

Another article "about truth" worth a look appeared in The Chronicle: Who Cares About the Truth?

Finally, in Vanity Fair, David Margolick on the litigation that accompanied the 2000 American elections, part one and part two.

In Comment, Westernization or clash of civilizations?

In Digital Divide, on Tim Berners-Lee's book: Weaving a Semantic Web

In The New York Sun, an op-ed about plagiarism, "the sin of the day" over there and the sin of the century here.

In The New York Times Magazine, on the impact of bloggers in the political arena (as for me, I take almost all my commentary from there).

In The Threepenny Review, on philosophers as a "headache".

And on Sabrina Pacifici's blog beSpacific, a link to Daniel J. Solove's essay "Reconstructing Electronic Surveillance Law".


On the Cornell University site, the announcement of a conference on October 8-9 in honor of Weber's "Spirit of Capitalism": "The Norms, Beliefs and Institutions of Capitalism: Celebrating Max Weber's The Protestant Ethic and the Spirit of Capitalism."


Jean-Louis Halpérin, probably the most important French legal historian of the moment, has written a wide-ranging history of law in Europe since 1750, "Histoire des droits en Europe", available on Amazon.

In the Independent, a review of George Tsebelis' book Veto Players: How Political Institutions Work (Princeton University Press, 2002), written by Michael Munger (Duke).

