Sunday, December 28, 2014

The worst of peer reviewing

Have you ever been subjected to the following situation?
You submit a manuscript to a given journal. Sometimes you get a direct rejection (by the editor, using arguments like "this journal receives too many manuscripts and we cannot publish all of them"), or it enters review and, after a month or so, you get a very nasty report telling you your work is nonsense (not to use a more colorful noun). Then you decide you have been humiliated enough and send the same manuscript to another journal, sometimes one with a better impact factor, and after some more months you receive a kind review, perhaps pointing out some mistakes you made, but finally approving the publication. So what happened? Wasn't the work nonsense?
Once, at a dinner with some academic friends, I posed the following question:
-"Have you ever approved a manuscript exactly as you received it?"
I meant: without asking for some correction, some change, some new experiment. Of course, everybody kept silent.
The two problems are linked. In an ideal world, the peer who reviews your manuscript should be perfectly impartial and just. I try to be in the reviews I write (I am not saying I achieve this goal). In fact, reviewers are human and subject to psychology like anyone else. The position of power over other authors is sometimes too strong to resist. The number of manuscripts has increased considerably in the last 50 years, and the same happened with the number of journals. This means less well-prepared editors and reviewers.
What could be done? It is not possible to overload good reviewers more than is done now, and reviewers should be trained as well. I once read a proposal on LinkedIn: reviewers should receive a guideline, with questions to be answered about the manuscript, to help them perform the task. Another important step would be for editors to take their role more seriously. Bad reviews should not be considered in evaluating a manuscript. This means the editor should, at least, read the reviews before forwarding them to the author.

Wednesday, October 8, 2014

The role of evaluation in higher education

This post is the second in a trilogy of reflections on my own teaching activity at the University of São Paulo (USP). Now I want to discuss evaluation, in the sense of the evaluation we perform on our students to "measure" their learning.

The reader must be aware that in my country evaluation is practically synonymous with (written) tests and grades. At most education levels we adopt a numerical system which runs from 0 to 10. At USP, the student must attain 5.0 in a discipline to be considered "approved" (otherwise he, or she, is considered "failed" and has to attend the lectures again in another semester).

This value is typically assessed by written tests applied along the semester, which, at least in mathematics, physics and engineering, consist of a set of (traditionally 3 or 4) problems that the student has to solve, normally in a period of 100 minutes and without consulting colleagues, written material or personal notes. Each of these problems has a standard "value"; the professor corrects the problem, attributing this value or a partial one (if the student did not succeed completely), and the "total value" of the test is the sum of the partial values of the problems. The final grade is calculated as some average of the individual tests. A student who did not attain the 5.0 in a given discipline in most cases has to repeat the discipline, meaning attending the lectures and taking the tests a second time, a third time, until he gets his 5.0.

I do not like this system. In my opinion it mostly assesses the student's capacity to remember something, or his (or her) capacity to work under stress. Of course, both things are worth assessing, but not every time. In some situations, however, tests are unavoidable. We have disciplines with up to 70 students at USP. A personalized, subjective evaluation is very difficult in this context.

The numerical system is pernicious. Its effect goes beyond the individual disciplines. For example, at USP we calculate two kinds of "average grades", called "clean" and "dirty". They are weighted averages, respectively, of the approved disciplines or of all disciplines the student attended (the weight factor is the number of credits, meaning lecture hours, the discipline has). These averages are used, for example, to select students for our international exchange programs.
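For concreteness, the two averages can be sketched in a few lines of code (the discipline names, credits and grades below are invented for illustration):

```python
def weighted_average(records):
    """Weighted average of grades, weighted by the number of credits."""
    total_credits = sum(credits for _, credits, _ in records)
    return sum(grade * credits for _, credits, grade in records) / total_credits

# (discipline, credits, grade) -- invented example data
records = [("Calculus I", 6, 7.5), ("Physics I", 4, 4.0), ("Chemistry", 4, 6.0)]

dirty = weighted_average(records)                              # all disciplines
clean = weighted_average([r for r in records if r[2] >= 5.0])  # approved only

print(round(dirty, 2), round(clean, 2))  # -> 6.07 6.9
```

Note how a single failed discipline pulls the "dirty" average well below the "clean" one, which is precisely why the distinction matters for exchange-program rankings.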

I try to be fair in the tests I write. I believe my tests are constructed so that a student who reached the minimal goals of the discipline gets a 5.0. I try to cover all the contents with my questions, but in many cases the student solves only part of the exercises, "aiming" at 5.0. Of course, I get disappointed, and they usually fail: my tests are not made to be only partially solved. I detected, however, a problem on the other side of the spectrum: the excellent student. It is extremely difficult to reach 10.0 in an individual test, let alone in three or four tests. I can count on my fingers the number of students who got a 10 in my tests, and more than 400 students have passed through my classes in these 13 years.

One could ask why this system is so prevalent. My answer is: because it is convenient.

It is convenient to the professor in two ways. First, it may be easy to correct. Particularly in the natural sciences, the problems have a numerical answer: if the student obtained that value, you could consider the problem correctly solved. Second, the correction is objective: either the student succeeded or not. The professor does not need to deal with the (messy) task of subjectively assessing the student's performance. It is also convenient to the student, since he knows what to expect from the evaluation and can prepare for it. For example, the student memorizes everything the day before the test, so he can retain the subject in memory.

I try to reduce the impact of written tests, or even remove them completely, in my disciplines. In one discipline, which usually gets more students per class, I use the (balanced) test, but I grant up to 2.0 points which I call the "participation grade", evaluated subjectively based on several criteria; as I always tell my students, one of the criteria is whether I can remember the student's face at the end of the semester. I will not have the hubris to claim I always give the fairest evaluation, but at least I typically get one or two students each year (out of 40) with a grade of 10.

In a second discipline, which receives a lower number of students each year (it is an "optional" discipline, meaning the student chose to be there and was not forced), I evaluate the students with online tests, in which he (or she) has access to written material and personal notes, and with group activities, in which I evaluate the participation of the student and not the learned content. I realized the content comes automatically with the group activities, so participating is enough to learn the content.

Sometimes I also run into trouble with the bureaucracy because of this. Once I had to renew the accreditation of an advanced discipline with the graduate program review board. I stated in the form that I was practicing "continuous evaluation", meaning I apply no special evaluation method: I evaluate the students on site, in real time, during the lectures. They are highly motivated students, mostly experienced researchers or engineers who return to the university to get a master's or a doctoral degree, so I have no need to apply any formal evaluation method. It is amazing how much they learn after I tell them this. The review board returned my form, saying this was forbidden and that I had to introduce some formal evaluation method. I returned the form stating I would apply a test; I just did not explain that my test starts in the first minute of the first lecture and ends in the last minute of the last lecture.

To finish, I would like to share with you what I learned in my disciplines:

  1. Written tests should be reduced to a minimum or, if possible, removed altogether. If they are unavoidable, they have to be fair according to the faculty rules.
  2. Qualitative evaluation should not be feared. The professor will be wrong from time to time, but on questions like "should this student be approved or not?", he will probably be right almost all the time, and the students will respond, learning what you have to teach instead of looking only for the grades.
  3. One should not fight the bureaucracy. If your faculty requires a numerical evaluation of your students, do it, but be sure you are truly evaluating the student's performance according to the rules of the faculty (in particular, a student who learned the minimal contents in my faculty should be rewarded with a 5.0, and a student who mastered 100% of the content, and they do exist, should receive a 10).

Friday, September 26, 2014

On the Bragg-Williams-Gorsky model

I am writing a manuscript about the stability of ordered compounds and decided to produce a small paragraph about the history of the subject. One of its topics is the Bragg-Williams-Gorsky model, which is a milestone in the investigation of this phenomenon. For this I downloaded the original manuscript by the authors and was surprised by the insight it gave me into this fascinating era of investigation in physical metallurgy. In particular, it corrected a mistaken notion I had, but I will return to this point later.
First of all, the reading of this work, plus a few other references, shows that the current division of physics into experimentalists and theoreticians was nonexistent at that time: the works are mostly experimental, dealing with aspects of the ordering transformation like resistivity measurements, corrosion experiments, crystallographic investigations and even plastic deformation, with the authors developing some theoretical background to varying degrees. This supports a point I raised in a previous post in this blog, namely, that specialization in science is a modern (or even post-modern) phenomenon.
Bragg and Williams give credit to Tammann as the one who "first envisaged" the notion of ordered compounds in 1919. This is backed up by previous publications by other authors, quoted in the manuscript. It is amazing to observe the careful way the authors dealt with the notion of the ordering/disordering transformations, which is so common today that it is a matter of introductory textbooks.
The authors also acknowledge previous work by other research groups that predated theirs. According to them, they became aware of these theoretical treatments after the publication of the first work in the series; these were works by Borelius, Johanson and Linde, by Gorsky, and by Dehlinger and Graf.
Reading these papers, it becomes clear that the works by Borelius, Johanson and Linde (published in 1928) and by Dehlinger and Graf contained only part of the ideas that compose what we know today as the Bragg-Williams-Gorsky model. The same is not true for the work by Gorsky, published in 1928 (which therefore predates the work by Bragg and Williams by seven years!), which gives a full derivation of the model in terms of the laws of statistical mechanics (this author, however, uses the ordering energy as the energetic parameter for the calculation). Here is the erroneous notion I had: someone, I do not remember who, once told me that Gorsky's contribution was minimal and limited to including the magnetic degrees of freedom in the model. This is wrong. Based on the description in the manuscript by Bragg and Williams, we may accept that the theory was derived in parallel by their group and by Gorsky in Leningrad, so this author's contribution to the model is at least as important as that of Bragg and Williams.
A further interesting note: reading the work by Borelius, Johanson and Linde, I discovered that the authors discuss the famous formula for the ordering energy, Δ = V_AB − (V_AA + V_BB)/2, which allows an interpretation of the stability of a compound in terms of the interatomic bonds: the compound becomes stable if Δ < 0, meaning that the interactions between unlike atoms are stronger than those in the pure components:
"Trotz der Einfachheit dieser Formel dürfte sie doch von wenig Nutzen sein, weil der Anordnung der Paare benachbarter Atome wahrscheinlich keine wesentliche physikalische Bedeutung zukommt." (page 309) Translating: "In spite of the simplicity of this formula, it should be of little use, since the arrangement of pairs of neighboring atoms probably has no essential physical meaning." This view is similar to one that I defend, namely that the interaction energies are just a convenient way to write down the energy of a crystal, and we should not attempt to interpret them as having a physical origin.
So, in summary, calling the model Bragg-Williams-Gorsky is not only an option; it is mandatory, for the sake of history.

Sunday, September 21, 2014

The role of the student in higher education

My third post about higher education deals with its leading actor: the student. I have the honor and the opportunity to work with the best applicants to an exact-sciences career in Brazil: the "Politécnico". Nevertheless, I need to exercise some criticism.
First of all, there is a world of difference between the student in the junior years and the same student in the late years of the course. Some years ago I attributed this to a lack of satisfaction with the chosen career, but nowadays I believe the students are also to blame. In my opinion the student, submitted for years to the evaluation system I described in my last post, gives up learning, concentrating instead on obtaining the grades necessary for approval. I am probably being too severe. I also have good students in the classroom (I teach in the 7th and 8th semesters), but I just feel they are not as interested as in the first years.
One possible explanation for this, which has been pointed out many times before, is that, paradoxically, our student is too good. Frequently he (or she) was the best student, the most intelligent, in his (her) class in secondary school (the Brazilian equivalent of the American high school or the German Gymnasium). This student usually got good grades without the need to study hard (I know this because I was one of these students); the result is that he (or she) did not learn how to study properly. When he (she) enters the Escola Politécnica, suddenly there are others who are as intelligent as, or even more intelligent than, him (her). Not every student can cope with this new situation. In addition, this student did not learn how to study, and the lectures in the first years of the Escola Politécnica (mostly mathematics and physics) are not as simple as the ones he attended before. This, added to poorly prepared written tests, is a recipe for tragedy.
I had the privilege to work with a brilliant student, Dr. Bruno Geoffroy Scuracchio. Today he is the engineer responsible for innovation at an auto-parts supplier. Our first contact came just after I was hired at the Escola Politécnica. I supervised him in a junior research project, the final course dissertation, a master's dissertation and, finally, the PhD thesis. I knew he had some trouble finishing his course; in particular, I knew he had to attend the "Integral and differential calculus III" discipline in his last year (it is originally a third-semester discipline). Once, in an informal chat, I asked him how difficult it was, and his answer surprised me: not difficult at all. He told me he attended the lectures to know what the subject was, solved the exercises corresponding to that subject, cleared up doubts with the professor, and spent about two hours a week studying. After doing all this, he got the grade for approval already in the second test (out of three). Further, he said that if he had done that the first time he attended the discipline, he would not have failed. I thought that if we are not teaching our junior students this, we are doing a clumsy job.
This lack of discipline in studying is easier to show with an example. I am an enthusiast of network learning, as my good (virtual) friend Prof. Ewout ter Haar calls it (it is also known in my country as distance learning). Once, some years ago, I was responsible for coordinating a discipline with a large number of students (820). I decided to replace a written test with one applied online, using the Moodle system administered by Prof. ter Haar. I did not know it, but this was the first attempt to do this in such a large discipline. I gave a five-day deadline for each test and monitored the number of students who solved them. One day Prof. ter Haar sent me the figure below, showing the bandwidth load on the servers.
As one sees, there are some plateaus in the load. This is not difficult to understand: the students probably used the evenings to solve the test. The problem is the level of these plateaus. Only about 220 students (out of 820) solved the first test within three days; about 530 students had done it by the fourth day. The load growth on the last day was faster than exponential. It was a hell of a crash test for the servers (and they survived). The interesting thing is that the plateau levels decrease with time, meaning more and more students left the test to be solved, literally, in the last minutes before the deadline.
This last example illustrates a characteristic of our students (in fact, of Brazilians in general): leaving everything to the last minute. We should do a better job of teaching our students, especially in the first years of the course, to avoid this.

Saturday, August 2, 2014

The role of the professor in higher education

This post is based on an earlier one (in Portuguese) I wrote on the social network USP maintains.
It is based on an event that took place during a lecture of the discipline PMT2406 - Mechanics of Metallic Materials, an optional discipline in the 10th (last) semester of the Metallurgical and Materials Engineering undergraduate courses at the Escola Politécnica da USP.
I was talking about a quite specialized subject, deformation microstructures in fatigue, in which I close the discussion with the results of H. L. Huang, published about 10 years ago (for example, here).
It is a complex subject and, to the best of my knowledge, it is ignored by the fatigue community. I always felt justified in teaching it, since I believe these results are very important.
I was finishing the explanation and needed a closure (no pun intended), so I said:
-"Why are you learning this?"
The few students who were watching my explanation with full attention looked at me in disbelief. I continued:
-"You will most probably never hear about this again, nor will you use this knowledge in your professional life."
If the students thought I was going crazy before, now they were sure. Before they could complain, I finished:
-"We learned the arguments, the mental processes the author used to reach these results."
I was improvising in real time; I am not sure if the students perceived this, or if they believed I had planned it all along.
I felt relieved: I finally understood that what we teach is how to think about the subject, and not the subject itself (in many cases the subject is only a tool in the learning process).
In my previous post I concluded by realizing that this explains why our students become excellent professionals, even with the pitiful didactic skills most professors (myself included) possess.
Here I want to extend this a bit more. At ECF20, which I recently attended, there was a special discussion about teaching fracture mechanics. I remember one of the participants said that, instead of teaching how to use fracture mechanics in projects, he discussed how to derive the HRR field equations. I am sure he knows that this knowledge is not "useful" for the engineer, but he is interested in forming thinking engineers rather than automatons.
For me these realizations are important: I finally made peace with my teaching.

Saturday, July 5, 2014

Report from ECF20

The European Conference on Fracture is a large-scale event dedicated to fracture mechanisms, fracture mechanics and fatigue. It congregates famous names in the field and, in spite of the multiple parallel sessions, also presents plenary lectures, which are attended by the whole community. I enjoy the ECF very much and recommend it to everyone who investigates these themes. The next installment will take place in Catania, Italy, in 2016, and then in 2018 in Belgrade, Serbia.

Friday, May 2, 2014

Superspecialization is a (post)modern disease

Every scientist knows a colleague who is an expert in some subject. Probably this person is erudite in this subject and is able to discuss the matter with the other experts in the field all over the world. This person is, however, unable to discuss any other subject. The Germans have a term for this: Fachidiot (translated as something like "specialist idiot").
The colleagues in pedagogy usually blame (at least here in Brazil) the Napoleonic university reform for this state of affairs and state that it is post-modernism that is changing it.
I always doubted this. If we go back to the 18th and 19th centuries and look at the biographies of people like Euler, Rankine, Faraday and Darwin, we see that they were far from being specialists in just one subject. They were all polymaths.
I recently read an article about Heisenberg and discovered that, as late as the early 20th century, he had a hard time getting his title (Dr. rer. nat.) because he had problems with experimental physics. So he was expected to understand things other than the subject of his thesis in order to obtain the title.
I guess superspecialization is a recent phenomenon. If you are a university professor like me and do not wish to end up as a "Fachidiot", here is my advice: try teaching something outside your area of expertise. You will find out it is fun to learn something new and that, with your intelligence, you will discover connections with your own research field, perhaps even something innovative.

Sunday, April 20, 2014

The Battle Between Zirconium Alloys and Stainless Steel as Cladding of Nuclear Reactor Components: Part One.

Worthy research has been performed at the Escola Politécnica (EPUSP) of the University of São Paulo to process, compile, compare and advance the best literature data on the metallurgical and physical properties of irradiated zirconium alloys and austenitic stainless steels.
The nuclear reactor environment is very harmful to the cladding materials, the pressure vessel and the other major components in its surroundings. Why? Fast and slow neutrons act like massive bullets when they impinge on a crystalline structure. As an effect of these collisions, the crystalline arrangement of the metal or alloy experiences gradual degradation and the material loses its main mechanical properties.
These efforts resulted in a new research group that integrates the several works that have been carried out at EPUSP in the field of nuclear materials: the Advanced Nuclear Materials Research Center (NUCMAT).
In this blog we will share information about our research, results and data regarding the studies that will be developed in the coming months. The first paper is an upcoming work that we will present at the congress of ABM (Associação Brasileira de Metalurgia, Materiais e Mineração), entitled “Comparative Framework of Zirconium Alloys and Austenitic Stainless Steels Structural Integrity under Neutron Irradiation”.
We hope that scientists worldwide will post comments about our work, giving suggestions for improvement.

Saturday, April 19, 2014

Inertia: the hidden field

I was travelling on a bus through the streets of São Paulo, a living laboratory of inertial forces, wondering about the weirdest of all of Newton's laws: the first.
Imagine the following situation: a closed room with a subject inside (let us call him Schrödinger's cat, for simplicity's sake). Without his knowledge, this room is in fact a vehicle, perfectly insulated from vibration and all forms of sound. The room starts moving and begins a curve.
The cat, inside the room, will sense a mysterious force field acting on his body.
The strange nature of inertia arises, in my opinion, from the fact that this "field" has no source. Gravity is generated by mass, the Coulomb field is generated by charge, and even the strong and weak nuclear forces have their own sources. Inertia does not. And it gets even weirder.
Back in college I wrote my first paper (for the Physics Lab, equivalent to Experimental Physics 101) on the subject of the identity between inertial and gravitational mass.
Inertial mass is the ratio between a force acting on a body and the acceleration it produces, and it can be precisely determined in an elastic collision experiment. Gravitational mass is the ratio between the gravitational force and the gravitational acceleration, which is related to the curvature of spacetime, and it can be precisely determined in a free-fall experiment. There is no sensible a priori reason to assume that both things are the same.
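The collision-based determination can be illustrated with a toy calculation (the velocities below are invented for illustration): conservation of momentum, m1·Δv1 = −m2·Δv2, yields the mass ratio from measured velocities alone, with no reference to gravity.

```python
def mass_ratio(v1_before, v1_after, v2_before, v2_after):
    """m2/m1 from momentum conservation: m1*(v1 - v1') = m2*(v2' - v2)."""
    return (v1_before - v1_after) / (v2_after - v2_before)

# Elastic collision of body 1 (3 m/s) with body 2 at rest; if m2 is twice m1,
# the standard elastic-collision formulas give v1' = -1 m/s and v2' = 2 m/s.
print(mass_ratio(3.0, -1.0, 0.0, 2.0))  # -> 2.0
```

The point of the exercise is that nothing in this measurement involves weight, which is what makes the observed identity with gravitational mass so remarkable.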
I remember that when I wrote that paper (30 years ago), someone (I believe it was Einstein, but I am not sure) had tried to justify this equality using Mach's principle, which, as far as I could understand, means that inertia is created by the gravitational attraction of all the mass in the Universe (the reference frame of the fixed stars). The problem is that, after relativity, we know that the fixed stars are not even where we see them. I see another problem with this idea: there is no time delay between the change in motion and the onset of inertial forces. If inertia originates from the interaction between the body and faraway masses, shouldn't this interaction obey the restriction of the speed of light?
I know I am writing about quite specialized things I did not study and, worse, writing from the memory of a first-year physics bachelor student of 30 years ago, but I still believe there is something very fundamental about Nature hidden in these weird properties of inertia.

Tuesday, April 8, 2014

Public opinion polls

There will be elections in Brazil in 2014. This time we will elect the president, the state governors, part of the senate, the congress and the state assemblies. Due to Brazilian law, open campaigning is still prohibited, but some movement has already started. For example, everybody knows that President Dilma Rousseff will run for reelection, and there are two declared "opposition" candidates. In this context the public opinion polls acquire great importance. According to these polls, President Dilma is the clear favorite to win the election in the first round, and her adversaries barely reach 16% and 12% of the "vote intentions".

A recently published poll gained considerable attention in Brazil. Two different polls organized by the same institute (IBOPE, short for "Brazilian Institute for Public Opinion Research") gave different results concerning the same population: in one of the polls Pres. Dilma would have about 43% of the vote intentions, while in the second this number would be 38%.

There was a fuss on the social networks about this result, first because the news services (which in their majority support the opposition) published headlines like "President Dilma fell in the IBOPE poll!". There was an outcry from Pres. Dilma's supporters, because these headlines insinuated a political use of the result (which is obvious): the former poll extended over a longer period and ended after the second poll (which received full coverage from the press).

Apart from the political use of such results, one should look at the problem from a scientific point of view. What is a public opinion poll?

The answer to this question is: a statistical inference measurement.

Let us consider what a poll in fact is. The process probes one population (the country's voters) by extracting a small sample and asking one question. Let us keep it simple and suppose the question is binary, having only two possible outcomes. As every math, physics or engineering student learns, this problem is equivalent to probing a box containing a large number of pebbles colored black and white. Supposing the fraction of white pebbles is p and the sample size is N, the number n of white pebbles in the sample will be given by the binomial probability distribution: f(n) = [N! / (n!(N−n)!)] p^n (1−p)^(N−n).

Let us look at what this means for a typical sample size used in these polls (N=2600), and let us assume p=0.38 (that is, the population is composed of 38% white pebbles). The result is given by the red line in the figure below.

The fact that we measured p=0.38 means that we actually drew 988 white pebbles in the 2600-sized sample. But let us step back a little and ask, before the measurement, what the probability would be of drawing 988 white pebbles provided p=0.38. This number is f=0.01612 (that is, 1.612%). Let us suppose now that our measure of p is wrong and it is actually larger (it could be smaller too). I plotted in the figure (blue line) what the result would be if p=0.40. The probability of drawing 988 white pebbles in this case would be f=0.00182 (or 0.182%). It looks small, but it is actually more than 1/10 of the ideal value. In fact, if we drew 100 samples of size N=2600 out of the box, any result between ~960 and ~1020 would be likely with p=0.38.
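These probabilities can be checked with a few lines of code. A minimal sketch using only the standard library (the log-gamma form avoids overflow in the huge binomial coefficient):

```python
from math import lgamma, log, exp

def binom_pmf(n, N, p):
    """Binomial probability of n successes in N trials, computed via
    log-gamma to keep the enormous binomial coefficient in range."""
    log_coeff = lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
    return exp(log_coeff + n * log(p) + (N - n) * log(1 - p))

print(binom_pmf(988, 2600, 0.38))  # ~0.01612: exactly 988 white pebbles if p = 0.38
print(binom_pmf(988, 2600, 0.40))  # ~0.00182: the same draw if p were 0.40
```

The ratio of the two numbers is what matters: the "wrong" hypothesis p=0.40 is only about nine times less likely than the measured one, hardly a decisive rejection.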

Everybody who works with statistical inference knows this fact: in a statistical measurement we will never be fully confident in the result. We always run the risk of committing two types of error. A "type I" error means rejecting a hypothesis that is actually true (in the present case, rejecting p=0.38 when it is the true value); this is the origin of the "confidence interval" concept, which most laymen know as the "error margin". A "type II" error means accepting a hypothesis that is actually false (in the present case, sticking with p=0.38, and thus rejecting p=0.40, after drawing 988 white pebbles, when the true value is 0.40). The type II error is more difficult to control, and one can easily lose track of it when trying to circumscribe the type I error to low probability values.

The determination of the confidence interval for this problem usually requires approximating the binomial distribution by the normal distribution (a good approximation in the present case), or perhaps using more sophisticated methods, like Bayesian inference. One should, however, take care not to invoke the central limit theorem here, since the number of samples is actually 1.
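Under the normal approximation, the "error margin" the laymen hear about reduces to one line. A sketch, assuming the usual 95% confidence level (z = 1.96):

```python
from math import sqrt

def margin_of_error(p, N, z=1.96):
    """Half-width of the confidence interval for a proportion
    (normal approximation; z = 1.96 corresponds to the 95% level)."""
    return z * sqrt(p * (1 - p) / N)

print(round(margin_of_error(0.38, 2600), 3))  # -> 0.019, i.e. about 1.9 points
```

This is consistent with the roughly two percentage points the polling institutes quote for samples of this size, and note that it says nothing at all about the type II error.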

Therefore the result of a public opinion poll is simply an (educated) guess. Its results must be analysed with care and never in the way described above (the "conclusion" that President Dilma is falling in the public preference). That use of statistics is simply political misuse. Naturally, all this analysis is based on one premise, namely, that the sample is extracted from a homogeneous population. In my opinion this is one of the largest failures of the public opinion polls. I believe it is possible to defraud a public opinion poll by carefully choosing the place and the time at which the interviews are made. Naturally there are protocols that have to be followed, but even so, I believe one can direct the answers depending on the will of the institute. In a civilized country, a public opinion institute that resorted to this kind of strategy would end up losing credibility, but the one-sided nature of the Brazilian press will surely ensure that any misuse of the public opinion polls goes unpunished.

Sunday, February 9, 2014

Multicausal failures

We are all aware of failures that are caused by a single event. From the point of view of engineering these are the lucky cases, since, by controlling these events, the failure can be prevented. Engineering design usually assumes this hypothesis. For example, the maximum stress level of a structure is calculated based on the yield or fracture stress of the individual components, and some parts in airplanes are designed such that the cyclic stress intensity factor does not exceed the fatigue threshold level measured in a Paris plot.
There are, however, cases in which the failure is caused by multiple critical events happening in series or in parallel, simultaneously or not. The most famous example was the fire in the Kiss nightclub in Santa Maria-RS, Brazil, last year. The causes ranged from corrupt city officials and firemen, who allowed the place to open without minimal safety conditions, to greed by the owners, which led to the bad selection of the foam used for acoustic insulation (it produced HCN when burnt), to the stupidity of the band members, who lit inappropriate fireworks in a closed space. Avoiding any one of these events would have prevented the tragedy. This surely went through the minds of all the people involved, but they surely decided that the probability of everything going wrong in the right sequence at the right time was too low to consider. The tragedy is there to prove they were wrong.
In fracture, as Prof. Bažant teaches, this leads to two different failure probability distributions. In the case of a single critical event (as in cleavage), the probability is described by the Weibull distribution. In the case where the failure is a consequence of an infinite number of individual critical events, as in ductile fracture by microvoid coalescence, it leads to the Gaussian distribution. There are also intermediate cases. The point here is to remember that multicausal failures do exist. They require nonlinear thinking by the engineer, who is forced to consider not only what could go wrong, but also in which sequence and at which time.
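The single-critical-event case can be sketched numerically. If each of n links of a chain fails under stress σ with Weibull probability 1 − exp(−(σ/σ0)^m), the weakest-link argument gives the chain a failure probability of 1 − exp(−n(σ/σ0)^m), which is again a Weibull distribution, just rescaled (the σ0 and m values below are invented for illustration):

```python
from math import exp

def link_failure_prob(sigma, sigma0=100.0, m=10.0):
    """Weibull failure probability of a single link (invented sigma0, m)."""
    return 1.0 - exp(-((sigma / sigma0) ** m))

def chain_failure_prob(sigma, n, sigma0=100.0, m=10.0):
    """Weakest link: the chain fails if any one of its n links fails."""
    return 1.0 - (1.0 - link_failure_prob(sigma, sigma0, m)) ** n

# Size effect: the longer the chain, the higher the failure probability
for n in (1, 10, 100):
    print(n, round(chain_failure_prob(80.0, n), 4))
```

This size effect is one reason the Weibull description matters in practice: a large brittle component is statistically weaker than a small specimen of the same material.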
Worse, as I repeat to exhaustion to my students: when you decrease the probability of the unicausal failures, the multicausal failures become relatively more probable, and eventually dominate.

Friday, January 31, 2014

Science or mysticism?

We are always criticising opinions and interpretations that lack the rigour of the scientific method, but how do we defend science to the ordinary public? I remember reading in a book (I do not remember which) the following criticism: every one of us believes in the first law of thermodynamics because we were told it has withstood the most careful tests made to date, but only a handful of scientists in the whole world are able to understand and interpret these tests. The majority of the population feels comfortable believing in science just because someone in a lab coat said it is science.