Sunday, December 28, 2014
The worst of peer reviewing
You submit a manuscript to a given journal. Sometimes you get a direct rejection (by the editor, using arguments like "this journal receives too many manuscripts and we cannot publish all of them"); otherwise it enters review, and after a month or so you get a very nasty report telling you your work is nonsense (not to use a more colorful noun). Then you decide you have been humiliated enough and send the same manuscript to another journal, sometimes one with a better impact factor, and after some more months you receive a nice review, perhaps pointing out some mistakes you made, but finally approving the publication. What happened? Wasn't the work nonsense?
Once, at a dinner with some academic friends, I posed the following question:
-"Have you ever approved a manuscript exactly as you received it?"
I meant: without asking for any correction, any change, any new experiment. Of course, everybody kept silent.
The two problems are linked. In an ideal world, the peer who reviews your manuscript should be perfectly impartial and just. I try to be so in the reviews I write (I am not saying I achieve this goal). In reality, reviewers are human and subject to psychology like anyone else, and the position of power over other authors is sometimes too strong to resist. The number of manuscripts has increased considerably in the last 50 years, and the same happened with the number of journals. This means less well-prepared editors and reviewers.
What could be done? It is not possible to overload good reviewers more than is already done, and reviewers should be trained as well. I once read a proposal on LinkedIn: reviewers should receive a guideline, with questions to be answered about the manuscript, to help them perform the task. Another important point is that editors should take their role more seriously. Bad reviews should not be considered when evaluating a manuscript, which means the editor should, at least, read the reviews before forwarding them to the author.
Wednesday, October 8, 2014
The role of evaluation in higher education
The reader must be aware that in my country evaluation is practically synonymous with (written) tests and grades. At most education levels we adopt a numerical system which runs from 0 to 10. At USP, the student must attain 5.0 in a discipline (a course, in Brazilian usage) to be considered "approved"; otherwise he, or she, is considered "failed" and has to attend the lectures again in another semester.
This value is typically assessed by written tests applied along the semester, which, at least in mathematics, physics, and engineering, consist of a set of (traditionally 3 or 4) problems that the student has to solve, normally within 100 minutes and without consulting colleagues, written material, or personal notes. Each of these problems has a standard "value"; the professor corrects the problem by attributing this value, or a partial one if the student did not succeed completely, and the total value of the test is the sum of the partial values of the problems. The final grade is calculated as some average of the individual tests. A student who does not attain the 5.0 in a given discipline in most cases has to repeat the discipline, meaning attending the lectures and the tests a second time, a third time, until he gets his 5.0.
I do not like this system. In my opinion it mostly assesses the student's capacity to remember something, or his (or her) capacity to work under stress. Of course, both things are worth assessing, but not every time. In some conditions, however, tests are unavoidable: we have disciplines with up to 70 students at USP, and a personalized, subjective evaluation is very difficult in this context.
The numerical system is pernicious, and its effect goes beyond the individual disciplines. For example, at USP we calculate two kinds of "average grades", called "clean" and "dirty". They are weighted averages of, respectively, the approved disciplines only, or of all disciplines the student attended (the weighting factor is the number of credits, meaning lecture hours, of each discipline). These averages are used, for example, to select students for our international exchange programs.
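To make the arithmetic concrete, here is a minimal sketch of how the two averages could be computed. The 0-10 scale, the 5.0 passing grade, and the credit weighting come from the description above; the function names and data layout are my own illustration:

```python
# Minimal sketch of the "clean" and "dirty" weighted grade averages.
# The 0-10 scale, the 5.0 passing grade and the credit weighting come
# from the text; everything else here is illustrative.

def weighted_averages(record):
    """record: list of (grade, credits) pairs, one per discipline attended."""
    def wavg(items):
        total_credits = sum(c for _, c in items)
        return sum(g * c for g, c in items) / total_credits if total_credits else 0.0

    passed = [(g, c) for g, c in record if g >= 5.0]  # approved disciplines only
    return wavg(passed), wavg(record)  # ("clean", "dirty")

# Example: three disciplines, one failed.
clean, dirty = weighted_averages([(7.0, 4), (5.5, 2), (3.0, 4)])
print(f"clean = {clean:.2f}, dirty = {dirty:.2f}")  # clean = 6.50, dirty = 5.10
```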
I try to be fair in the tests I write. I believe my tests are constructed so that a student who reached the minimal goals of the discipline gets a 5.0. I try to cover all the contents with my questions, but in many cases the student solves only part of the exercises, "aiming" at the 5.0. Of course I get disappointed, and they usually fail: my tests are not made to be only partially solved. I detected, however, a problem at the other end of the spectrum: the excellent student. It is extremely difficult to reach 10.0 in an individual test, let alone in three or four of them. I can count on my fingers the students who got a 10 in my tests, and more than 400 students have passed through me in these 13 years.
One could ask why this system is so prevalent. My answer is: because it is convenient.
It is convenient for the professor in two ways. First, it may be easy to correct: particularly in the natural sciences, the problems have a numerical answer, and if the student obtained that answer, you may consider the problem correctly solved. Second, the correction is objective: either the student succeeded or not, so the professor does not need to deal with the (messy) task of subjectively assessing the student's performance. It is also convenient for the student, since he knows what to expect from the evaluation and can prepare for it: for example, by memorizing everything the day before the test, so the subject is retained in memory.
I try to reduce the impact of the written tests in my disciplines, or even remove them completely. In the discipline which usually gets more students per class I use the (balanced) test, but I grant up to 2.0 points of what I call a "participation grade", which I evaluate subjectively based on several criteria; as I always tell my students, one of the criteria is whether I can remember the student's face at the end of the semester. I will not have the hubris to assume I always give the fairest evaluation, but at least I typically get one or two students each year (out of 40) with a grade of 10.
In a second discipline, which typically receives fewer students each year (it is an "optional" discipline, meaning the students chose to be there and were not forced), I evaluate the students with online tests, in which they have access to written material and personal notes, and with group activities, in which I evaluate the participation of the student rather than the learned content. I realized the content comes automatically with the group activities, so participating in them is enough to absorb it.
Sometimes I also run into trouble with the bureaucracy because of this. Once I had to renew the accreditation of an advanced discipline with the post-graduation review board. I stated in the form that I was practicing "continued evaluation", meaning I apply no special evaluation method: I evaluate the students on site, in real time, during the lectures. They are highly motivated students, mostly experienced researchers or engineers who return to the university for a master's or a doctoral degree, so I have no need for a formal evaluation instrument. It is amazing how much they learn after I tell them this. The review board returned my forms, saying this was forbidden and that I had to introduce some formal evaluation method. I returned the form stating I would apply a test; I just did not explain that my test starts in the first minute of the first lecture and ends in the last minute of the last lecture.
To finish, I would like to share with you what I learned from my disciplines:
- Written tests should be reduced to a minimum or, if possible, removed altogether. If they are unavoidable, they have to be fair according to the faculty rules.
- Qualitative evaluation should not be feared. The professor will be wrong from time to time, but on questions like "should this student be approved or not?", he will probably be right almost all the time, and the students will respond, learning what you have to teach instead of looking only at the grades.
- One should not fight the bureaucracy. If your faculty requires a numerical evaluation of your students, give it, but be sure you are truly evaluating the student's performance according to the rules of the faculty (in particular, in my faculty a student who learned the minimal contents should be rewarded with a 5.0, and a student who mastered 100% of the content, and they do exist, should receive a 10).
Friday, September 26, 2014
On the Bragg-Williams-Gorsky model
First of all, reading this work, plus a few other references, shows that the current division of physics into experimentalists and theoreticians did not exist at that time: the works are mostly experimental, dealing with aspects of the ordering transformation such as resistivity measurements, corrosion experiments, crystallographic investigations, and even plastic deformation, with the authors developing some theoretical background to varying degrees. This supports a point I raised in a previous post in this blog, namely, that specialization in science is a modern (or even post-modern) phenomenon.
Bragg and Williams give credit to Tammann as the one who "first envisaged" the notion of ordered compounds in 1919; this is backed up by previous publications by other authors, quoted in the manuscript. It is amazing to observe the careful way the authors dealt with a notion which is so common today, namely the ordering/disordering transformation, which is now a matter of introductory textbooks.
The authors also acknowledge previous work by other research groups which predated their own. According to them, they became aware of these theoretical treatments after the publication of the first work in the series; these were works by Borelius, Johansson and Linde, by Gorsky, and by Dehlinger and Graf.
Reading these papers, it becomes clear that the works by Borelius, Johansson and Linde (published in 1928) and by Dehlinger and Graf contain only part of the ideas which compose what we know today as the Bragg-Williams-Gorsky model. The same is not true of the work by Gorsky, published in 1928 (and which therefore predates the work by Bragg and Williams by seven years!), which gives a full derivation of the model in terms of the laws of statistical mechanics (this author, however, uses the ordering energy as the energetic parameter of the calculation). Here lay my erroneous notion: someone, I don't remember who, once told me that Gorsky's contribution was minimal and limited to including the magnetic degrees of freedom in the model. This is wrong. Based on the description in the manuscript by Bragg and Williams, we may accept that the theory was derived in parallel by their group and by Gorsky in Leningrad, so Gorsky's contribution to the model is at least as important as that of Bragg and Williams.
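For reference, the central result common to these derivations, in modern textbook notation (my summary, not the notation of the original papers), is the mean-field self-consistency equation for the long-range order parameter $\eta$ of an equiatomic alloy,

$$\eta = \tanh\!\left(\frac{T_c}{T}\,\eta\right),$$

where the critical temperature $T_c$ is proportional to the ordering energy: below $T_c$ a nonzero solution exists and the alloy orders, and the order parameter vanishes continuously at $T_c$.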
A further interesting note: reading the work by Borelius, Johansson and Linde, I discovered that the authors discuss the famous formula:
Sunday, September 21, 2014
The role of the student in higher education
Saturday, August 2, 2014
The role of the professor in higher education
It is based on an event that took place during a lecture of the discipline PMT2406-Mechanics of Metallic Materials, an optional discipline of the 10th (and last) semester of the Metallurgical and Materials Engineering undergraduate course at the Escola Politécnica da USP.
I was talking about a quite specialized subject, deformation microstructures in fatigue, in which I close the discussion with the results of H. L. Huang, published about 10 years ago (for example, here).
It is a complex subject and, to the best of my knowledge, it is ignored by the fatigue community. I always felt justified in teaching it, since I believe these results are very important.
I was finishing the explanation and needed a closure (no pun intended), so I said:
-"Why are you learning this?"
The few students, who were following my explanation with full attention, looked at me in disbelief. I continued:
-"You will most probably neither hear something about this anymore, nor you will use this knowledge in the professional life."
If the students thought I was going crazy before, now they were sure. Before they could complain, I finished:
-"We learned the arguments, the mental processes the author used to reach these results."
I was improvising in real time; I am not sure whether the students perceived this, or whether they believed I had planned it all along.
I felt relieved: I finally understood that what we teach is how to think about the subject, not the subject itself (in many cases the subject is only a tool in the learning process).
In my previous post I concluded by realizing that this explains why our students become excellent professionals, even with the pitiful didactics most professors possess (myself included).
Here I want to extend this a bit more. At the ECF20, which I recently attended, there was a special discussion about teaching fracture mechanics. I remember that one of the participants said that, instead of teaching how to use fracture mechanics in design projects, he discussed how to derive the HRR field equations. I am sure he knows that this knowledge is not "useful" for the engineer, but he is interested in forming thinking engineers rather than automatons.
For me these realizations are important: I am finally at peace with my teaching.
Saturday, July 5, 2014
Report from ECF20
The European Conference on Fracture is a large-scale event dedicated to fracture mechanisms, fracture mechanics, and fatigue. It congregates famous names in the field and, in spite of the multiple parallel sessions, also presents plenary lectures, which are attended by the whole community. I enjoy the ECF very much and recommend it to everyone who investigates these themes. The next installment will take place in Catania, Italy, in 2016, and then in Belgrade, Serbia, in 2018.
Friday, May 2, 2014
Overspecialization is a (post)modern disease
The colleagues in Pedagogy usually blame (at least here in Brazil) the Napoleonic university reform for this state of affairs, and state that it is post-modernism that is changing it.
I always doubted this. If we go back to the 18th and 19th centuries and look at the biographies of people like Euler, Rankine, Faraday, and Darwin, we see that they were far from being specialists in just one subject. They were all polymaths.
I recently read an article about Heisenberg and discovered that, even in the early 20th century, he had a hard time getting his title (Dr. rer. nat.) because he had problems with experimental physics. So he was expected to understand things other than the subject of his thesis in order to obtain the title.
I guess overspecialization is a recent phenomenon. If you are a university professor like me and do not wish to end up as a "Fachidiot", here is my advice: try teaching something outside your area of expertise. You will find out that it is fun to learn something new and that, with your intelligence, you will discover connections with your own research field, perhaps even something innovative.
Sunday, April 20, 2014
The Battle Between Zirconium Alloys and Stainless Steel as Cladding of Nuclear Reactor Components: Part One.
Saturday, April 19, 2014
Inertia: the hidden field
I was riding a bus through the streets of São Paulo, a living laboratory of inertial forces, and wondering about the weirdest of all of Newton's laws: the first.
Imagine the following situation: a closed room with a subject inside (let us call him Schrödinger's cat, for simplicity's sake). Without his knowledge, this room is in fact a vehicle, perfectly insulated from vibration and all forms of sound. The room starts moving and begins a curve.
The cat, inside the room, will sense a mysterious force field acting on his body.
The strange nature of inertia arises, in my opinion, from the fact that this "field" has no source. Gravity is generated by mass, the Coulomb field is generated by charge, and even the strong and weak nuclear forces have their own sources. Inertia doesn't. And it gets even weirder.
Back in college I wrote my first paper (for the Physics Lab course, equivalent to Experimental Physics 101) on the subject of the identity between inertial and gravitational mass.
Inertial mass is the ratio between a force acting on a body and the acceleration it produces, and it can be precisely determined in an elastic collision experiment. Gravitational mass is the ratio between the gravitational force and the gravitational acceleration (which is related to the spacetime curvature), and it can be precisely determined in free-fall experiments. There is no obvious reason to assume that both are the same.
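In symbols (a textbook restatement, not something from my old paper): the two masses enter mechanics independently, and free fall is universal only because they happen to be equal,

$$F = m_i\, a, \qquad F_g = m_g\, g \quad\Rightarrow\quad a = \frac{m_g}{m_i}\, g,$$

so all bodies fall with the same acceleration if and only if the ratio $m_g/m_i$ is the same for every body.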
I remember that when I wrote that paper (30 years ago), someone (I believe it was Einstein, but I am not sure) had tried to justify this equality using Mach's principle, which, as far as I could understand, means that inertia is created by the gravitational attraction of all the mass in the Universe (the reference frame of the fixed stars). The problem is that, after relativity, we know the fixed stars are not even where we see them. I see another problem with this idea: there is no time delay between the change in motion and the onset of inertial forces. If inertia originates from the interaction between the body and faraway masses, shouldn't this interaction obey the restriction of the speed of light?
I know I am writing about quite specialized things I never studied properly, and worse, writing from the memory of a first-year physics bachelor student of 30 years ago, but I still believe there is something very fundamental about Nature hidden in these weird properties of inertia.
Tuesday, April 8, 2014
Public opinion polls
A recently published poll gained considerable attention in Brazil. Two different polls organized by the same institute (IBOPE, the Brazilian Institute of Public Opinion and Statistics) gave different results for the same population: in one of the polls Pres. Dilma would have about 43% of the vote intentions, while in the second this number would be 38%.
There was a fuss in the social networks about this result, first because the news services (which in their majority support the opposition) published headlines like "President Dilma fell in the IBOPE poll!". There was also an outcry from Pres. Dilma's supporters, who insinuated political use of the result (which is obvious), since the first poll's field period was longer and ended after the second poll's, which received full coverage from the press.
Apart from the political use of such results, one should look at the problem from a scientific point of view. What is a public opinion poll?
The answer to this question is: a statistical inference measurement.
Let us consider what a poll in fact is. The process probes one population (the country's voters) by extracting a small sample and asking one question. Let us keep it simple and suppose the question is binary, having only two possible outcomes. As every mathematics, physics, or engineering student learns, this problem is equivalent to sampling from a box containing a large number of pebbles colored black and white. Supposing the fraction of white pebbles is p and the sample size is N, the number n of white pebbles in the sample will be given by the binomial probability distribution:
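$$P(n) = \binom{N}{n}\, p^{\,n} (1-p)^{N-n},$$

with mean $Np$ and standard deviation $\sqrt{Np(1-p)}$, so the estimated fraction $n/N$ fluctuates around $p$ with standard error $\sqrt{p(1-p)/N}$. To see what this means for two honest polls of the same population, here is a toy simulation; the support level and sample size are made-up illustrative numbers, not IBOPE's actual figures:

```python
import random

# Toy simulation of repeated polls of the same population.
# TRUE_P and N are illustrative numbers, not IBOPE's actual figures.
TRUE_P = 0.40   # assumed true fraction of voters supporting the candidate
N = 2000        # assumed sample size

def run_poll(p=TRUE_P, n=N):
    """One poll: n independent binary answers, returns the sample fraction."""
    return sum(random.random() < p for _ in range(n)) / n

random.seed(1)
print([f"{100 * run_poll():.1f}%" for _ in range(10)])

# Standard error of the estimated fraction: sqrt(p (1 - p) / N)
se = (TRUE_P * (1 - TRUE_P) / N) ** 0.5
print(f"standard error ~ {100 * se:.1f} percentage points")  # about 1.1
```

With these assumed numbers, sampling error alone accounts for roughly ±2 percentage points at 95% confidence; a larger gap between two polls calls for other explanations, such as different field periods or methodologies.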
Sunday, February 9, 2014
Multicausal failures
We are all aware of failures that are caused by a single event. From the point of view of engineering these are the lucky cases, since by controlling this event the failure can be prevented, and engineering design usually assumes this hypothesis. For example, the maximum stress level of a structure is calculated based on the yield or fracture stress of the individual components, and some parts in airplanes are designed such that the cyclic stress intensity factor does not exceed the fatigue threshold measured in a Paris plot.
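In symbols, the second criterion reads (standard fracture mechanics notation, my addition, not a specific design code):

$$\Delta K = Y\, \Delta\sigma \sqrt{\pi a} < \Delta K_{th},$$

where $\Delta\sigma$ is the cyclic stress range, $a$ the crack length, $Y$ a geometry factor, and $\Delta K_{th}$ the threshold read from the Paris plot.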
There are, however, cases in which the failure is caused by multiple critical events happening in series or in parallel, simultaneously or not. The most famous example was the fire at the Kiss nightclub in Santa Maria-RS, Brazil, last year. The causes ranged from corrupt city officials and firemen, who allowed the place to open without minimal safety conditions, to greed by the owners, which led to the bad selection of the foam used for acoustic insulation (a material which produces HCN when burnt), to the stupidity of the band members, who lit inappropriate fireworks in a closed space. Avoiding any one of these events would have prevented the tragedy. This surely went through the minds of all the people involved, but they evidently decided that the probability of everything going wrong in the right sequence at the right time was too low to consider. The tragedy is there to prove they were wrong.
In fracture, as Prof. Bažant teaches, this leads to two different failure probability distributions. In the case of a single critical event (as in cleavage), the probability is described by the Weibull distribution. In the case where the failure is a consequence of a very large number of individual critical events, as in ductile fracture by microvoid coalescence, it leads to the Gaussian distribution. There are also intermediate cases. The point here is to remember that multicausal failures do exist. They require nonlinear thinking from the engineer, who is forced to consider not only what could go wrong, but also in which sequence and at which time.
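For reference (standard notation, my addition): the weakest-link case gives the classical Weibull form

$$P_f(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],$$

with $m$ the Weibull modulus and $\sigma_0$ a scale parameter, while the sum of many comparable independent contributions tends to a Gaussian by the central limit theorem.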
Worse, as I repeat to exhaustion to my students: as you decrease the probability of the unicausal failure, the multicausal failure becomes relatively ever more probable, as the toy example below illustrates.
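A toy numerical illustration of this point (all numbers are mine and purely illustrative): suppose a system can fail either from a single cause, or from a chain of three independent events that must all occur.

```python
# Toy model: failure either by a single cause (probability p_single) or by
# a chain of three independent events (probability p_event each).
# All numbers are illustrative, not taken from any real failure data.

def multicausal_share(p_single, p_event, chain_length=3):
    p_chain = p_event ** chain_length  # all chain events must happen together
    return p_chain / (p_single + p_chain)  # rare events: sum approximates the union

for p_single in (1e-2, 1e-4, 1e-6):
    share = multicausal_share(p_single, p_event=0.05)
    print(f"p_single = {p_single:.0e}: multicausal share of failures = {share:.1%}")

# Output: 1.2%, 55.6%, 99.2% -- as the single-cause probability is driven
# down, the multicausal chain comes to dominate the failures that remain.
```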
Friday, January 31, 2014
Science or mysticism?
We are always criticising opinions and interpretations which lack the rigour of the scientific method, but how do we defend science before the ordinary public? I remember reading in a book (I don't remember which) the following criticism: every one of us believes in the first law of thermodynamics because we were told it has withstood the most careful tests made to date, but only a handful of scientists in the whole world are able to understand and interpret these tests. The majority of the population feels comfortable believing in science just because someone in a lab coat said it is science.