UltravioletPhotography

"Format equivalence" and all that


enricosavazzi


lukaszgryglicki

I should have added: I would like 6x7 sensor with just 25 mpix that costs less than a new mid-class car.

:D

enricosavazzi
On 3/1/2024 at 5:03 AM, Andrea B. said:

...that case of the person who re-invented integrals a bunch of years ago

 

OMG!! That is totally hilarious. 

The fact that Tai's paper passed "peer review" certainly tells you something about those "peers". 

geez!! unbelievable!!

Not only that, but the author immodestly named the formula and method after herself in the same paper. The normal practice is simply to describe "a new method for..." and leave it to posterity to decide whether to name the method after its discoverer/inventor. This was another important failure in the review process of this paper (or perhaps the author purposefully changed this part of the text after the review - that is not unheard of).

 

Going back to one of the original themes of this thread (scientific obscurantism): giving or sending the manuscript of a paper to a well-known, and notoriously busy, scientist for review, and subsequently citing that scientist prominently in the "Acknowledgements" section, is a common tactic for making it look as though the paper has been thoroughly reviewed. The manuscript itself probably became immediately buried under several layers of correspondence heaped on a coffee table in the scientist's office, and saw the light again only during one of the subsequent spring cleanings.

Andrea B.

I'll add that neither journals nor reviewing are as rigorous as they were in earlier years. Not good.

Lou Jost
2 hours ago, Andrea B. said:

I'll add that neither journals nor reviewing are as rigorous as they were in earlier years. Not good.

The system has been corrupted by an explosion of predatory and vanity journals that make their money by charging authors to publish garbage, so that the authors can get publications to pad their resumes. Hiring and tenure decisions are often based largely on the number of publications. This is beginning to change, though, because tools like Google Scholar and Elsevier's "Top 2%" now analyze the number of times each article is cited, and discount self-citations, generating more sophisticated and accurate summary measures of the real quality of a scientist's work.

Andrea B.

That is good to hear, Lou. Lots of us are upset these days by the amount of non-reproducible and/or non-replicable results out there in journal land. Especially in medical areas where statistical methods are often misapplied and data is not supplied and methodology is not described. 

 

I can only speak to mathematics*, but most of us still think that it is one area in which it is quite difficult to write junk papers and get them into a reputable journal. That is not to say that math papers in reputable journals cannot have occasional errors. That does happen in a long, complex proof. Not too often, though.

 

I was browsing the other day and found several "published" proofs of the Riemann Hypothesis. Amazing.

 

 

 

*...and it has been a very long time since I had anything going on in mathematics. Very, very long time. 

Andy Perrin

Even in more mathematical fields like oceanography, there is a lot of bad statistical reasoning floating around. I have a current grad student who is looking at two time series, one for sea surface temperatures and the other for chlorophyll levels. Both vary (more or less) periodically over a period of a couple of years. She wanted to look at all the places where they were in phase with each other and make some kind of causal argument that rising temperature could trigger a rising chlorophyll level based only on the two time series.
 

I had to explain that ANY two periodic variables will go in and out of phase at the difference of their frequencies (this is called “beats”), regardless of whether one caused the other or if they both existed in isolation. Just because two variables are going up at the same time doesn’t mean they are related. 
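To make the point concrete, here is a minimal sketch of the beats phenomenon described above. The frequencies and the "temperature"/"chlorophyll" labels are hypothetical, chosen only for illustration; the identity sin(a)sin(b) = ½[cos(a−b) − cos(a+b)] shows that the product of any two pure sinusoids contains a slow component at the difference frequency, whether or not the signals are causally related.

```python
import numpy as np

# Two independent sinusoids with slightly different frequencies
# (hypothetical values, in cycles per year).
f1, f2 = 0.50, 0.54
t = np.arange(0, 25, 1 / 12)   # 25 years of monthly samples

x = np.sin(2 * np.pi * f1 * t)  # stand-in for "temperature"
y = np.sin(2 * np.pi * f2 * t)  # stand-in for "chlorophyll"

# The phase difference grows linearly with time, so the two signals
# drift in and out of phase at the beat frequency |f1 - f2|.
phase_diff = 2 * np.pi * (f1 - f2) * t

# Product identity: sin(a)sin(b) = 0.5*[cos(a-b) - cos(a+b)].
# The cos(a-b) term is the slow "envelope" that makes the signals
# look correlated during part of every beat cycle.
product = x * y
slow_component = 0.5 * np.cos(phase_diff)

beat_period = 1 / abs(f1 - f2)  # years per full in/out-of-phase cycle
print(f"Beat period: {beat_period:.1f} years")
```

With these numbers the beat period equals the entire 25-year record, which is exactly the situation where apparent in-phase behavior is easiest to mistake for causation.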
 

And on top of all this, there was only around 25 years worth of data to work with, which corresponded to about 12 cycles’ worth. So it was a tiny sample size as well. (To be fair, she at least knew that was an issue without my having to point it out.) 

 

I’m not sure if this was just this particular student and her advisor, or what, but I have worked with other oceanographers and meteorologists, and they do seem to like to apply heavy-duty statistics to really small amounts of data, which is a wonderful recipe for seeing what you want to see. 
 

I know for a fact that this student is totally sincere in wanting to understand her data, so there is no deliberate “obscurantism” going on, but plain old wishful thinking and applying sophisticated methods on inadequate data can be just as harmful. 

Lou Jost

Andy, I think biology in general has a pervasive problem with mathematical topics, even at the most basic and fundamental theoretical levels. I've spent the last years trying to fix some of these issues, especially the mis-quantification of biodiversity in ecology, and the mis-quantification of genetic differentiation in population genetics.

Andy Perrin

Biology does for sure, but I was talking about oceanography and meteorology! Those are physical sciences, which you would hope would fare better.

