Comments on NeuroDojo: "Academic astrology" (blog by Zen Faulkes)

neuromusic (2012-10-01 19:18):
Making sure I check the box to email follow-up comments...

neuromusic (2012-09-30 23:58):
@klab: No, I understand that you've now added the "note", which is commendable. But it should not be that hard to enforce the constraint in the code itself (e.g., don't let people set parameters that are outside the scope of the predictor: throw up a warning if people try to go outside the 5-12 range, and clear the graph).

klab (2012-09-24 19:16):
@neuromusic: Trying to replicate the issue. The h-index calculator webpage says:
"Note: The equations and the calculator model people that are in Neurotree, have an h-index 5 or more, and are between 5 to 12 years after publishing first article."
but you mentioned that when you call it, it says nothing.

Are people sharing a direct link to the java code that cuts away the surrounding text? We can try to prevent that, if this is happening. Would adding a description of the size of the error bar help? Just like the rest of the description, it is reported in the paper.

neuromusic (2012-09-21 17:05):
My issues have much less to do with the paper and the predictive model that the authors (klab et al.?) propose, and everything to do with the silly widget.

The predictive model they employ only works under certain constraints: they only analyzed authors who published their first paper at least 5 years ago, authors with an h-index greater than 4, and authors who were already listed in Neurotree. Under these constraints, it does pretty well. I can respect that, and I can respect the goal of developing a compound prediction. And I think the insights gained from the way the model weights different inputs are really interesting.

But then, to promote their work, they also publish a little widget titled "Predicting scientific success: your future h-index" which will let ANY SCIENTIST OR CAT (not just neuroscientists) enter ANY VALUES THEY WANT (unconstrained by their model's assumptions), and it will spit out an estimate of their future h-index WITHOUT ANY ERROR BARS.

This isn't "statistics"... it's a horoscope.

Above, klab defends the model, saying "Prediction is a statistical technique, with well quantifiable precision", and yet they do not offer this quantification in their widget. So, yes, though the widget's predictions might be statistically accurate under certain constraints, it is a horoscope, because we are given no indication of the confidence or accuracy of the individual predictions.

klab (2012-09-21 15:58):
I love the cat example, and your lolcat is a great addition! And it so nicely pointed out that the scope of the calculator (starting h > 5, 5-12 years since first paper) needs to be stated in the calculator.

I think predictions can be interesting if they are non-obvious. In our simple 5-factor predictor, I thought that the number of distinct journals was interesting: I have never seen a search committee mention this as a major positive factor, and yet it somehow makes sense that people who invest in breadth ultimately reap rewards (as measured by a higher h-index later in life). There must be some features that good scientists share! Which factors do you think predict actually relevant research?

But beyond interesting, predictions can be useful if they are good enough, which is actually an empirical question. Do universities that use metrics (along with committees) for evaluations show signs of more important science over the long term? Would you presume that it's always a mistake? What if you are a dean and you do not trust your committee? Or cannot afford one?

But I want to end with a positive example of (technically not overly interesting) predictions. Take Google as an example. It predicts which pages I am actually looking for. Is it perfect? No, it's not. But I use it, because it beats reading the entire internet.

Zen Faulkes (2012-09-21 11:29):
klab: Yes, I understand that predictions can be made statistically, with some degree of confidence.

But the limits are made clear by the "Justin's cat" example. If a prediction reaches an obviously illogical conclusion, I have to question its value in helping people make serious decisions.

There's also the question of whether a prediction is interesting. Does it generate new questions and insights? If 87% of the h-index variation is captured by number of publications alone, the conclusion ("publish a lot of papers") is not exactly a bolt from the blue.

Thanks for the comment!

klab (2012-09-21 10:56):
"First, I put no more faith in this predictor than I would in astrology. The h-index predictor ought to come with the sort of disclaimer that skeptics asked newspapers to add to horoscopes: 'purely for entertainment purposes.'"

I am not sure I understand your statement. Prediction is a statistical technique with well-quantifiable precision, and it has nothing to do with faith.

In practice, the h-index is used for prediction by many entities, implicitly assuming that "the h-index predicts future scientific impact". In fact, Hirsch argued and showed statistically that in some domains the h-index is predictive (http://arxiv.org/abs/0708.0646). The compound score we introduced is more precise and arguably more fair: after all, it gives higher prediction scores to people for publishing in competitive journals and for switching fields at some level, and it gives older people less of an advantage.

Are you simply making the point that metrics never capture all aspects of a scientist's work? That we should not rely exclusively on metrics when deciding hiring or funding? That metrics can be gamed? I guess we all agree on these points.
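neuromusic's suggestion above (enforce the calculator's scope in the code itself, rather than only in a note) could be sketched as a small validation step. This is a hypothetical illustration, not the actual widget code: the function name `checkScope` and the return shape are invented here, and the bounds come from the scope the calculator page states (h-index of 5 or more, 5 to 12 years after the first article).

```javascript
// Hypothetical scope check for the h-index calculator widget.
// The bounds reflect the stated fitting scope of the model: authors
// with h-index >= 5 who are 5-12 years past their first publication.
const SCOPE = { minH: 5, minYears: 5, maxYears: 12 };

// Returns { ok: true } when the inputs fall inside the model's scope,
// or { ok: false, warning } so the caller can show the warning and
// clear the graph instead of plotting an out-of-scope prediction.
function checkScope(hIndex, yearsSinceFirstPaper) {
  if (hIndex < SCOPE.minH) {
    return {
      ok: false,
      warning: `Model was only fit for authors with h-index >= ${SCOPE.minH}.`,
    };
  }
  if (
    yearsSinceFirstPaper < SCOPE.minYears ||
    yearsSinceFirstPaper > SCOPE.maxYears
  ) {
    return {
      ok: false,
      warning: `Model was only fit for ${SCOPE.minYears}-${SCOPE.maxYears} years since first paper.`,
    };
  }
  return { ok: true };
}
```

A widget using a check like this would call it before plotting and, on `ok: false`, display the warning and clear the graph, which is exactly the behavior proposed in the comment.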