Discussion
Experimental History
bjackman: I have had so many "why don't you just" conversations with academics about this. I know the "why don't you just" guy is such an annoying person to talk to, but I still don't really understand why they don't just. This article pointed to a few cases where people tried to do the thing, i.e. the pledge taken by individual researchers, and the requirements placed by certain funding channels, and those sound like a solid attempt to do the thing. This shows that people care and are somewhat willing to organise about it. But the thing I don't understand is why this can't happen at the department level. If you're an influential figure at a top-5 department in your field, you're friends with your counterparts at the other 4. You see them in person every year. You all hate $journal. Why don't you club together and say "why don't we all have a moratorium on publishing in $journal for our departments?" No temptation for individual research groups to violate the pledge. No dependence on individual funding channels to influence the policy. Just, suddenly, $journal isn't the top publication in that field any more. I'm sure there are lots of varied reasons why this is difficult, but fundamentally it seems like the obvious approach.
glitcher: I like the author's idea:

> So the solution here is straightforward: every government grant should stipulate that the research it supports can’t be published in a for-profit journal. That’s it! If the public paid for it, it shouldn’t be paywalled.

The article then acknowledges this isn't a magic solution to all the problems discussed, but it's so simple and makes so much sense as a first step. I'm no expert here, and there are probably unintended consequences or other ways to game that system for profit, but even if so, wouldn't that still be a better starting point?
MarkusQ: Part of the problem is we got tricked into thinking "peer reviewed" meant "true," or at least something like it. It doesn't. Not even close. Peer review doesn't even mean that a paper is free from errors, free from fraud, free from methodological mischief; it doesn't mean anything at this point. Yet we continue to act like it does. Darwin's work wasn't peer reviewed. Nor Einstein's. It's something we cooked up in the mid-1900s to deal with the fallout from another mistake ("publish or perish") that meant people had to try to publish even if they had nothing to say.
glitchc: We already have open-access publications: just put it on arXiv. Most researchers I work with do this already. The problem isn't access, it's citations. arXiv is not considered a credible citation source, since anyone can publish anything. TPCs don't use it in their list of citations; neither do grant funding agencies or government institutions. The current academic enterprise relies heavily on third-party gatekeeping. We rely on others to do the vetting for us. The first thing an academic does is check where a paper is published, before even reading it. It's a crutch. Any gatekeeper will naturally tend towards charging for access over time: it's a captive market, and the economics demands it. Unless we eliminate that dependency, we cannot change the system.
scottndecker: Based on the title, I figured it was referring to the USA moving to the metric system.
bjackman: I think that's also a good proposal, and I don't think it conflicts with the "prestigious departments stop publishing in $journal" idea at all. Probably we want both. The only difference is that the author is writing for a wide audience, and his best angle to change the world is probably to influence the thinking of future policymakers. Whereas I am just an annoying "why don't you just" guy, and my "audience" is just the friends I happen to have in prestigious research groups. Adam M also probably has lots of friends in prestigious research groups (IIUC, although he complains a lot about academia, he was quite successful within it, at least on its own terms). And the fact that he chooses to advocate government policy changes instead of what I'm proposing is probably a good indication that he knows something I don't about the motivations of influential academics.
snowwrestler: > Part of the problem is we got tricked into thinking "peer reviewed" meant "true," or at least something like it.

No actual working scientist thinks this. “Glitchc” has it right elsewhere in this thread: the motivating force behind journals is prominence and reputation, not truth.
D-Machine: Ah, but the naive public still broadly believes in peer review, and that high-profile journals do good review. And the prominence and reputation that come from these journals arguably rely on this (increasingly false) public perception. Would scientists feel the same if the public were more educated about how bad journals and peer review are? Not so easy to disentangle, IMO.
bad_haircut72: Unfortunately I think charging money is a necessary signal that a particular gatekeeper is doing a good job. We should recognise that money is a necessary part of this process, else there is no gate to keep. But we should reverse the economics by having people pay to get their stuff peer reviewed. Imagine if reviewing research papers was something you could get paid to do: the incentive then isn't to rubber-stamp things; your rating as a reviewer would come down to the quality of your reviews.
harshreality: Journals are not about providing access to science, much less public access. Journals are an academic-career-advancement service. It therefore makes sense that they do not pay academics: you don't pay your customers. That means they need to generate customers elsewhere. Once they've established a reputation, their policies and paywalls and fees are the result of trying to signal exclusivity and set an optimum market price. Until the supply side of the research market largely agrees on a way to use open-access repositories like arXiv as a primary career-advancement signal, complaining about closed-access journals is tilting at windmills. Changing the law to prevent journals from being able to copyright anything could potentially force the research industry to rapidly develop a new solution, but at the cost of short-term chaos and career instability for new academics.
orzig: Acknowledging that I am not an expert in this stuff, here is an idea: since momentum for these sorts of things is so important, which journal would be easiest to make a big example of, so that everyone understands it is possible? Just completely mercilessly drive them out of business, and then hound their executives when they try to get other jobs. It appeals to people's base instincts, but the last 10 years have shown those are pretty powerful. Then the movement which has formed around that can take down progressively bigger journals. You'd probably want a different organization building the alternative; the people with the personality to fight at the vanguard of the revolution don't tend to be great at building for the long term.
amluto: > I think charging money is a necessary signal that this particular gatekeeper is doing a good job.

I’ve never seen the slightest relationship between the charge to read a paper and the quality of review.
D-Machine: Because there isn't such a relation. It's a thing people believe when they don't have actual experience with peer review. If anything, predatory journals and low-quality pubs can charge more, since publication is more guaranteed.
bonsai_spool: > Darwin's work wasn't peer reviewed. Nor Einstein's

Except it was…? This is absurdly ahistorical, and the fact that you cross disciplines in trying to make an incorrect argument calls into question whether you are in science at all. The structure of peer review in Darwin's time was different: experts wrote monographs and gave lectures at symposia that then led to letters among their peers. Which is what happens now, if you take a step back. The volume of new work these days is incompatible with the older informal system, and in some ways our new paradigm is superior, as there is a formal period in which new works are reviewed.
MarkusQ: Sorry. I meant "peer reviewed prior to publication" as the phrase is presently used. I thought that was obvious. What you're calling "peer review" is what I would call "discussed" or "debated", which it certainly was. I dispute your claim that the new paradigm is superior.
D-Machine: The other factor preventing a fix is that people with no actual serious experience of academic publishing and peer review will defend these journals, because they still think that peer review acts like some kind of meaningful quality filter. But it really doesn't.

Because someone is surely going to try to defend journals via peer review in this thread, I want to provide a counter to the arguments that journal peer review does much good. Also, since everyone knows that if you just go to a poor enough journal you can be published, I am going to focus on the (IMO mostly false) claim that higher-profile journals are still doing a good thing here.

There are numerous studies showing that higher-profile journals in general have more retractions and research misconduct [1-2], lower research quality [3], in fact weaker statistical power and reliability [4], and that statistical reliability even in high-prestige journals is still extremely poor overall [5]. Also, making it through peer review is highly random and dependent on who you get as a reviewer [6], or is just basically a coin toss even when looking at reviewer groups:

> In 2014, 49.5% of the papers accepted by the first committee were rejected by the second (with a fairly wide confidence interval as the experiment included only 116 papers). This year, this number was 50.6%. We can also look at the probability that a randomly chosen rejected paper would have been accepted if it were re-reviewed. This number was 14.9% this year, compared to 17.5% in 2014. [7]

We should just move to arXiv-like approaches and allow the scientific community to broadly judge relevance and quality. Journals just slow things down and burn funding for very little gain or benefit to anyone other than the journal owners.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC3187237/
[2] https://www.pnas.org/doi/10.1073/pnas.1212247109
[3] https://pmc.ncbi.nlm.nih.gov/articles/PMC9382220/
[4] https://journals.plos.org/plosbiology/article?id=10.1371%2Fj...
[5] https://www.frontiersin.org/journals/human-neuroscience/arti...
[6] https://journals.plos.org/plosone/article?id=10.1371%2Fjourn...
[7] https://blog.neurips.cc/2021/12/08/the-neurips-2021-consiste...
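A useful way to read the NeurIPS consistency numbers quoted above is to compare them against a toy baseline where the two committees decide independently of the paper entirely. The ~25% overall acceptance rate used below is an assumption for illustration, not a figure from the thread; under that assumption, a purely random process would disagree even more than the committees did.

```python
# Sketch: random-decision baseline for the NeurIPS consistency experiment.
# Toy model: each committee accepts independently with probability p_accept,
# regardless of the paper. The 0.25 acceptance rate is an assumed figure.

def independent_baseline(p_accept: float) -> dict:
    """Conditional outcomes if committee decisions were independent coin flips."""
    return {
        # P(second committee rejects | first accepted) = 1 - p_accept
        "rejected_after_accept": 1.0 - p_accept,
        # P(second committee accepts | first rejected) = p_accept
        "accepted_after_reject": p_accept,
    }

baseline = independent_baseline(0.25)  # assumed overall acceptance rate

# Figures quoted from the 2021 experiment write-up above:
observed = {"rejected_after_accept": 0.506, "accepted_after_reject": 0.149}

for key in observed:
    print(f"{key}: random baseline {baseline[key]:.1%}, observed {observed[key]:.1%}")
```

Under these assumptions the observed 50.6% and 14.9% sit between perfect consistency (0%) and the fully random baseline (75% and 25%), i.e. review decisions carry some signal but are far noisier than "reliable gatekeeping" would imply.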
vladms: > higher-profile journals in general have more retractions and research misconduct [1-2]

Given that reviews are a mechanism to check for soundness, not truth, the higher-profile the venue, the more misconduct I would expect. I mean, would one risk prison to steal $10 or to steal $1 million?

> lower research quality [3]

To cite exactly from your link: "the evidence is mixed about whether they are strongly correlated with indicators of research quality." I find saying "lower" a bit too strong given the original quote.

> in fact weaker statistical power and reliability [4]

For a specific field: "cognitive neuroscience and psychology papers published recently"!

> statistical reliability even in high prestige journals is still extremely poor overall [5]

According to https://www.frontiersin.org/journals/human-neuroscience/arti... they targeted the bio/medical/psychology fields for the analysis, which seems to me too focused to support general conclusions.

> Also, making it through peer review is highly random and dependent on who you get as a reviewer [6], or is just basically a coin toss even when looking at reviewer groups:

It's a coin toss whether a paper gets accepted at all, and that's less than ideal, but what the system should do (at least) is reject obvious crap, not guarantee that good work gets accepted. The danger is the false positive (accepted even though it's crap) rather than the false negative (rejected even though it might be something useful).

Overall note: the review system is not ideal and should be improved. But it's a hard, complex and delicate problem.
bonsai_spool: I am sympathetic to the argument you wish to make, that peer review is no panacea, but the actual evidence you offer has nothing to do with this claim. You are trying to say that high-profile journals have more retractions, which is well known, as you share. How does that have anything to do with peer review? Are you saying that there is more review or less review in some cases and that this influences the retraction rate? On what evidence? In what world does the arXiv system moderate this discrepancy?
D-Machine: > How does that have anything to do with peer review?

I already addressed this. People know peer review can be bad, but some think "good journals" still do good peer review. This is not so clear.

> In what world does the arxiv system moderate this discrepancy?

Open systems allow the scientific community to figure out ways to properly assess research quality and value more cheaply, and without passing through (often arbitrary and random) small numbers of gatekeepers that don't even do a reliable or good job of gatekeeping in the first place.
bonsai_spool: Your argument depends on worse peer review at top journals, but fundamentally, you fail to show how doing any peer review is strictly worse than doing no peer review. I understand that we want arXiv to exist, and it does, and it's growing. That doesn't mean we don't want Nature or Science to triage the most compelling stories. Importantly, we can already begin the search for these 'cheaper' review strategies while not losing the helpful information filter we get by seeing where things are presented/published.
D-Machine: > Your argument depends on worse peer review at top journals - but fundamentally, you fail to show how doing any peer review is strictly worse than doing no peer review.

No, it doesn't. The argument is that peer review is incompetent gatekeeping in general, and so slows things down and makes things expensive. Also, I am countering the argument "we need journals because journals do peer review" by arguing "peer review by journals isn't clearly actually good"; I am not saying "peer review in general is unneeded", as I support review by the entire scientific community, rather than journal gatekeepers.

> you fail to show how doing any peer review is strictly worse than doing no peer review

I wasn't trying to show that. I have provided plenty of arguments to show why killing journal-based peer review could definitely speed things up and so potentially make things better. I want actual organic review by the community, not by tiny groups of gatekeepers.
snowwrestler: I don’t understand why people care so much about the cost of journal subscriptions. If we add up all the revenue from all major scientific journal publishers, is that a big number in the context of the national economy? Or even compared to one major tech company? I feel like this is one of those classic local minima where a community starving for resources fights vociferously amongst itself because they have internalized that they can’t win externally. From where I sit outside academia, the problem with science seems obvious: there is not nearly enough money going into it. I doubt bringing down the heads of for-profit journals would change that under current national conditions in the U.S.
alexwebb2: > Robert Maxwell, one of the architects of the for-profit scientific publishing scheme. When he later went into debt, he plundered hundreds of millions of pounds from his employees’ pension funds. You may be familiar with his daughter and lieutenant Ghislaine Maxwell, who went on to have a successful career in child trafficking.

Wow! Surprised that hasn't been mentioned here already. Jumped out at me immediately as a morbidly curious bit of trivia.
alpaca128: That reminds me of the recent term "Epstein class". It seems more and more fitting.
alansaber: Like some other posters here I think that a paid service is probably a necessary evil for long term quality regulation (although currently it skews too much into evil)
fireflash38: It's a reputation economy, like review sites. They start off truthful, and then as time goes on the incentives shift toward bad actors subverting them. Or they just sell out their reputation. Yelp, TripAdvisor, Wirecutter, hell, even Google results themselves. Once you start poisoning that well, it's difficult if not impossible to claw it back.
snowwrestler: Imagine being a scientist and reading “if you take this grant, you cannot publish your results in any of the most prominent journals in your field.” Sounds good?
bjackman: But IIUC there are entire fields where basically the whole US ecosystem is funded by federal grants. So if this policy gets enacted, those journals are no longer prominent. (Maybe you'd need an exception for fields where the centre of mass for funding is well outside the US, though.)
D-Machine: > The first thing an academic does is check where a paper is published, before even reading it. It's a crutch

IMO, academics that do this are not very competent, because we have plenty of research suggesting that higher-profile journals are in fact less trustworthy in many ways, or that there is no correlation at all between reputation and quality (see my other post in this thread). Yes, some trash journals publish all trash, but, beyond that, competent researchers scan the abstract, look at sample sizes and basic stats, and if those check out, skip to the methods and look for red flags there. Also, most early publications will be on an arXiv-like place anyway, so you can't look to reputation yet. Likewise, serious analytic reviews like meta-analyses don't factor in e.g. impact factor or paper citations, since that would be nonsense. They focus on methodology and stats. I really think we ought to shame academics that filter papers based on journal alone; it is almost always the wrong way to make a quick judgement.
blululu: I have seen more than one PI at an R1 university with multiple Nature publications use this heuristic. I would not call them incompetent.
D-Machine: Do you not notice the circularity of your reasoning here? Also, I didn't say incompetent, I said "not very". More competent researchers make journal rep only a very small factor, and it is not via the "high rep = more trustworthy" direction (which is the bad heuristic); it is "pay-to-publish journals = not trustworthy". Once you have ruled out a publication being in a trash journal, reputation is only a very minor factor, and methodological and substantive issues are what matter.
engineer_22: We have a gatekeeper already in the funding source - they do the work of vetting researchers prior to funding the work. Piggyback on this system so that the funding source publishes the papers itself, and researchers can only publish papers that are directly funded. This system requires the cooperation of an organization to build the publishing infrastructure, but this could go to the lowest capable bidder, with less drag on the system overall.
Atlas667: Everybody hates capitalism but most don't even know what capitalism means and how it expresses itself.The only people who like it are either capitalists or are paid enough to keep quiet about its ills or even pervert its ills into benefits.
kkfx: Science has never been free, and it isn't mostly progressive; like the bulk of the population, it is hyper-conservative without admitting it. So, the first flaw lies in the very social structure of those who practice science.The second problem, however, is a modern one: the pure, naked, and raw commercialization of science through "publish or perish", whereby the researcher is a Ford-style assembly line worker to be managed and who must be replaceable.Without a MENTAL paradigm shift, even before a material one, we will only be able to plug small leaks on a ship with a torn hull.
tokai: Just putting it on arXiv does not automatically make it OA. It needs a permissive license.
D-Machine: I think people in this post are using arXiv as a sort of metonymy / stand-in for OA here, but, yes.
bsoles: > So the solution here is straightforward: every government grant should ...

People who write such sentences have no idea what they are talking about or are being intentionally naive for whatever reason. Just because your one-sentence solution reads as simple doesn't make the actual solution simple, because such a solution involves changes to laws, changes to entrenched interests, changes to the distribution of money involved in the whole system, and changes to the balance of power between stakeholders. Unless the push for such changes is significant enough to overcome the current state of affairs (due to public opinion, redistribution of power or money, etc.), nothing will happen.
dlisboa: By your definition, a solution that doesn't change the current state of affairs, i.e. a simple solution, is not an actual solution. But there are plenty of simple solutions to real problems whose only blocker is upsetting the status quo. "We have no housing... let's build more housing" is, in fact, a very simple solution. That it doesn't happen has nothing to do with the solution itself.
light_hue_1: This defeatist attitude is why we can't have nice things anymore. Fun fact: all of those things happened, and this is already government policy for any NSF grant: https://www.nsf.gov/policies/document/faq-public-access

So maybe consider that when you give up on obviously good things based on some conspiracy theory that "the man" is trying to keep you down, what you're actually doing is being part of the system and endorsing it. Changes like this do happen; they just happen despite you.
butILoveLife: > The first thing an academic does is check where a paper is published, before even reading it. It's a crutch.

This is actually what ruined my respect for Academia. My science PhD buddy looked at the journal title and the claim, then said: "It's true!" I looked at him with horror. Who cares about the journal; I want to know the data and methodology. I've basically never forgiven Academia since. I see even Ivies put out bad research, and journals will publish bad research (the replication crisis and the Ivy fake psychology studies). For outsiders, there is a prestige to being a PhD or working as a professor. Now that I'm mid-career and have lived through the events I mentioned, plus seeing who stuck with academia... these are your C-grade performers. They didn't get hired by industry, so they stayed in school. They are so protective of their artificial rank because they cannot compete in industry. It's like being the cool person on the tennis team: locally cool, but not globally cool.
beambot: > This is actually what ruined my respect for Academia.

Spoken like someone who never went through grad school at a competitive R1 program. It was already a grueling 60-80 hour grind every week, with frequent all-nighters, high-pressure deadlines, absolutely minimal pay, thankless duties, and plenty of politics. It's about the same for professors too. We already paid our dues by helping peer review (for free) a half dozen papers for each one we submitted. Why should we be expected to review random papers on arXiv too...?
phil21: I don’t think folks in academia have come to terms with how much the above attitude has almost entirely undermined the credibility of the scientific and academic community in the eyes of the general public. You don’t need a degree to understand how much utter junk science is being published by those who think they are superior to you. Just read a few actual papers end to end and compare the data to the conclusions, and it becomes obvious very rapidly that you cannot “trust the science”, since it’s rarely actual science being done any longer. The academic community has utterly failed to understand that it needed to cull this behavior early and mercilessly. It did not, and it will take generations at best to rebuild the trust it once had, if it ever figures out that it needs to. Things are going to get much worse before they get better. You can’t take any published paper at face value any longer without going direct to primary sources and bouncing it off an expert in the space you still trust to give you the actual truth.
D-Machine: I fear you are right here, and that the problem is far more dire than much of academia realizes. I know enough highly intelligent people (some even with family / spouses in academia, surprisingly) that are otherwise very e.g. left / liberal and open, that are still basically saying academia needs to be gutted / burned down.
snowwrestler: The naive public does not believe anything in particular about peer review. They think new scientific results are significant when they read about them in the popular media; that's it. People who need to work professionally with peer review do understand what it actually does and its limitations. You seem stuck somewhere in the middle, caring deeply about a system you don't seem to fully understand.
D-Machine: > The naive public does not believe anything in particular about peer review

You'd need to provide evidence or an argument for this. The media reports on things in part based on journal prestige, and likely, when questioned, people will say they can trust such things because good scientists have looked at the work and say it is good. This would be an implicit belief that peer review is generally working well, even if they don't use the term "peer review".

> You seem stuck somewhere in the middle, caring deeply about a system you don't seem to fully understand.

Extremely presumptuous, as I work in this system, and have provided plenty of evidence for my claims. You've provided only sneers.
snowwrestler: You've provided evidence that prominent journals experience retractions, fraudulent results, etc. All true. But it is not the job of peer reviewers to decide what gets published. You've provided evidence that peer-reviewed science often turns out to be incomplete, inaccurate, wrong, fraudulent, etc. All true. But it is not the job of peer reviewers to assure completeness, accuracy, or freedom from fraud. A peer reviewer reads a paper and makes comments on it. That's it! They don't check primary data, they don't investigate methods, they don't interrogate scientists, they don't re-run experiments just to double-check. They assist a journal's editors in editing; that's it. The check on published scientific results is the scientific process itself, not the publishing process. Prominent results attract further investigation, which confirms or disproves the reality of the underlying phenomena. Again: that's not the job of peer review. Do some people ascribe too much authority to peer review? Yes, for sure. IMO your comments in this thread are exacerbating that problem, not addressing it.
D-Machine: > A peer reviewer reads a paper and makes comments on it. That's it! They don't check primary data, they don't investigate methods, they don't interrogate scientists, they don't re-run experiments just to double-check. They assist a journal's editors in editing; that's it.

Um, what? I have done all these things in reviews, and I know other academics that have done them as well. More confusingly, though, if you are saying most reviewers don't do these things (which I agree with), this would only strengthen my point. I'll let readers decide whether it is my comments that exacerbate the problem, or whether, perhaps, it is apologism for journal-based peer review that might be causing bigger issues in the present day.
sega_sai: In astrophysics we now have the Open Journal of Astrophysics, which is basically an overlay on top of arXiv: https://astro.theoj.org/ It is catching on, with ~200 papers published last year, after some of the astro journals started to charge gold open-access publishing fees. I think people now realize how crazy it is to pay for hosting a PDF and for sending your draft to a few referees who are paid nothing for their work.
frmersdog: Go read /r/LawyerTalk and enjoy the horror of the dawning realization that this is a lot of professionals. I think it's an issue that stems from getting too deep into the minutiae of the technical and cultural matters of one's field; you become a really good scientist, or lawyer, or SWE (by the standards of scientists and lawyers and SWEs), and end up coming to conclusions that everyone outside the bubble looks at and says, "That's absolutely asinine." Well, laymen just don't understand the details, you know? (Even though the whole point of these professions is to provide services to laymen, fix problems laymen come to them with, and guide laymen to make practical and logical decisions when a $500/hr appointment isn't called for.)These people take themselves too seriously, and other people only take them seriously when there are material ramifications for not doing so. Otherwise, they're viewed as pompous busy-bodies and don't do themselves any favors by playing to the role.
kergonath: > But we should reverse the economics by having people pay to get their stuff peer reviewed.

Not really. There would be perverse incentives, where the publisher benefits from accepting more articles. For good journals that would be a conflict of interest at best, where they would optimise the marketing-to-acceptance ratio. I can’t believe I am writing something good about scientific publishers, but at least when the reader pays, they are incentivised to publish things that have an audience. Otherwise, they are going to cut corners, and I mean more than they currently do. And it’s not hypothetical; there are already terrible publishers doing this.
armoredkitten: I've long wished that "journals" and academic societies would transition from a publishing model to a cultivation model. If everything is available on arXiv, that's great, but it also means the best of the best is mixed in with all the rest.Journals (in the sense of whoever is on the editorial board) don't need to cease to exist; they just need to transition to "here's our list this month of what the best new articles are on X topic". The paper's already there on arXiv, you could already read it before. But having a group of editors that cultivate a list of good articles (as well as the peer review process that can, in an ideal world, serve to improve a paper) can serve to make sifting through arXiv less overwhelming, and draw attention to papers in particular subfields, subject matter, or whatever other criteria might be relevant.
convolvatron: I don't see any reason why we shouldn't make this transitive. working professionals track the literature. if there were a standard way to publish a "I think this paper is interesting" signal, then we could roll that all up. there are certainly practitioners that I really do trust to be in the game for the right reasons, if they think a paper represents a contribution, then that's a strong signal for me.
butILoveLife: > It was already a grueling 60-80 hour grind every week with frequent all nighters, high-pressure deadlines, absolute minimal pay, thankless duties, and plenty of politics.

You know what else works really hard? A washing machine. Hard work alone doesn't create value. I could give you a spoon and tell you to dig a hole, or I could teach you how to use a digger.
Aurornis: > You know what else works really hard? A washing machine. Hard work alone doesn't create value.

My washing machine creates a lot of value for me. The time it saves me is incredibly valuable. Most machines that work really hard are valuable because they free up time. This wasn't the clever burn you thought it was.
WalterBright: Value is what you're willing to pay for something.Laundromats aren't particularly profitable businesses.
BeetleB: I went to an R1 university. Most students did not have a 60-80 hour grind. If they did, it was because of an overbearing advisor. Years later, those students are not ahead of those who had a more relaxed advisor. And chances are, those overbearing advisors are very invested in the current system.
currymj: It varies enormously by field. In CS you will have intense grind weeks around conference deadlines and a more manageable but challenging pace of life otherwise. In wet-lab science you live by the schedule set by your experiments, which often involves intense hours.
haritha-j: arXiv isn't the solution, but I think computer science conferences are. These have the same scientific rigour and standards in the review process as journals in other scientific fields, but don't price gouge. Yes, conferences are also a bit expensive, but you get a lot for your money, and they usually aren't out to make a big profit.
abought: Conferences can be truly wonderful, but they're not a universal replacement for publishing.

If you think journals are expensive, try sending your whole lab to a conference in another country, one that may not let everyone in, or where some of the attendees have to fill out paperwork before talking to a foreign national. (Does that ever make for awkward small talk...)

For all their many faults, journals provide access to a really wide audience and, in theory, make it possible to form connections between people who wouldn't be able to meet directly.
jpeloquin: Is the goal to get rid of the journals, or to ensure open access? Because the US already has open access mandates for federally funded research, immediate and without embargo. https://www.lib.iastate.edu/news/upcoming-public-access-requ...
SubiculumCode: Complete hogwash of a comment, based almost entirely on your limited experiences, to denigrate academic scientists. If you even knew these people, you'd know that most who remain in academia never considered industry in the first place. These people were not rejected by industry. In fact, it is the other way around: *they rejected industry*. They did so despite knowing they'd make more money, because they wanted to spend their lives pursuing research topics that interested them, with independence. Sometimes they feel the fool when money is tight and the hours are relentlessly long, but never have I seen it happen because they were rejected by industry.
SoftTalker: Why isn't a citation just a citation? It's a pointer to a source, that's all. If it implies some standards have been applied, or editorial or scientific review has been done, then that's going to have to be paid for by someone. TFA implies that doesn't happen: [and then] we stop doing all that stuff and then the cash just pours out. So a citation to an article in Nature isn't any better than one on arXiv.
gus_massa: > So a citation to an article in Nature isn't any better than one on arXiv.

The real problem is that nobody can grade and compare articles in different topics, so there are proxies like the number of articles in "serious" journals (whatever that means [1]) and the number of citations in "serious" journals (whatever that means [1]).

Do we also count citations on X/Twitter, Facebook, WordPress [2], StackOverflow, ...?

If links on HN also count as citations, there are 3 additional citations for my last paper:

http://www.example.com/gus_massa/very_good_paper_2026.pdf
http://www.example.com/gus_massa/very_good_paper_2026.pdf
http://www.example.com/gus_massa/very_good_paper_2026.pdf

[1] Which journals are serious and which are paper mills? At the extremes the difference is clear, but in the middle there is a gray zone.

[2] A citation in Tao's blog on WordPress should be worth at least half an official citation, or perhaps a whole point.
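The weighting question in gus_massa's footnotes (half a citation for a respected blog, a full point for a "serious" journal) can be made concrete with a toy scorer. The venues and weights below are entirely invented for illustration; the point is that someone has to choose them, and the gray zone is exactly the unknown-venue default.

```python
# Hypothetical venue weights -- the "whatever that means" is choosing these.
VENUE_WEIGHTS = {
    "serious-journal": 1.0,
    "respected-blog": 0.5,   # e.g. "half an official citation"
    "hn-comment": 0.1,
    "paper-mill": 0.0,
}

def citation_score(citations):
    """Sum venue weights; venues not in the table fall in the gray zone (0 here)."""
    return sum(VENUE_WEIGHTS.get(venue, 0.0) for venue in citations)
```

Under this toy table, one journal citation plus a blog mention and two HN links comes out around 1.7, which shows how much of the "grade" is really the weight table itself.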
shellfishgene: This is discussed in the article.
j45: Maybe studies could be dual-published in open-access publications and private ones. Then you get the private branded badge as social proof, and access can continue. Also, TIL anyone can publish to arxiv.org?
azan_: I tend to agree, but keep in mind that most likely you just don't even bother reading the shittiest of the shittiest papers just based on title and abstract. And for every good article there are like 10 unindexed shitty ones.
bonsai_spool: > I want actual organic review by the community, not by tiny groups of gatekeepers.But this happens—and good work is cited and talked about. I can't tell if you work in science, but this latter part is obvious.