Open Peer Review Proposal | Request for Feedback

Peer review has historically been the scientific pipeline's mechanism for filtering out low-quality research and providing feedback for improvement. Rigorous review, however, has traditionally been available only to those striving for publication in a journal, and many independent researchers, especially in Web3, do not have publication as a goal. This post outlines a proposal for an open peer review pilot at the Smart Contract Research Forum (SCRF) that provides rigorous review to such researchers. As part of this, we want to understand what crypto-native peer review would look like: we begin by paying our reviewers, and we will explore how emerging technology could benefit the peer review process.

Feedback on the process outlined below is requested and appreciated to help us improve it.

The creation of this process involved consideration of the sources collected in this knowledge repository.

Our goals with this pilot are to:

  • Increase the avenues to feedback for independent researchers

  • Explore the effects of financially incentivizing public peer review

  • Collect and share data from the process

  • Identify how to improve our design

  • Grow our international community of academics, researchers, developers, and industry experts

Our process:

In the spirit of creating an OPEN peer review system, the author's research and the peer reviews for that research will be publicly available on the SCRF Forum.

Our process is divided roughly into three steps:

  • Screen & Select

  • Post & Review

  • Synthesize Results

Screen and Select:

In this step, we identify the researchers and corresponding reviewers who will be the core participants in our open review.

Researchers: We will source Web3 researchers from our network (and by "our" we really mean @eleventh19's) who are working on Web3 problems such as DAO governance, cryptoeconomics, and blockchain, and who have research ready for review.

Reviewers: For each paper, we will recruit at least three reviewers who are knowledgeable about the subject area. We will tell the reviewers the author's name so that they can self-report whether or not they have any conflicts of interest.

Meta-reviewer: For each paper, we will recruit one meta-reviewer who is responsible for evaluating the quality of the reviews. Evaluating the quality of reviews is important so that we have a metric to determine how different variables influence the quality of a review.
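As a toy illustration of what this metric could enable, here is a minimal sketch in Python. The data, the 0-3 quality scale, and the "paid" variable are all invented for illustration; the point is simply comparing average meta-review ratings across experimental conditions.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: each review's meta-review quality rating (0-3),
# tagged with one experimental variable (here, whether the reviewer was paid).
records = [
    {"paid": True,  "quality": 3},
    {"paid": True,  "quality": 2},
    {"paid": False, "quality": 2},
    {"paid": False, "quality": 1},
]

by_condition = defaultdict(list)
for r in records:
    by_condition[r["paid"]].append(r["quality"])

for condition, ratings in sorted(by_condition.items()):
    print(f"paid={condition}: mean review quality {mean(ratings):.2f}")
# paid=False: mean review quality 1.50
# paid=True: mean review quality 2.50
```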

Post and Review:

In this step authors share their research, reviewers share feedback, and authors respond.

Authors will visit smartcontractresearch.org and make an account if they do not already have one. They will create a new topic in the 'Peer Review' category that includes a post with their research to be reviewed. Research can be posted directly on the forum as text, as a PDF, or as a link to where the research can be read.

Reviewers are requested to follow the instructions in Appendix A, sign their review, and post it on the forum during a narrowly-defined window of time to ensure authors get feedback in a timely manner.

Authors are requested to respond, rate each review, and follow the instructions in Appendix B.

Meta-reviewers are requested to respond, rate each review, and follow the instructions in Appendix C.

Templates to use will be provided based on the items in the Appendix.
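For instance, a machine-readable version of the review template might look like the sketch below. The field names and the numeric scales are hypothetical; the categories mirror the multiple-choice questions in Appendix A.

```python
# Hypothetical structured form of the Appendix A review template.
# Field names and numeric scales are invented; categories mirror Appendix A.
review_template = {
    "disclosures": "",          # conflicts of interest, affiliations, funding, etc.
    "summary": "",              # brief summary of the paper, its objective, and findings
    "suggested_revisions": [],  # concrete suggestions to improve the substance
    "strengths": [],            # strengths of the paper
    "ratings": {
        "scientific_correctness": None,    # 0 (major flaws) .. 3 (checked, sound)
        "organization_and_clarity": None,  # 0 (very hard to follow) .. 3 (clear)
        "openness_reproducibility": None,  # 0 (unavailable/unclear) .. 3 (easily reproducible)
        "reviewer_confidence": None,       # 0 (educated guess) .. 4 (certain)
    },
    "rating_explanations": {},  # brief explanation for each selection
}
```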

We select reviewers to ensure that the authors receive quality feedback from multiple sources with deep insight. However, in the spirit of open peer review, we strongly encourage anyone to leave reviews on the papers.

Synthesize Results:

In the final step, we will aggregate reviewer ratings into an overall view of the paper and conduct interviews with all contributors. The purpose of the interviews is to understand the contributors' perspectives on the process we used, how its public nature affected them, and how much the financial or reputational incentives motivated them.
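As a rough sketch of what this aggregation could look like, consider the following snippet. The numeric mapping of each Appendix A multiple-choice answer to a 0-3 score, and the sample data, are assumptions for illustration.

```python
from statistics import mean

# Each Appendix A multiple-choice answer is mapped to a hypothetical 0-3 score
# (3 = best option). Example: three reviewers' ratings for one paper.
reviews = [
    {"correctness": 3, "clarity": 2, "reproducibility": 2},
    {"correctness": 2, "clarity": 2, "reproducibility": 1},
    {"correctness": 2, "clarity": 3, "reproducibility": 2},
]

def aggregate(reviews):
    """Average each rating category across reviewers into an overall view."""
    categories = reviews[0].keys()
    return {c: round(mean(r[c] for r in reviews), 2) for c in categories}

print(aggregate(reviews))
# {'correctness': 2.33, 'clarity': 2.33, 'reproducibility': 1.67}
```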

Lastly, we will pay out reviewers and meta-reviewers. In this first round, we will pay $500 for each contribution to making better science. We chose $500 as a baseline partly to be in accordance with the 450 Movement, adjusting for inflation of course.

To Conclude:

This is the first round of open peer review. We plan to experiment with many variables (incentive structure, review format, the anonymity of authors and reviewers, etc.) in order to find a peer review process that works best for all parties and leads to the highest-quality research. Throughout our efforts, we hope to provide peer review for independent researchers who would not normally have it, to reward peer reviewers financially for their time, and to collect data that will help all peer review designers create an optimal process for their objectives. We look forward to working openly with the community throughout this process!

Acknowledgment: We'd like to thank Dr. Nihar Shah for his time, advice, questions, and an excellent resource surveying peer review.

This process is still a draft and we would sincerely appreciate your feedback to help us improve all parts of this process. Please leave any feedback about this process on this post.

Appendix A: Instructions for Reviewers

Thank you for reviewing a research paper as part of the Smart Contract Research Forum's Open Peer Review Project! We appreciate your contribution to the scientific discourse of Web3 research. We humbly request a moment of every reviewer's time:

Please familiarize yourself with these guidelines before writing your review:

  • Please remember that we are not asking for an accept or reject decision, as we are not a journal or a conference proceeding. We are instead creating a forum for scientific discussion and the improvement of research.
  • Please provide feedback on the scientific content of the paper.
    • As a peer reviewer, your role is not to proofread or check for typographic errors but to understand the author's scientific intent and identify any potential missteps.
    • When you find flaws, please help the author overcome them by suggesting how to fix them or by suggesting alternative approaches.
  • Please be empathetic and professional. As part of fostering a learning environment, we want to avoid any rudeness or arguments that attack the person rather than the position of the paper.
  • Please keep in mind that peer review is not just beneficial for the author, but it is beneficial for you as well! Reviewers report that peer reviewing helps them:
  1. advance their own research by stimulating new ideas and becoming aware of new methods

  2. stay informed of the latest research in their field before their peers

  3. reduce the amount of misinformation disseminated into the world

  • Please remember that your reviews may be relied upon not just by the author of the paper but also by readers attempting to critically assess it.
  • Bear in mind that your reviews and your name will be publicly available to anyone reading the Smart Contract Research Forum. Since anyone can access your reviews, you can add them to your CV. We encourage you to take the time to write a review you will be proud of.
  • Please share your review within two weeks after the research has been posted.

Please include the following in your review:

  • Disclosures of any competing or conflicting interests including but not limited to: organization affiliations, competing research theories, prior beliefs on the topic, funding sources, knowledge of the author(s), etc.
  • A brief summary of the paper, its objective, and findings.
  • Suggested revisions to improve the legitimacy and credibility of the substance of the paper.
  • Strengths of the paper.
  • Answers to the following multiple-choice questions. When answering these questions, copy and paste the questions into your review and delete the choices you do not select. Please provide a brief explanation of your selection.
    • Please rate this paper on scientific correctness (does the research have an identifiable question or objective? are sound methods and/or statistics used? are the conclusions logically based on the results?).
      • I am confident that the paper is technically sound, and I have carefully checked the details.
      • The paper appears to be technically sound, but I have not carefully checked the details.
      • The paper has minor, easily fixable, technical flaws that do not impact the validity of the main results.
      • The paper has major technical flaws.
    • Please rate this paper on organization and clarity of language (is the writing easy to follow? are concepts well-explained? is the document well-organized?).
      • The paper is well-organized and clearly written.
      • The paper is well organized but the presentation could be improved.
      • The paper is somewhat clear, but some important details are missing or unclear.
      • The paper is unclear and very hard to understand.
    • Please rate this paper on openness and reproducibility (is the data available? is the code? are the methods explained clearly enough to be replicable?).
      • Key resources (e.g., proofs, code, data) are available and key details (e.g., proof sketches, experimental setup) are comprehensively described for competent researchers to confidently and easily reproduce the main results.
      • Key resources (e.g., proofs, code, data) are available and key details (e.g., proofs, experimental setup) are sufficiently well-described for competent researchers to confidently reproduce the main results.
      • Key resources (e.g., proofs, code, data) are unavailable but key details (e.g., proof sketches, experimental setup) are sufficiently well-described for an expert to confidently reproduce the main results.
      • Key details (e.g., proof sketches, experimental setup) are incomplete/unclear, or key resources (e.g., proofs, code, data) are unavailable.
    • How confident are you in your understanding of this paper? Why?
      • I have checked all points of the paper carefully and am sure that I know exactly what the author meant in their writing. I am certain I did not miss any aspects that could otherwise have impacted my evaluation.
      • I checked the important points carefully and felt pretty sure I understood what the author meant in their writing. It is unlikely, though conceivable, that I misunderstood some aspects that could have impacted the quality of my evaluation.
      • There's a chance I misunderstood some aspects of the paper, or that I do not have the depth of knowledge required to confirm that what the author writes is accurate. Moreover, I may not have carefully checked some of the details, e.g., the proof of a theorem, experimental design, or statistical validity of conclusions.
      • I am able to defend my evaluation of some aspects of the paper, but it is quite likely that I missed or did not understand some key details.
      • This paper was very confusing to me. My evaluation is an educated guess.
  • Any other questions or comments about the research that were not captured here?
  • Please share any thoughts on how we can improve the peer review process in this forum post.

Appendix B: Instructions for Authors Responding to Reviews

Thank you for responding to reviews of your research paper at the Smart Contract Research Forum! We appreciate your contribution to the scientific discourse of Web3 research. When responding to reviewers, we humbly request that you do the following:

Please familiarize yourself with these guidelines before writing your response:

  • Please remember that reviewers are attempting to help authors improve research and have dedicated a significant amount of time to doing so.
  • Please respond to each reviewer individually and make it clear which revisions you are implementing and where in the paper you have done so.
  • Please be empathetic and professional. As part of fostering a learning environment, we want to avoid any rudeness or arguments that attack the person rather than the position of the paper.
  • Bear in mind that your responses and your name will be publicly available to anyone reading the Smart Contract Research Forum. Since anyone can access your responses, you can add them to your CV. We encourage you to take the time to write a response you will be proud of.

Please include the following in your response:

  • Disclosures of any competing or conflicting interests including but not limited to: organization affiliations, competing research theories, prior beliefs on the topic, funding sources, knowledge of the author(s), etc.
  • A response to the suggested revisions, with an explanation for each suggestion of whether the revision was implemented and how
  • Answers to the following multiple-choice questions with a brief explanation of your ratings for each:
    • Please rate this review on the level of understanding it showed of the research
      • This review showed a thorough understanding of the research objective, methods, and conclusions
      • This review mostly understands the research objective, methods, and conclusions but shows that it may have missed some small details
      • This review partly understands the research objective, methods, and conclusions but misses significant details
      • This review does not understand the research objective, methods, and/or conclusion
    • Please rate this review on tone
      • This review makes use of appropriate language and is written with complete professionalism and respect
      • This review is written with professionalism and respect but is occasionally condescending
      • This review is written with some professionalism and respect but makes use of inappropriate language or is condescending
      • This review is written with little professionalism and respect
    • Please rate this review on helpfulness (did it help you strengthen your paper?)
      • This review provided clear areas to strengthen the substance of the paper and easy-to-understand descriptions of how to improve
      • This review provided clear areas to strengthen the substance of the paper and descriptions of how to improve
      • This review provided areas to strengthen the paper and/or descriptions of how to improve
      • This review did not provide substantive or clear areas to strengthen the paper
  • Any other questions or comments about the review that were not captured here?
  • Any thoughts on how we can improve the peer review process? Please share them in this forum post or in our final interview/survey

Appendix C: Instructions for Meta-reviewers

Thank you for evaluating reviews of research papers at the Smart Contract Research Forum! We appreciate your contribution to the scientific discourse of Web3 research. When evaluating reviews, we humbly request that you do the following:

Please familiarize yourself with these guidelines before writing your response:

  • Please remember that reviewers are attempting to help authors improve research and have dedicated a significant amount of time to doing so.
  • Please be empathetic and professional. As part of fostering a learning environment, we want to avoid any rudeness or arguments that attack the person rather than the position of the paper.
  • Bear in mind that your reviews and your name will be publicly available to anyone reading the Smart Contract Research Forum. Since anyone can access your reviews, you can add them to your CV. We encourage you to take the time to write a review you will be proud of.

Please include the following in your response:

  • Disclosures of any competing or conflicting interests including but not limited to: organization affiliations, competing research theories, prior beliefs on the topic, funding sources, knowledge of the author(s), etc.
  • Answers to the following multiple-choice questions with a brief explanation of your ratings for each:
    • Please rate this review on the level of understanding it showed of the research
      • This review showed a thorough understanding of the research objective, methods, and conclusions
      • This review mostly understands the research objective, methods, and conclusions but shows that it may have missed some small details
      • This review partly understands the research objective, methods, and conclusions but misses significant details
      • This review does not understand the research objective, methods, and/or conclusion
    • Please rate this review on tone
      • This review makes use of appropriate language and is written with complete professionalism and respect
      • This review is written with professionalism and respect but is occasionally condescending
      • This review is written with some professionalism and respect but makes use of inappropriate language or is condescending
      • This review is written with little professionalism and respect
    • Please rate this review on helpfulness (did it help the author strengthen their paper?)
      • This review provided clear areas to strengthen the substance of the paper and easy-to-understand descriptions of how to improve
      • This review provided clear areas to strengthen the substance of the paper and descriptions of how to improve
      • This review provided areas to strengthen the paper and/or descriptions of how to improve
      • This review did not provide substantive or clear areas to strengthen the paper
  • Any other questions or comments about the review that were not captured here?
  • Any thoughts on how we can improve the peer review process? Please share them in this forum post or in our final interview/survey

@UmarKhanEth I really appreciate you posting this and your work on this project in general.
Overall, I think this is a good initial protocol, but I did have some clarifying questions that I hope you can answer.

From the Screen and Select section, I'm interested in some of that selection process. It makes sense that a convenience sample of researchers would be pulled in through network connections, but I am curious how reviewers and meta-reviewers will be selected.

I'm also particularly interested in the meta-reviewers, as I have seen systems like this elsewhere. Is there a training procedure they will go through, or anything like that? The templates in the appendices are likely helpful, but I wonder if reaching some type of consensus on how to use them is part of your plan?

I'm also interested in how you might address "from the wild" reviews. There are planned reviewers and meta-reviewers in this proposal, but what happens to reviews from people not directly involved in the project? As this will be on the forum here, any forum user could presumably also use the templates and offer a review. Is there a plan for that type of interaction?


Do you have a sense of how incentivization would work with this model? I'm particularly interested in when you think it would be best to introduce compensation. Would it be a SourceCred-like model where you're rewarded periodically based on performance to date (which might maximize engagement), or would you steer toward something more like an honorarium, paying reviewers in advance for their work? I'm also interested in the scale of compensation: since peer review is historically considered almost a monastic pursuit, do you think we should pay a fair market rate for people's time, or would this be more of a sweetener, a little extra to make the emotional labor more palatable?

Aside from my questions, @UmarKhanEth I also wanted to thank you for bringing this to the forum and for your obvious hard work on it! I'm really excited to see how this all plays out!


@UmarKhanEth Echoing @zube.paul and @jmcgirk in thanking you for this contribution to the forum. You say in your opening paragraph...

I appreciate that conventional publication is something these authors might be trying to "rise above," so to speak. But in that case, what exactly is their motivation, other than to improve their own thought and articulation process? Do they wish to short-circuit the protracted publishing process and get their work into the hands of industry more quickly for real-world results, for example? Raise the level of community discourse and coherence? Please tell us a bit more.

Regarding reviewer incentives, I thought you made it clear that you were paying $500 per review, starting immediately. Is there more to that part of the story?


Hey @zube.paul, @jmcgirk and @rlombreglia! Thank you all for your engagement and questions!

This is a great question and one that has been a challenge for us. At least for our first round of peer review, we're planning on mimicking journals and doing this very similarly to how we're selecting researchers: by leveraging existing relationships and network connections. As a backup, we are also sending out a survey to gauge general interest from online communities. We'll be posting it in group chats, forums, etc. Here is a link: https://forms.gle/xwBBpAGBf2UQKNJb7

The more, the merrier :)

What do you see as alternative ways of finding reviewers?

We don't currently have any training planned, but this is a great point and something we will need to think about. There is an interesting study showing that junior reviewers who were trained produced 30% of the highest-rated reviews at a conference (as determined by meta-reviewers), while other, untrained reviewers produced only 14% of the highest-rated reviews. This suggests that training can meaningfully improve review quality.

I wonder what a training program for meta-reviewers might look like. Have you seen something like this before? My initial thought is to provide example reviews together with the ratings they should receive, to 'calibrate' meta-reviewers.

Yes! We encourage open reviews from anyone. Such a reviewer will qualify for a grant just like our recruited reviewers, so long as the review is of the same quality (i.e., spam reviews won't be rewarded).


We're keeping it simple to start and offering a 'Peer Review Grant' of $500 to those who successfully complete a review. Meta-reviewers will also be given a grant of the same amount for every three reviews they evaluate.

This is a great point and a difficult one. So far, our thinking has been that individuals' hourly rates can vary wildly. Some individuals charge hundreds of dollars per hour, which may not be something we can support. We've tried to set a baseline of 'fair' compensation for the amount of work by offering $500. High-earning individuals, and those who do not want to be paid for review, will have the option of sending this money to a charity of their choice.

In the 450 Movement, the author concludes that $50 per hour for nine hours of work, or $150 per hour for three hours, both lead to $450 and would be a fair rate. This is in line with a study showing that, on average, a peer review takes six hours; a rate of $75 per hour for six hours also comes to $450. We've slightly increased the amount to $500.
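For concreteness, here is the arithmetic as a quick check (the rate and hour figures come from the 450 Movement discussion and the six-hour estimate above; the snippet just multiplies them out):

```python
# Rate-times-hours checks behind the $450 baseline.
scenarios = [
    (50, 9),   # $50/hour for nine hours
    (150, 3),  # $150/hour for three hours
    (75, 6),   # $75/hour for six hours (the average review length per the study)
]
for rate, hours in scenarios:
    print(f"${rate}/hr x {hours} hrs = ${rate * hours}")
# Each scenario totals $450; we round this up to a $500 grant.
```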

Part of the difficulty with this question is that there is historically no agreed-upon market rate for peer review. We're taking a bit of a shot in the dark, but it's the best target we have right now based on what we've been able to learn. I'd love to update this number based on what others think or on our findings from the first round.


Part of what we've seen is a lot of researchers posting directly online to pre-print platforms like arXiv or SSRN, or in some cases to blogging sites like Mirror or Medium. This is definitely a lot faster than going through a publication's peer-review process, which can often take months or years. Yet exposing scientific analyses to criticism is a crucial part of the scientific process.

This growing trend of posting papers directly online is something we've seen particularly in the computer science and physics communities. Separately, we've also seen it among Web3 researchers who work at DAOs instead of universities. While being published in a journal is something the academic system rewards heavily with things like tenure and grant funding, it doesn't appear to carry the same weight in Web3. Here, sending someone a link to your writing, so they can become familiar with your thinking on a specific matter, may be enough to get hired.

I do hope researchers who participate in peer review on SCRF are motivated by receiving feedback that helps them improve their research. This is really what it's all about: the pleasure of finding things out, the seeking of more precise, more accurate truth. I hope I don't sound too naive :sweat_smile:


I'd like to share some literature summaries we've been working on, motivated, inspired, and pushed by the researchers on this forum.

They can all be found (including links to the papers) in our knowledge repo.

Here are some TL;DRs:

Shah, Nihar B. (2022). Challenges, Experiments, and Computational Solutions in Peer Review. Communications of the ACM, Vol. 65, No. 6, pp. 76-87.

TLDR:

  • Peer review is supposed to be an objective, rational, and fair process for improving research and filtering out bad research. However, it is challenged by the biases of the humans involved and often produces subjective or insufficient outcomes. Studies with fictitious manuscripts have found that peer reviewers detect only between one quarter and one third of errors on average.
  • When running a peer review process, it is difficult to find reviewers with the right expertise, to prevent dishonest behavior, and to account for the biases and differences of the people involved. These biases include favoring positive results, results that confirm the reviewer's prior beliefs, and papers by top authors or institutions.
  • There are some computational solutions being explored to solve these problems. However, there is often insufficient data because reviews are anonymized, not shared, and do not have a ground truth decision to compare to.

List, B. Crowd-based peer review can be good and fast. Nature 546, 9 (2017).

TLDR:

  • A synthetic chemistry journal tested collaborative, crowdsourced peer review with 100 anonymous experts and found feedback to be faster and more comprehensive than traditional peer review.

Baldwin, Melinda (2018). Scientific Autonomy, Public Accountability, and the Rise of "Peer Review" in the Cold War United States. Isis 109:3, 538-558.

TLDR:

  • Peer review as we know it today is a relatively recent phenomenon. While journal editors have often asked other experts for advice, and have occasionally forwarded segments of criticism to authors since the late 1600s, it did not become standard practice to treat peer review as the distinguishing factor between science and non-science until the late 20th century.
  • The founding of the NIH in 1948 and of the NSF in 1950 saw funding for scientific research increase by a factor of 25 in just five years. By the 1970s, this had greatly increased the scrutiny scientists were under, especially from Congress, regarding how they decided to allocate grant funds.
  • Wanting to continue influencing funding decisions, scientists cast 'peer review' as "the crucial process that ensured the credibility of science as a whole."

Grossman, Alexander & Brembs, Bjorn (2021). F1000Research article outlining the market rates for publishing scholarly articles.

TLDR:

  • Journal subscription rates have risen drastically over the past decade. In response, the authors provide a quantitative approach for determining the costs of publishing a scholarly article, based on step-by-step calculations and numbers shared publicly or privately by publishers and service providers.
  • Costs range from less than $200 per article to about $1000 per article, depending on the size of the publisher, the use of pre or post-publication peer review, the rejection rate, and implemented technology.
  • With average subscription pricing of around $4000 per article, these results indicate that publishers typically spend around $2200 per article on non-publication costs in order to maintain their reported 30% profit margin of about $1200 per article.

Greaves, S., Scott, J., Clarke, M., Miller, L., Hannay, T., Thomas, A. & Campbell, P. (2006). Nature's trial of open peer review. Nature.

TLDR:

  • Nature ran an open peer review trial in 2006 which gave authors the option to allow online, public comments on their paper to be a part of the editorā€™s consideration in publishing (in addition to ordinary peer review)
  • While expressed interest was high, participation among authors was low and comments were sparse. Only 5% of authors agreed to open peer review, and 38 of 71 (54%) of those papers received comments.

(2006). Peer review and fraud. Nature, 444, 971-972.

TLDR:

  • After Science published a fraudulent paper about human embryonic stem cells, Nature assessed the risk of fraud and how to prevent it. Could open peer review have caught the fraud and prevented publication?
  • The answer is no, at least based on Nature's 2006 trial, in which participation was too low because researchers were more focused on writing papers than on volunteering to leave open, public reviews of their peers.

After Science published a fraudulent paper about human embryonic stem cells, Nature assessed the risk of fraud and how to prevent it. Could open peer review have caught the fraud and prevented publication?

What do you think of the idea of working some kind of credibility score into the system? DARPA was working on a machine learning model designed to predict whether papers would be reproducible: https://www.darpa.mil/program/systematizing-confidence-in-open-research-and-evidence


This is super cool. Thanks for sharing! I'm really curious to know how they implemented this.

I'm in favor of credibility scoring; I think when done right it can make a subjective peer review process much more objective, especially when combined with AI/ML. I would love to implement something like this here, but it would require a dataset of peer reviews and scores to train on, and I'm aware of few such public datasets. This is one of the biggest problems facing peer review: a lack of trusted anonymization technology that would allow those who run the peer review process to release the reviews. Maybe something ZKPs will be able to help with one day (EDIT: or more open review!)
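To make the idea concrete, here is a minimal sketch of how a credibility score could be trained if such a dataset existed. Everything below is an assumption for illustration: the toy data, the choice of features, and the use of a plain logistic regression (to my knowledge, DARPA has not published the details of its model).

```python
# Hypothetical credibility-score model: predict whether a paper's results
# will reproduce from simple features of its peer reviews.
from sklearn.linear_model import LogisticRegression

# Invented features per paper: [mean correctness rating (0-3),
# mean reproducibility rating (0-3), number of reviews]
X = [
    [2.7, 2.3, 3],
    [1.0, 0.7, 3],
    [2.3, 2.0, 4],
    [0.7, 1.0, 2],
]
y = [1, 0, 1, 0]  # 1 = later reproduced, 0 = failed to reproduce

model = LogisticRegression().fit(X, y)

# The "credibility score" is the predicted probability of reproduction.
print(model.predict_proba([[2.0, 1.7, 3]])[0][1])
```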


This proposed peer-review process really held my attention. I am particularly interested in the role of the meta-reviewer. This aerial view will require special skills in both deconstruction and synthesis/integration.

Has there been any discussion about a system for protecting human participants in research, or is that presumed to be the author's responsibility? Please forgive the oversight if this was discussed above, but I'm taking in lots of new information at once.


Hello @kdouglass! Thank you for your comment and question!

As you've highlighted, the meta-reviewer is a challenging role to fill and one of the most critical.

There has not yet been much discussion about protecting human participants in research. This is one area where we could do more to encourage reviewers to be critical of the author's practices. What sorts of protections do you think we should expect of researchers?

I think it would be fair to leave the responsibility for protecting human participants to the researcher and then ensure they are behaving ethically. Perhaps, to be proactive, we can share some guidelines with researchers based on what other organizations have already come up with.


Sharing guidelines is a good start. It brings awareness to researchers that they should at least be thinking about how they collect data or use existing datasets.


The following question helps me frame this conversation: what type of infrastructure best structures the Forum's relationships with researchers, with institutions (potential partners), and with the broader community?

We can think of a research-intensive academic institution, for example, as a ready-made infrastructure for the person who conducts research. The institution equips the researcher with a range of tools:

  • institutional reputation,
  • data collection and analysis software,
  • meeting spaces,
  • collaboration tools,
  • grant administration, and
  • travel funding.

This infrastructure also includes tools to protect human participants in the researcher's work. An apparatus and protocols for protecting human participants in research also protect the researcher, shielding them from personal lawsuits, for example.

The research protection tools support the relationship between the broader community and the institution as it carries out its "third mission". Good policies and practices are good public relations.

While researchers affiliated with SCRF could assume some responsibility for protecting human participants, this is highly subjective from person to person. Since many institutions and other online entities have already instituted systems of practices and protocols, SCRF can outsource protections for human participants. This outsourcing would need to involve a clear understanding of whose guidelines and processes will be followed and how the processes will be documented with SCRF. This would provide the clear pathway/infrastructure researchers need to do their work. They would have clarity on, for example, the definition of consent, and on when they might be unintentionally involving human participants in research without the participants' consent.

Being clear on the process for protecting human participants in research, even if outsourced, is a proactive risk-mitigation strategy. It would allow SCRF to self-regulate. It would also protect the organization from being blindsided by abuse claims, even though researchers act independently. Finally, having a predictable, reliable process would give SCRF access to data about how the protection of human participants in Web 3.0 research is developing.


Hi Umar, excited to see this open peer review project being facilitated! One thought I have about the incentive:

Could we have a tracking system for how much work one has reviewed and had reviewed? The reputation one builds in the open peer review system could then be an additional incentive for the work.


@kdouglass This is a great framing for how to build a policy for protecting human participants in SCRF-affiliated research. This kind of policy would also protect us and the researcher. It would make research much easier to start and stop because there would be clear best practices to follow. Peer reviewers could use that policy as a reference to check whether researchers are in violation.

I think this would also have wide-reaching implications for SCRF beyond this peer review project, and could ultimately inform a clear set of expectations for SCRF-funded research. I'm curious to dig deeper into examples of institutions we could outsource protections to and examples of effective policies. Although outside the scope of this project, I could see this being a valuable area of inquiry for SCRF to document and propose an approach to for future research, especially research relating to ethnography, organizational design, or governance as it is practiced within existing DAOs with real people. We should continue this conversation outside the peer review thread; I can reach out on Discord!


Hi @GUA, thanks for your comment and question!

Yes! Social incentives may even be stronger than financial incentives. One way we could build this incentive is a reputation tracker for reviewers that aggregates and displays the ratings they receive from meta-reviewers. They could then show this off, for example on a CV.

Publons is an example of such a system for existing journals: it verifies the number of reviews a researcher has completed. It has had a mixed reception; many academic departments do not reward peer review even when verified, and within the open science community, Publons being bought by Clarivate was seen as shifting the intention of the site from realigning incentives to gathering data.

It's notable that where Publons rewarded and tracked the number of reviews, we would focus on both number and quality. Hopefully, this would optimize for better scientific outcomes.
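As a starting point, a reviewer's reputation could be a simple function of both. The sketch below is one hypothetical weighting, nothing more:

```python
# Hypothetical reputation score combining review count and review quality.
# Each entry is the 0-3 quality rating a meta-reviewer assigned to one review.
def reputation(meta_ratings: list[int]) -> float:
    """Total meta-review points: a reviewer earns more both by writing
    more reviews and by writing better ones."""
    if not meta_ratings:
        return 0.0
    avg_quality = sum(meta_ratings) / len(meta_ratings)
    return len(meta_ratings) * avg_quality  # equivalently, sum(meta_ratings)

print(reputation([3, 2, 3]))  # 8.0: three reviews averaging ~2.67 quality
```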
