Implementing SourceCred on the Forum

Important News about SourceCred

Just came across this :confused: The organization behind SourceCred has dissolved.

Truth is, SourceCred had been dealing with a lot of challenges lately that were hard to overcome. We have also experienced this in Metagame, where SourceCred has been an important cornerstone of the system and a crucial part of how people are compensated.

I’m not sure yet what this will mean for SourceCred as a product, but I guess that without the team behind it, using it for compensation will no longer be feasible.

A moment of silence…

@zube.paul @jmcgirk @Larry_Bates @valeriespina

6 Likes

I had thought about this before my original post, because I see a long-term issue that may arise. If one of the goals of SCRF is to create an environment in which building a reputation within SCRF carries some sort of “weight”, then closing off the capacity to “weigh” in fact creates a closed perimeter rather than an “open” environment. On the one hand, a closed perimeter is easier to monitor and measure; however, this creates the same closed-gate quandary that “deSci” seems to be attempting to open.

I don’t want to accidentally create another closed environment that is transparent in one way while isolationist and self-important in another.

Granted, this is why I posed these questions as hypothetical situations and not definitive concepts that I believe should be hard lines. I don’t have a fixed perspective on implementation, but I do hesitate to create a closed environment that COULD potentially be made more open with some extra time spent finding standard weights that the community agrees are representative.

4 Likes

I’m not in a position to evaluate the truth of this statement. But if it turns out to be accurate, then it seems SCRF should be looking to a comparable mechanism from a competitor. The only one I can think of is Coordinape (which I’m not even sure is a direct competitor), but Coordinape’s foundation in “gift economies” strikes me as interesting.

5 Likes

I found this article that suggests academics are more concerned with getting recognition than financial rewards:

"3. The electronic questionnaire contained 11 questions (seven closed and four open‐ended) to consider motivations for the secondment and thoughts about reward and recognition.

  1. Eighty‐six percent of respondents were not concerned with financial reward. Ninety percent of respondents felt that financial reward was important to their managers.

  2. Ninety‐one percent of respondents felt recognition for their work was highly important."

Again, this is not to say that a reward mechanism to compensate monetarily shouldn’t be part of the equation. I am inclined to lean towards a focus on reputation and recognition to get the kind of engagement the forum is looking for, rather than just “pure” engagement for the sake of engagement. That then becomes the question: does the forum want “engagement for the sake of engagement” or does the forum want “high-quality engagement”?

I was assuming it was the latter, but discussion may come to indicate that the former is what the experiment is going for.

5 Likes

The problem with using reputation and recognition to compensate our academic friends is that “preftige” (as the most obnoxious students call it) tends to spring from ancient wells and is rooted in thousand-year-old academic traditions, hence the necessity of the regalia and parchment degrees written in Latin. I don’t think we at SCRF would be able to give them a valuable reputation boost just yet. But exposure is definitely useful for academics once they’re building a case for tenure, so podcasts, social media call-outs and press clippings would be a big deal. Research funding, when it’s unrestricted, can also be useful, especially for someone like a postdoc or adjunct/visiting assistant professor who hasn’t yet wriggled into the tenure system, but of course they’re also likely to be woefully underpaid and would probably appreciate cold hard cash just as much.

4 Likes

I am trying to move away from assumptions that academics want money “to post on SCRF” because they’re underpaid, as that is a logical fallacy.

I understand the impetus to give underpaid academics money as an incentive to post on SCRF; however, I’m not convinced that will give us the result we actually want, and it will invite SCRF to be gamed by people looking to “make money” rather than by underpaid academics looking to improve their reputation.

I think this conversation has been heavily weighted by assumptions that the data is showing to be not necessarily accurate or valid. Again, I am not “against” this type of experiment whatsoever. I am, however, looking at the data and the results of the initial test run, and I am concerned that SCRF is just creating another closed-wall environment, whereas I was under the impression the goal was to create an “open” one.

I DON’T think SCRF itself would give a reputation boost, and I am not trying to imply that SCRF should be able to build reputations “now”; rather, “over time” people’s work should be able to build their reputation. Currently, our system would skew towards people who are paid to post on the forum. In fact, they would end up getting double-rewarded, because the system would favor the people who get paid to engage, as they would clearly have the most engagement.

Already, this type of system approach is putting the SCRF employees above the community members, as the test run came back with SCRF employees at the top of the list. That is the exact scenario I believe will end up having a long-term net-negative effect, because it will inherently skew towards the SCRF team over time.

Considering the results would continuously skew towards SCRF members, there seems to be a serious need to make sure that doesn’t happen if the goal is to get “community participation”.

5 Likes

I think you’re bringing in a valuable perspective, and as you know I’m mostly trying to keep the conversation on here going… I do wonder whether it is a logical fallacy that academics might want to receive compensation for posting on SCRF, though.

From my perspective, it seems like incentivizing graduate students has helped encourage content production, and I suspect that the problem we have generating engagement comes more from audiences not knowing about SCRF than improper incentives – although if there’s a good way to test that, I think that would be really valuable.

I see SourceCred as a way of making it easier for people to earn money commenting on SCRF without having to go through the invoicing system, one that’ll really benefit us once we have a larger audience. For what it’s worth, I think SCRF can give grad students, or honestly people like me who are trying to burnish their blockchain credentials, a bit of a CV boost.

I think we’re all in agreement that we should either exclude the engagement team from the rankings or heavily penalize us, unless we decide to change our compensation so it depends on SourceCred performance, which I find a bit scary.

5 Likes

I don’t think it’s “necessarily” a logical fallacy; however, the data from that study suggests it might be. The biggest concern I have is that this particular structure doesn’t “need” the monetary incentive attached to it in order to function. That it can function without the DAI reward and still track activity seems to be enough of a test bed to see if it can accurately gauge the type of engagement we are looking for, before attaching monetary compensation to the system. The assumption that “there needs to be monetary payment” associated with this part of the system may be the step that goes further than it needs to.

If we can encourage quality engagement and participation from the community by tracking their engagement score, does adding monetary reward invite too much negative consequence when that part of the system may not be necessary?

I am all for compensating people for their time and effort. In that same context, I do believe the premise that academics will be more incentivized to post may come from a place where “monetary compensation” has been accepted as a proper incentive, when the data suggests that may not be the case.

We may find that “monetary compensation through a reward system” is a fantastic incentive to work on bounties, but not a particularly good incentive for getting people to post.

4 Likes

Repeating a comment I posted in the chat during today’s Community Call on SourceCred:

My two cents about compensation and gaming: SCRF should be generous, not parsimonious, with compensation, but if anyone is found to be gaming the system, they are instantly thrown to the gutter without mercy.

To clarify: 1) I want the compensation structure to be generous because we want the very best contributions. 2) I don’t assume that people on our system are gaming. However, because humanity includes individuals who take advantage of good people to get something for nothing, our response (if they show up on our forum) should be swift and unambiguous. My belief is that the combination of these two ground rules is the fastest path to a premier forum.

4 Likes

The results from the initial SourceCred experiment are promising! I am seeing a few positive trends:

  • long-term consistent posting gets a high cred score
  • high-quality content leads to a high cred score over time
  • posting original topics can be weighted more than commenting or engaging
  • Discourse’s anti-spam machine learning can be a major benefit in preventing gaming
  • the forum is moderated, which takes a lot of the possibility for gaming out of long-term cred scores
  • the cred score on its own can be an incentive mechanism
  • the potential for long-term payouts for quality content is much more viable with a system like SourceCred in place

Another philosophical question that came up on the call was “should topic creation have more weight than commenting or engagement?”

Considering SourceCred gives us the capacity to weight the creator of a topic, I do personally believe that the topic creator should have slightly more weight in general, as topic creation is what gives rise to the possibility for commenting or engagement and is the primary need.
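
As a toy illustration only (this is not SourceCred’s actual graph-based cred algorithm, and the weight values here are hypothetical), bumping the weight of topic creation relative to comments shifts scores toward the people who open discussions:

```ts
// Toy illustration only: a naive weighted activity score, NOT SourceCred's
// actual graph/PageRank-based cred computation. Weight values are hypothetical.
type ContributionType = "topic" | "post" | "like";

interface Contribution {
  author: string;
  type: ContributionType;
}

// Hypothetical weights: topics count slightly more than posts (comments).
const weights: Record<ContributionType, number> = {
  topic: 2, // topic creation opens the space for all downstream engagement
  post: 1,
  like: 0.25,
};

function naiveScore(contributions: Contribution[]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const c of contributions) {
    scores.set(c.author, (scores.get(c.author) ?? 0) + weights[c.type]);
  }
  return scores;
}

// Example: one topic author versus one frequent commenter.
const sample: Contribution[] = [
  { author: "alice", type: "topic" },
  { author: "bob", type: "post" },
  { author: "bob", type: "post" },
  { author: "carol", type: "like" },
];
console.log(naiveScore(sample)); // alice (one topic) ties bob (two comments)
```

As I understand it, in the real system the weights live in the instance configuration and cred flows through the contribution graph, but the intuition carries over: raising the topic weight shifts cred toward the people who start discussions.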

Overall, I definitely agree with Ralph’s sentiments about leaning towards generous rewards rather than trying to compensate as little as possible. The aforementioned benefits created by moderation and Discourse make gaming the system much less likely, especially in tandem with the culture that has been created.

4 Likes

How would we go about dealing with a Sokal hoax sort of thing (i.e. someone starts cranking out academic-sounding gibberish using an AI writer in response to comments)?

2 Likes

I have held back my comments about SourceCred until I had the chance to actually use it and learn how I can generate useful data from it. Now that I am at this point, I look forward to helping provide new context to this discussion by providing data and helping to analyze the generated SourceCred graphs.

This data set is built against the current state of this forum, and I have shown the timeframe of the entire history of the forum. It is possible to narrow the timeframe to this week, last week, or last month.

I am available to fill requests for any particular kind of data set, including values for nodes and edges.

If anyone is comfortable in the CLI, I am happy to give a walkthrough of my data extraction process so you can run these builds locally yourself.
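
As a rough preview of the post-processing step (a sketch only; the CredRecord shape below is hypothetical and stands in for whatever fields the actual scores export contains), narrowing the timeframe is just a filter over timestamped cred records:

```ts
// Sketch only: the CredRecord shape is hypothetical and stands in for the
// actual fields of a SourceCred scores export.
interface CredRecord {
  user: string;
  timestampMs: number; // when the contribution happened
  cred: number;
}

// Keep only records that fall inside a given window, then total per user.
function credByUser(
  records: CredRecord[],
  fromMs: number,
  toMs: number
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    if (r.timestampMs >= fromMs && r.timestampMs < toMs) {
      totals.set(r.user, (totals.get(r.user) ?? 0) + r.cred);
    }
  }
  return totals;
}

// "Last week" is then just a window ending now and starting 7 days earlier.
const now = Date.now();
const weekMs = 7 * 24 * 60 * 60 * 1000;
console.log(credByUser([], now - weekMs, now)); // empty input, empty totals
```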

Find the link to the SourceCred deck below.

3 Likes

I think our community would quickly identify this sort of thing.

Once the content is removed (no longer publicly reachable), it will no longer be a source of cred, as the content node itself would no longer exist. That also means it would no longer serve as a source of cred through any edges it was connected to, such as a reference to that post from another post or a like from another user.
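
To make that concrete with a toy example (illustrative only, not SourceCred’s actual data structures): once a post’s node is removed, every edge that touched it goes with it, so nothing can flow through those edges any more.

```ts
// Toy graph, illustrative only: not SourceCred's actual data structures.
interface Edge {
  src: string;
  dst: string;
}

interface Graph {
  nodes: Set<string>;
  edges: Edge[];
}

// Removing a node also removes every edge incident to it, so the node can no
// longer pass anything along to (or receive anything from) its neighbours.
function removeNode(g: Graph, node: string): Graph {
  return {
    nodes: new Set([...g.nodes].filter((n) => n !== node)),
    edges: g.edges.filter((e) => e.src !== node && e.dst !== node),
  };
}

// Example: a spam post that was liked and referenced elsewhere.
const g: Graph = {
  nodes: new Set(["spamPost", "likerUser", "referencingPost"]),
  edges: [
    { src: "likerUser", dst: "spamPost" },       // like
    { src: "referencingPost", dst: "spamPost" }, // reference
  ],
};
console.log(removeNode(g, "spamPost").edges.length); // 0: no edges remain
```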

5 Likes

Thank you for your answer, @brian.alexakis – what do you think are the biggest risks from doing this as a community? I agree that we’re more likely to be too polite and stingy than we are to turn against one another and snipe for cash, but I wonder if we might amplify a certain attitude or approach to work in the comments generated… curious to see it

4 Likes

The Node and Edge weights our community will eventually decide upon will be public information. Everyone who looks will know which kinds of actions are lucrative. On the surface, this sounds like a direct path to people exploiting that information, which I suppose is the biggest risk I can think of. However, this is kind of by design and something I consider a positive risk. I think a benefit of people knowing the reward structure is that their good behavior will more effectively help the forum achieve its goals.

As a community, we are going to experiment with how changing the node and edge weights affects how DAI (or whatever coin) gets distributed. So our sense of what is risky may also shift as we learn from our results.

Even if the worst-case scenario of bad behavior plays out, I still view it as a net positive. We will learn from that, adjust, and agree on how to try again.

I am currently working on creating a selection of the top 3 node and edge weights to start with. There will be a community meet that will host a discussion about those values and any adjustments we might want to make. After that, there will be a poll hosted here on the forum, along with an agreed-upon amount of DAI to distribute. More details on all of this soon!
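
For anyone wondering what distributing an agreed-upon amount of DAI could look like mechanically, here is a deliberately simplified sketch: a pro-rata split over cred scores. This is not SourceCred’s actual grain-distribution API (which supports several policies); it just shows the basic idea.

```ts
// Simplified sketch: split a fixed DAI budget pro rata over cred scores.
// This is NOT SourceCred's grain-distribution API, just the basic idea.
function distribute(
  credByUser: Map<string, number>,
  budgetDai: number
): Map<string, number> {
  const totalCred = [...credByUser.values()].reduce((a, b) => a + b, 0);
  const payouts = new Map<string, number>();
  if (totalCred === 0) return payouts; // nothing to distribute against
  for (const [user, cred] of credByUser) {
    payouts.set(user, (cred / totalCred) * budgetDai);
  }
  return payouts;
}

// Example: 1000 DAI over three contributors with cred 120, 60, and 20.
const payouts = distribute(
  new Map([
    ["alice", 120],
    ["bob", 60],
    ["carol", 20],
  ]),
  1000
);
console.log(payouts); // alice: 600, bob: 300, carol: 100
```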

5 Likes

This has been an interesting thread to read. Some of the discussion here is adjacent to my own area of expertise. I want to address a couple of problems with using measurement for incentives, as well as offer a few ways to adjust to the problem.

Here’s the key takeaway:

  1. Relying only on quantitative data for quality control does not work. A mixed quantitative and qualitative approach is needed.

This post builds on the comments I made in two previous threads, where I talked about the importance of accounting for intangible sources of value alongside those more easily measurable.

The goal of implementing SourceCred, as stated in the opening post, is: “This tool allows a community to identify and incentivize online behaviors through reward mechanisms… This fits into our overall engagement strategy of elevating thoughtful, researched, and insightful contributions to the forum.”

The base mechanism here is to observe/measure/identify desirable behaviours and reward/incentivise them. This is a common approach; measurement, or observation, is at the heart of how our liberal democracies function these days. Measuring/observing what people do changes both the observer and the observed, implicitly and explicitly. This phenomenon goes quite deep, and beyond humanity: measuring anything specifically, including light, changes how it behaves. But we’ll stick to people here.

We collect data, and then we make decisions based upon that data. This approach was solidified and prioritised during the neoliberal reforms of the 1980s, when the idea of evidence-based policy came to the fore. We’ve been doing this for about 40 years now, and in that time some very clear problems have arisen. As Roy Bhaskar notes in critical realism, there is a difference between the real world and the world we observe. A simple example would be our inability to perceive certain wavelengths of light. I’ll make the point in three separate ways:

  1. Joseph Stiglitz (2018, p.13) describes the problem of measurement effectively: “What we measure affects what we do. If we measure the wrong thing, we will do the wrong thing. If we don’t measure something, it becomes neglected, as if the problem didn’t exist.” This brings us back to the points I made about intangible sources of value that go unaccounted for. Pure quantitative approaches inhibit discussions of quality.
  2. Daniel Kahneman talks about this in a psychology context, where he describes a phenomenon he calls “What you see is all there is.” Basically, we are cognitively biased to privilege the information we see in front of us, and we rarely consider known unknowns when making decisions. The implication is that when we focus decision-making on evidence, we only consider what we measure, which is problematic because much of value is intangible.
  3. Goodhart’s law – when a measure becomes a target, it ceases to be a good measure. Basically, anything objective that is incentivized becomes game-able. Qualitative judgements can inhibit this.

These three points demonstrate the problem of measurement. Some sources of value are measurable, some are not. Good outcomes result from both being accounted for in decision-making structures.

Methodologically, there are two ways we generate knowledge: quantitative and qualitative research. For a variety of reasons, we systematically privilege the quantitative side, which has resulted in the problems described above. Broadly speaking, quantitative approaches cannot take context into account, because context is rarely reducible to numbers.

So here is a question when it comes to the stated aims: how do you define ‘thoughtful, researched, and insightful contributions’? In my view this cannot be reduced to numbers. The stated aim is quality, not quantity. This requires a qualitative facet alongside a quantitative one.

This is the key point that I want to make in this post. If you want to incentivize quality content on this forum, then there must be qualitative judgements made alongside quantitative data analysis. Both qualitative judgements and quantitative data are paramount; they build off and support each other.

One heuristic that may be useful is that process-based indicators can be more useful than outcome-based indicators for governance. You don’t want to quantitatively judge the outcome, because that’s a qualitative judgement. But you can use a quantitative approach to monitor the processes that generate the outcome, and then test the effectiveness of the measures using qualitative judgements to see if the desired outcome is achieved.

SourceCred, or a similar quantitative approach, can be a very useful tool to systematize the quantitative side. But if you only systematize the quantitative side, then that is all people see (what you see is all there is). There must also be a systematic approach to making qualitative (quality) judgements on the quantitative data. The quantitative data generated should be supplemented by people casting value judgements.
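
To sketch what that supplementation could look like in practice (purely illustrative; the threshold and rating scale are made up), the quantitative cred signal can select which contributions get human review, and the qualitative rating then moderates the final outcome rather than letting the raw number decide alone:

```ts
// Illustrative only: the threshold and the rating scale are made up.
interface Contribution {
  id: string;
  cred: number;              // quantitative signal (e.g. from SourceCred)
  humanRating?: 1 | 2 | 3;   // qualitative judgement, filled in by reviewers
}

// Step 1 (quantitative, process-based): flag high-cred items for human review.
function needsReview(c: Contribution, credThreshold = 50): boolean {
  return c.cred >= credThreshold && c.humanRating === undefined;
}

// Step 2 (qualitative): the human rating scales the effective score, so a
// high-cred but low-quality item cannot dominate on volume alone.
function effectiveScore(c: Contribution): number {
  const qualityFactor = c.humanRating === undefined ? 1 : c.humanRating / 3;
  return c.cred * qualityFactor;
}

// Example: a prolific but shallow post gets reviewed, rated low, and scaled down.
const post: Contribution = { id: "p1", cred: 80 };
console.log(needsReview(post));                        // true
console.log(effectiveScore({ ...post, humanRating: 1 })); // ~26.7 instead of 80
```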

SCRF already does this through comment of the month, and an argument could be made that ‘likes’ fulfill a similar function. There will need to be more thought put into how to approach this.

I could say a lot more on this topic – I recently wrote a PhD on it – but I’ll stop here for now. If there are any questions or further interest, I’m happy to engage.

5 Likes

Great observations and I absolutely agree that an automated approach alone is not the answer for quality here. Thank you for the citations as well. Kahneman I had read before, but I’m looking forward to digging into Stiglitz.

As you noted, there are a few other incentive mechanisms and also the content production process we have at SCRF. Human moderation is also going to remain an important part of SCRF and the forum. Considering your background, it would be valuable to have your input into the moderation practices and approaches here as well.

6 Likes

I have been thinking through what a large scale decentralised system for content moderation would look like, but this falls on the other end of the spectrum at the moment.

Right now I think your moderation here is good - the forum seems well managed, and there is quality discussion. I also like how you compensate for things like research summaries and general work done. The problems I talk about above usually come with growth, because it’s hard to scale qualitative moderation. A big challenge for the development of communities is how to grow without compromising the community. What works at a small scale does not translate to a large scale.

I assume that you are looking to expand, with the implementation of SourceCred and financial incentivisation. The motivation behind my previous post was to try to get in front of the problem that most suffer when they move to data-driven solutions: previously effective qualitative work goes unseen until it is lost, at which point it is too late.

The key is to maintain and adapt how you moderate as you grow. How can you change and adapt how you operate during growth to ensure that you maintain the same quality of content?

You might find this paper interesting as well, then. It goes into more detail about how measurement generates outcomes, and is where I got the “measure the process, not the outcome” idea from.

5 Likes

I am curious: isn’t SourceCred open source?

Also, the statement “let the organization die to make room for new things to grow” seems to indicate that they intend for the community to take over and thus make it 100% open source.

3 Likes

SourceCred has been open source since the beginning. My general understanding is that it’s MIT licensed, though the license in the code says “SourceCred is dual-licensed under the MIT License and the Apache-2 License. See LICENSE-MIT and LICENSE-APACHE for the terms of these licenses.” :point_down:

source: I’m a longtime SourceCred contributor and not a lawyer

5 Likes