Implementing SourceCred on the Forum

This has been an interesting thread to read. Some of the discussion here is adjacent to my own area of expertise. I want to address a couple of problems with using measurement for incentives, and offer a few ways to adjust for them.

Here’s the key takeaway:

  1. Relying only on quantitative data for quality control does not work. A mixed quantitative and qualitative approach is needed.

This post builds on the comments I made in two previous threads, where I talked about the importance of accounting for intangible sources of value alongside those more easily measurable.

The goal of implementing SourceCred, as stated in the opening post, is: “This tool allows a community to identify and incentivize online behaviors through reward mechanisms… This fits into our overall engagement strategy of elevating thoughtful, researched, and insightful contributions to the forum.”

The base mechanism here is to observe/measure/identify desirable behaviours, and reward/incentivise them. This is a common approach; measurement, or observation, is at the heart of how our liberal democracies function these days. Measuring/observing what people do changes both the observer and the observed, implicitly and explicitly. This phenomenon goes quite deep, and beyond humanity: measuring anything, including light, changes how it behaves. But we’ll stick to people here.

We collect data, and then we make decisions based upon that data. This approach was solidified and prioritised during the neoliberal reforms of the 1980s, when the idea of evidence-based policy came to the fore. We’ve been doing this for about 40 years now, and in that time some very clear problems have arisen. As Roy Bhaskar notes in critical realism, there is a difference between the real world and the world we observe. A simple example would be our inability to perceive certain wavelengths of light. I’ll make the point in three separate ways:

  1. Joseph Stiglitz (2018, p.13) describes the problem of measurement effectively: “What we measure affects what we do. If we measure the wrong thing, we will do the wrong thing. If we don’t measure something, it becomes neglected, as if the problem didn’t exist.” This brings us back to the points I made about intangible sources of value that go unaccounted for. Pure quantitative approaches inhibit discussions of quality.
  2. Daniel Kahneman talks about this in a psychology context, where he describes a phenomenon he calls “What you see is all there is.” Basically, we are cognitively biased to privilege the information in front of us, and we rarely consider known unknowns when making decisions. The implication is that when we focus decision-making on evidence, we only consider what we measure, which is problematic because much of value is intangible.
  3. Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Basically, anything objective that is incentivized becomes gameable (see the toy simulation after this list). Qualitative judgements can inhibit this.
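
To make Goodhart’s law concrete, here is a toy simulation in Python. Everything in it is invented for illustration: the “metric” stands in for any proxy (post length, reply count, cred), and the numbers are arbitrary. What it shows is the mechanism itself: once the proxy becomes the reward target, the proxy rises while the underlying quality falls.

```python
# Toy simulation of Goodhart's law. All numbers are invented.
# "quality" is the unobservable thing we actually care about;
# "metric" is an observable proxy that gets rewarded.
import random

random.seed(42)

def write_post(metric_is_target: bool) -> tuple[float, float]:
    """Return (true_quality, observed_metric) for one post."""
    if metric_is_target:
        # Gaming: pad the proxy directly at low effort, so the
        # metric decouples from quality.
        quality = random.gauss(0.2, 0.1)
        metric = quality + random.gauss(1.0, 0.1)
    else:
        # Honest behaviour: the metric is quality plus noise.
        quality = random.gauss(0.8, 0.1)
        metric = quality + random.gauss(0.0, 0.1)
    return quality, metric

before = [write_post(False) for _ in range(1000)]  # a measure, not a target
after = [write_post(True) for _ in range(1000)]    # the measure is now the target

for label, posts in (("before targeting", before), ("after targeting", after)):
    avg_q = sum(q for q, _ in posts) / len(posts)
    avg_m = sum(m for _, m in posts) / len(posts)
    print(f"{label}: avg quality={avg_q:.2f}, avg metric={avg_m:.2f}")

# The metric goes UP once it is targeted, while true quality goes
# DOWN: the measure has stopped being a good measure.
```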

These three points demonstrate the problem of measurement. Some sources of value are measurable, some are not. Good outcomes result from both being accounted for in decision-making structures.

Methodologically, there are two ways we generate knowledge: quantitative and qualitative research. For a variety of reasons, we systematically privilege the quantitative side, which has resulted in the problems described above. Broadly speaking, quantitative approaches cannot take context into account, because context is rarely reducible to numbers.

So here is a question when it comes to the stated aims: how do you define ‘thoughtful, researched, and insightful contributions’? In my view this cannot be reduced to numbers. The stated aim is quality, not quantity. That requires a qualitative facet alongside a quantitative one.

This is the key point I want to make in this post. If you want to incentivize quality content on this forum, then qualitative judgements must be made alongside quantitative data analysis. Both are paramount; they build on and support each other.

One heuristic that may be useful is that process-based indicators can be more useful than outcome-based indicators for governance. You don’t want to quantitatively judge the outcome, because that is a qualitative judgement. But you can use a quantitative approach to monitor the processes that generate the outcome, and then test the effectiveness of those measures with qualitative judgements to see whether the desired outcome is achieved.
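
As a rough sketch of what that could look like in practice (the indicator names, weights, and audit routine below are all hypothetical, not anything SourceCred ships with): process indicators are scored automatically, and a periodic human audit checks whether those scores still track qualitative judgements of quality.

```python
# Hypothetical sketch: quantitative process indicators, validated by
# a periodic qualitative audit. None of this is SourceCred's API.
from dataclasses import dataclass

@dataclass
class Post:
    references_cited: int   # did the author cite sources?
    revisions: int          # was the post refined over time?
    replies_received: int   # did it spark discussion?

def process_score(post: Post) -> float:
    """Score the *process* that tends to produce quality, not the
    outcome itself (weights are invented)."""
    return (2.0 * post.references_cited
            + 1.0 * post.revisions
            + 0.5 * post.replies_received)

def audit(posts: list[Post], panel_ratings: list[float]) -> float:
    """Qualitative check: Pearson correlation between process scores
    and human quality ratings. A weak correlation is a signal that
    the indicators need revising."""
    scores = [process_score(p) for p in posts]
    n = len(scores)
    mean_s = sum(scores) / n
    mean_r = sum(panel_ratings) / n
    cov = sum((s - mean_s) * (r - mean_r)
              for s, r in zip(scores, panel_ratings))
    norm_s = sum((s - mean_s) ** 2 for s in scores) ** 0.5
    norm_r = sum((r - mean_r) ** 2 for r in panel_ratings) ** 0.5
    return cov / (norm_s * norm_r)

# Example audit with made-up posts and made-up panel ratings (0-5):
posts = [Post(3, 2, 5), Post(0, 0, 8), Post(5, 4, 2)]
ratings = [4.0, 2.0, 5.0]
print(f"indicator/judgement correlation: {audit(posts, ratings):.2f}")
```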

SourceCred, or a similar quantitative approach, can be a very useful tool to systematize the quantitative side. But if you only systematize the quantitative side, then that is all people see (what you see is all there is). There must also be a systematic approach to making qualitative (quality) judgements on the quantitative data. The quantitative data generated should be supplemented by people casting value judgements.
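
A minimal sketch of one way to do that, assuming a hypothetical review panel and invented multipliers; `raw_cred` stands in for whatever quantitative score the tool computes:

```python
# Hypothetical sketch: a qualitative layer over a quantitative score.
# "raw_cred" stands in for whatever SourceCred (or similar) computes;
# the review categories and multipliers are invented.

QUALITY_MULTIPLIER = {
    "exceptional": 1.5,   # e.g. comment-of-the-month calibre
    "solid": 1.0,         # meets the bar; the quantitative score stands
    "low_effort": 0.25,   # gamed or shallow; the score is discounted
}

def adjusted_reward(raw_cred: float, panel_review: str) -> float:
    """Final reward = quantitative signal x qualitative judgement."""
    return raw_cred * QUALITY_MULTIPLIER[panel_review]

# Two posts with identical quantitative cred diverge once a human
# panel weighs in:
print(adjusted_reward(100.0, "exceptional"))  # 150.0
print(adjusted_reward(100.0, "low_effort"))   # 25.0
```

The exact mechanism matters less than the principle: the qualitative layer is systematic, visible, and able to override the numbers.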

SCRF already does this through Comment of the Month, and an argument could be made that ‘likes’ fulfil a similar function. More thought will need to be put into how to approach this.

I could say a lot more on this topic – I recently wrote a PhD thesis on it – but I’ll stop here for now. If there are any questions or further interest, I’m happy to engage.
