After the last SourceCred Guild meeting I attended, I was reflecting on SCRF’s SourceCred instance and was struck with an image: @zube.paul and @brian.alexakis flying a plane into a storm.
Don’t worry! I’m projecting. As a long-time SourceCred contributor who watched SourceCred (the project) rocket into the stratosphere, paying over $1M/year to contributors via Cred scores, only to see it crash back to earth, I have a new respect for the power of these systems, as well as for the challenges of ‘permissionless’ orgs generally. I often feel like a survivor, crawling from the rubble, shaking off trauma, muttering to myself about plane designs…
I also see some contributors expressing some common concerns around SourceCred (SC). This is expected. In addition to the novel challenges of algorithmic governance, reward systems have a tendency to surface all our issues.
I’ve spent the last five years working in permissionless systems with novel reward mechanisms (Decred, SourceCred). These systems have enabled me to work with the most amazing, diverse groups of humans I’ve ever encountered. And allowed me the freedom to work on what I find most meaningful. I think they’re the future. I’ve also seen some plane crashes.
Using sophisticated machine learning algorithms to reward contributors, while governing those algorithms in a transparent and decentralized way, is like building a plane while flying it. The planes do fly. But I wanted to share some observations from the failures I’ve seen, which will hopefully enable the SCRF community (and others using SC) to govern their SC instances more effectively.
The SCRF Plane
So the SCRF plane (SC instance) looks solid. It appears to be one of the safest, smartest designs I’ve seen actually. The parameters make sense. The payout policies are well designed. The pilots (SC guild members) seem generally skilled, like they have some experience implementing complex systems. And I can see them putting in the hard work of proper governance: engaging the community, creating reports, incorporating feedback, addressing concerns, etc.
The rollout feels responsible. By starting with relatively small amounts (5,000 DAI/month), and systematically iterating parameters, the stakes feel relatively low. The org is diversified with other reward mechanisms (salaries, grants, contests, etc.). The community seems well-moderated, and the culture open to experimentation.
SCRF’s SC instance right now feels like a party bus with wings, flying low to the ground. A flying open bar for academics funded by crypto whales and VCs (or wherever SCRF’s funding comes from). At this stage, the plane would presumably be easy to land. Even a crash landing would probably go unnoticed by most.
Flight Risks
Short-term, typically the main risks for a SC instance are:
Maintenance
Since SC (the org) has wound down, the project no longer has paid plane mechanics (maintainers). It’s just a regular OSS project. The SCRF community has discussed this on the forum and decided the risks are acceptable for now. I agree. SC has ‘broken through’ as a technology. It has some large users incentivized to make repairs (submit PRs with bug fixes), we recently found a dev to maintain the codebase and do critical bug fixes (woot!), and some core contributors are exploring ways to secure more sustainable funding (hope to have some news soon).
SCRF can minimize these risks by:
- Being transparent about the risks (e.g. this post, updates, etc.)
- Not becoming too reliant on unique SC functionality until it has more organized support (i.e. keep it so that SC can be swapped for another reputation/valuation system if need be, or wound down without causing too much disruption). I have recommended that SCRF keep SC limited to Discourse for now, as the Discourse plugin is IMO the most robust and least likely to need fixes in the near-to-medium term.
- Keeping up the governance work that makes SC responsive to community input. As the SC Guild put it on a recent call, “keep SC something that the community is doing, not something that is happening to the community”. Similar to early feedback I’m hearing about Coordinape, where people don’t mind so much when payouts aren’t very accurate because at least they had a voice, I find that when people have a voice in changing SC parameters, they tend to be happier with the results. Even in cases where I personally didn’t see much improvement in the scores, as long as the results are perceived as directionally better, the act of changing the parameters is often meaningful to participants. This seems especially true when shared values can be translated directly into parameter choices (e.g. “I think reviewing/commenting is generally undervalued; let’s increase the Cred that flows to post replies”). Personally, I tend to lean towards governance-minimization strategies, but I’ve also seen communities get really into granular parameter choices (e.g. the Token Engineering Commons (TEC) instance has tons of semantic choices baked into parameter values). I suspect this will be a function of culture and implementation resources. A warning: people often want more control in theory, but don’t want to do the tedious governance work, or don’t have the technical skills or resources. This can lead to disappointment if expectations aren’t managed.
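To make the values-into-parameters idea concrete, here is a toy sketch (emphatically not the real SourceCred code or its actual weights) of how raising the weight on replies shifts scores toward reviewers/commenters. All names and numbers are hypothetical:

```python
# Toy illustration (NOT the real SourceCred implementation): raising the
# weight on "reply" activity shifts score mass toward commenters/reviewers.
# All contributor names and weight values are hypothetical.

def score(activity, weights):
    """Weighted sum of each contributor's activity counts."""
    return {user: sum(weights[kind] * n for kind, n in acts.items())
            for user, acts in activity.items()}

activity = {
    "alice": {"post": 4, "reply": 1},   # mostly writes posts
    "bob":   {"post": 1, "reply": 10},  # mostly reviews/comments
}

before = score(activity, {"post": 1.0, "reply": 0.25})
after  = score(activity, {"post": 1.0, "reply": 1.0})  # value replies more

print(before)  # alice ahead while replies are discounted
print(after)   # bob overtakes once replies are weighted equally
```

The real instance expresses these choices through plugin weight configuration rather than code, but the directional effect on scores is the same.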
Burnout
As I’ve experienced myself maintaining SC instances, proper maintenance can be a lot of work, especially if you take it seriously, as SCRF has so far. Engaging the community for feedback, generating reports, running meetings, documenting everything on GitHub, uploading to YouTube, etc. all adds up. Having maintained MakerDAO’s SC instance for 1.5 years, I have firsthand experience with how stressful this type of work can be, especially when navigating DAO politics, exploring new parameter configurations, and doing policy work around payouts. Maybe I’m projecting again. But I want to say: I see you, SC Guild. A couple of suggestions:
- Relax: you’re doing great and the stakes are still low (flying the party bus)
- Prioritize automation and efficiency where possible: e.g. having everyone opt in every month makes sense to me conceptually and values-wise, but it adds admin overhead. I think it would probably be OK to leave people opted in, in part because a) inactive contributors will see their Cred scores diluted over time, and b) it appears that ‘Cred whales’ aren’t opting in anyway. This advice may be off if the guild has a more efficient way of updating the instance state (opt-ins/outs, payouts, balances). But if doing it the default way, by updating the static site on GitHub, the process can be tedious, time-consuming and error-prone, especially for non-devs. The admin load also generally increases linearly with the number of opted-in contributors.
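A toy illustration of the dilution point, with made-up numbers rather than the actual CredRank math: if only active contributors keep earning Cred, an opted-in-but-inactive account’s share of the total shrinks on its own:

```python
# Toy sketch (hypothetical numbers, not real CredRank) of why leaving
# people opted in is low-risk: an inactive contributor's *share* of
# total Cred dilutes as active contributors keep earning.

def shares(cred):
    total = sum(cred.values())
    return {user: c / total for user, c in cred.items()}

cred = {"active": 100.0, "inactive": 100.0}  # equal history at opt-out time

for period in range(6):          # six more reward periods pass
    cred["active"] += 20.0       # only the active contributor earns new Cred

print(shares(cred))  # inactive share falls from 50% toward ~31%
```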
Medium term risks
Medium-term, risks around SC tend to be tied to raising reward amounts. When rewards increase significantly, approaching what people might make via traditional salaries, I’ve observed that there tends to be a psychological ‘flip’ in contributors. With the stakes higher, some become more reliant on the income (potentially even quitting their Trad job), and expectations can shift. This can be mitigated to an extent by clear communication and expectation setting. But humans are gonna human. Especially if deeply invested in established systems or trying to overthrow them. People can (consciously or subconsciously) start comparing SC to a job. Which obviously provides greater perceived income stability and guarantees (legal protections, benefits, etc.) than a typical SC instance. The plane is flying at 30,000 ft now. And people start asking questions about why the fuselage is shaking, why the fuel gauge is low, and when the plane is gonna land (it is landing, right? Right??).
Personally, having experienced the volatility of several tech startups, I prefer the volatility of SC to a job. A salary is flat, until it abruptly goes to zero. And the relationships you invest in turn into LinkedIn connections. In reality, the volatility of a flat salary is simply hidden. Usually by a centralized authority incentivized to withhold information from you and shift financial risk down the hierarchy. SC income is volatile, but payouts are determined by a distributed consensus on the value you create. It provides a different type of stability, which is not legible at first to people without experience in these systems. But apparently I’m an outlier here, presumably due to some combination of privilege, high risk tolerance, and experience with these systems. Most people I’ve noticed want more stability in their pay than I do. And creating stability via the algorithm is doable, but still relatively unexplored territory. Also worth noting that if SC payouts are high enough, they can start to compete with other reward systems (e.g. salaries), which can create additional governance labor.
I should also say here that, if the stakes are high enough, even scores the community generally thinks are fair are likely to be disruptive. If someone asks why their pay is X, and the answer is ‘because the pseudo-AI in the black box says so’, that may not be an acceptable answer, especially if someone is taking a pay cut from their traditional job. Data analysis and understanding of Cred flows is still nascent, so answering hard questions about how Cred scores are calculated can be difficult.
Ideological differences
I hesitate to even bring this up. I’m wary of stirring up ideological debates generally, and don’t want to impose my personal views on SCRF (WARNING: this is only my opinion, and not that of the SC community or ecosystem). However, reward systems are viewed by many as inherently political. Indeed, for many in the space, building is an explicitly political act, prefigurative politics. And SC does afford an unprecedented opportunity to encode your values, in a real, reifying way. So it should not come as a surprise that SC often raises moral, political and ideological questions.
This can lead to some common pitfalls:
- Proxy wars: crypto protocols have a history of ideological holy wars, typically over technical changes to the protocol (e.g. the Bitcoin block size wars that peaked in 2017). Predictably, the conflict becomes about much more than the proposed technical change (e.g. increasing the Bitcoin block size by 1 MB). Often to the detriment of the discourse around the actual change, much like IRL proxy wars are never good for the country they happen in. If the stakes are high enough, and participants have differing incentives, discussions around even minor changes can spiral into polarized, black-and-white narratives detached from reality. For instance, in retrospect we can look back at the block size debate and see that catastrophic predictions around increasing the block size were overblown. Several Bitcoin forks used larger block sizes and the sky didn’t fall. They’re still producing blocks as we speak.
If this dynamic happens predictably with decentralized lending protocols, it would be strange for it not to show up in a system valuing contributions, where ideological and value-based arguments may actually have more relevance to the technical change.
- Inflexibility: if you have based your identity on a particular political philosophy or ideology, and have taken an ideological stance on a particular change to the protocol, are you capable of continuing to participate if you lose? If you win, and things get noticeably worse, are you capable of seeing and acknowledging it? This is particularly problematic in sufficiently complex systems, where causation can be difficult or impossible to determine from correlation. I have myself fallen into this trap. Perhaps even as I write this?
- Scapegoating: if the community has a lot of conflict or disagreement, that tension has to surface somewhere… If there’s no way to resolve that conflict, some will look to blame SC as the source of their problems. I’m not saying SC won’t legit be the source of the problem in some cases. Just that in the only situations where I’ve seen SC blamed for a community’s problems, there also happened to be lots of unresolved conflict and other pressures not unique to that community. And the claims against the algorithm seemed weak. Though one predictable issue is that conflict does make it more difficult to reach consensus on governing the algorithm, which can lead to more conflict…
Personally, the more experience I have, in all different types of orgs, the more I suspect scapegoating will always be present to some degree. I would just urge caution around claims that SC is the ‘root cause’, as that could be obscuring deeper issues.
Long distance flights
Longer-term risks are more difficult to assess. Technology-driven social change is so fast now (and accelerating) that nobody can really predict where this is all going. Any system powerful enough to create alternatives that rival the current system contains both utopian and dystopian potential. I personally advocate for keeping an open mind and having an emergent strategy, as expressed by adrienne maree brown in her book of the same name. Realize that larger structural issues may not be solved in our lifetime, or even several generations. Do the hard work to create the psychological safety necessary to have real conversations; know that most experiments will fail but are necessary to move forward; iterate based on actual outcomes; and don’t become so attached to ideological/conceptual commitments that they bind you to bad choices.
I also suspect an ethics of care may be more promising than coding explicit rule-based ethics, as it provides more flexibility. Though I have not yet had the opportunity to test that.
On leaderboards
As an example of a contentious issue, I’ve heard a couple people express concerns about the leaderboard in the SC instance. That it could introduce unwanted competitive dynamics.
Personally, I generally agree with this sentiment. I think we’ve overemphasized competition to the point it’s harmful in many contexts. I’ve been in corporate environments where leaderboards seemed to drive toxic culture; I worked at a startup with leaderboards literally hanging over the heads of overworked sales and customer service reps, and was not surprised when it was sued for $145M for highly unethical business practices, rendering my stock options worthless.
In practice, I’ve seen communities generally like the leaderboard. It seems to generate playful comments and healthy competition. For instance, Maker does bi-weekly reports on its SC instance that feature two leaderboards: top 10 posts and top 10 Cred earners over the last two weeks. The comments on them are generally positive (e.g. the latest report). New people in particular seem to like it, as it gives them recognition they may not get anywhere else. Perhaps a bigger problem in DAOs today is the tyranny of structurelessness, and visualizing invisible power structures promotes more equality than flatly rejecting any structure that facilitates competition?
I honestly don’t know the answer for SCRF, or any community. But I will point out a couple options that don’t involve a leaderboard, should the community not want one:
- Don’t use the SC default site: you can display the scores however you want (or not at all). The administrators can simply not enable the creation of the public site, and only view the instance locally when making changes.
- Alternative visualizations: there are nearly infinite ways to visualize the basic SC data structure (a graph of nodes that represent users and contributions). Fields such as social graph analysis have produced many visualization templates to choose from.
Planes in flight
Fortunately, there are now a number of SC planes in the wild. While the governance surface presented by SC is large and relatively unexplored, a swath of the possibilities has been explored, and SCRF can go there in relative safety, should it choose.
The most comparable SC instance would be MakerDAO’s, which has been paying significant rewards (~20,000 DAI/mo) for governance contributions on its Discourse for nearly two years. Contributors are generally happy with the instance. It has served decentralization, and appears to be proving itself particularly useful as a recruitment mechanism. A number of DAO contributors with full-time positions in Maker Core Units (CUs) have credited SC with bringing them into the project. Maker recently adjusted its elevation down a bit (decreasing total payouts from 20,000 DAI/mo to 14,000), and created a second plane, specifically designed to compensate delegates in its governance system. The main instance pays out ~10,000 DAI/mo, and the delegate instance ~4,000 DAI/mo.
I would characterize Maker’s SC instances as the first DC-10s, regularly ferrying contributors to more conventional DAO roles and outcomes.
SCRF could do something similar, customized to its goals, and operationalize it without too much risk IMO. A sort of top of funnel for the DeSci revolution?
On Moonshots
Google’s PageRank algorithm (what SC uses to score contributions) was inspired by academia’s citation system. SourceCred was founded by an AI engineer in Google’s Brain group, who quit Google to create a credit attribution system based on PageRank that would be free of corporate capture. Which subsequently escaped the lab and was used by a DeSci project (SCRF) to create a new cryptoeconomic system so powerful it replaced centralized academic institutions within a decade. Like Wikipedia replacing encyclopedias, or cryptocurrencies and DeFi replacing the Fed and banks, in the span of a few years the ivory tower was burning, as academics ran free into the greenfields of knowledge. Scholars were finally paid for the knowledge they create (peer reviewers too this time!), not just based on where they were born, or how well they play the politics of top-down hierarchies captured by monied interests. The algorithms were governed in a decentralized manner where people had a genuine voice. Viva la Revolution!
Yeah…I should warn you, when people discover SC for the first time they tend to get a little… manic. The basic primitive (arguably, a credibly-neutral intersubjective valuation by a community) provides new technological affordances that make possible many dreams previously deemed infeasible. People often get very excited. In SC, we call this initial excitement being ‘sourcepilled’.
Here’s the thing about sourcepills though… they wear off. The SC technology is complex, and often unwieldy. It works well for a number of use cases. But bending it to your will to do bigger, more precise things is often difficult. To the point many give up. This has left more than a few with crushing hangovers and high opportunity costs, swearing off sourcepills forever.
While I believe many ambitious dreams are made possible by this technology, many of those will require considerable technical resources (developers, data analysts, governance experts, etc.), as well as work in fields such as algorithmic governance, cryptoeconomics, machine learning, sociology, org science, etc, etc… I generally advise projects to view SC as a source of ‘signal’, to be used as input into other mechanisms. And the signal can be noisy.
The good news is, the core CredRank algorithm (the engine) is solid tech, and the existing plugins (the plane models: Discourse, Discord, GitHub) provide much room for experimentation. The algorithm can be programmed to an extent at the level of configuration and surrounding policy. It’s also possible to create new plugins (aircraft) with the right resources. The plane flies. But it can be a bumpy ride.
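For the curious, the engine’s core idea can be sketched as plain PageRank power iteration on a toy contribution graph. This is a simplification (real CredRank adds typed edge weights, time discounting, and seed vectors), and the graph below is entirely hypothetical:

```python
# Minimal PageRank-style power iteration on a toy contribution graph.
# Simplified illustration only: real CredRank adds typed edge weights,
# time discounting, and seed vectors. Graph and names are hypothetical.

def pagerank(edges, nodes, damping=0.85, iters=100):
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [dst for src, dst in edges if src == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or list(nodes)   # dangling nodes spread evenly
            for t in targets:
                new[t] += damping * rank[n] / len(targets)
        rank = new
    return rank

nodes = ["post", "reply1", "reply2", "alice", "bob"]
edges = [
    ("reply1", "post"),  # replies flow value to the post they reference
    ("reply2", "post"),
    ("post", "alice"),   # contributions flow value to their authors
    ("reply1", "bob"),
    ("reply2", "bob"),
]

ranks = pagerank(edges, nodes)
print(max(ranks, key=ranks.get))  # "alice": value flows through her post
```

The intuition to take away: value flows along edges, so a much-replied-to post enriches its author, the way citations enrich a paper’s authors.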
It’s important that communities not view SC as the cure for all their problems, but as one piece of a larger puzzle. Cred scores, viewed as ‘signal’, can be useful input into any number of mechanisms. For instance, even a noisy measurement of value can be used to create good-enough binary outputs like Sybil detection, making feasible experiments in more democratic, human-centric governance.
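As a sketch of the ‘signal’ framing (hypothetical scores and threshold, not an actual SC feature): a noisy continuous Cred score can still produce a stable binary output, because real contributors and Sybil accounts usually sit far from the cutoff:

```python
import random

# Toy sketch of using noisy Cred scores as "signal" for a binary gate
# (e.g. voting eligibility / Sybil resistance). Scores and threshold
# are hypothetical; the point is that measurement noise matters little
# when accounts sit far from the threshold.

random.seed(0)

def eligible(cred_score, threshold=50.0):
    """Binary output derived from a noisy continuous signal."""
    return cred_score >= threshold

true_scores = {"longtime_contributor": 400.0, "sybil_account": 2.0}

# Even with +/-20% measurement noise, the binary outcome is stable.
for user, score in true_scores.items():
    noisy = score * random.uniform(0.8, 1.2)
    print(user, eligible(noisy))
```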
I myself have just enough knowledge and experience of the algorithm and codebase to see a fuzzy, shifting view of the possibilities. And the possibilities are real. But you should know, I’m largely doing it by feel, without being able to make precise predictions or guarantees. I may have been on sourcepills when I wrote this.
DeSci does seem particularly exciting, because the scientific process and professional norms of academia constrain the scope of the problem in a way that could curtail some of the problems SC has seen in other contexts. And the internalized norms will survive a long time, even when more freedom is created…
You are Flying
I want to leave you with a suggestion: you are flying the plane. Yes, the larger plane (SC instance) is collectively governed. However, you can also think of your SC (Discourse) account like a plane of its own. Unlike a typical institution, where you contribute value in a thousand ways, hoping a person at the top will eventually give you a binary outcome (grant, job), each contribution in SC is like a micro grant. But with the payout determined by a rolling vote of your peers.
Like Bitcoin, the distributed consensus algorithm is probabilistic. But in SC, you receive rewards in proportion not to computing power but to the value you deliver, as determined by the community rather than a centralized authority. You no longer need to ask permission to get rewarded for your work! Once you make that mind shift, it can be a very empowering and fun experience.
Of course, the jet fuel (money) will run out eventually, unless you can figure out how to coordinate with the other pilots to produce revenue. But that is for another post.