Flying the SourceCred Plane

After the last SourceCred Guild meeting I attended, I was reflecting on SCRF’s SourceCred instance, and was struck with an image. @zube.paul and @brian.alexakis flying a plane into a storm.

Don’t worry! I’m projecting. As a long-time SourceCred contributor, who watched SourceCred (the project) rocket into the stratosphere, paying over $1M/year to contributors via Cred scores, only to see it crash to earth, I have a new respect for the power of these systems, as well as the challenges of ‘permissionless’ orgs generally. I often feel like a survivor, crawling from the rubble, shaking off trauma, muttering to myself about plane designs…

I also see some contributors expressing some common concerns around SourceCred (SC). This is expected. In addition to the novel challenges of algorithmic governance, reward systems have a tendency to surface all our issues.

I’ve spent the last five years working in permissionless systems with novel reward mechanisms (Decred, SourceCred). These systems have enabled me to work with the most amazing, diverse groups of humans I’ve ever encountered. And allowed me the freedom to work on what I find most meaningful. I think they’re the future. I’ve also seen some plane crashes.

Using sophisticated machine learning algorithms to reward contributors, while governing those algorithms in a transparent and decentralized way, is like building a plane while flying it. The planes do fly. But I wanted to share some observations from the failures I’ve seen, which will hopefully enable the SCRF community (and others using SC) to govern their SC instances more effectively.

The SCRF Plane

So the SCRF plane (SC instance) looks solid. It appears to be one of the safest, smartest designs I’ve seen actually. The parameters make sense. The payout policies are well designed. The pilots (SC guild members) seem generally skilled, like they have some experience implementing complex systems. And I can see them putting in the hard work of proper governance: engaging the community, creating reports, incorporating feedback, addressing concerns, etc.

The rollout feels responsible. By starting with relatively small amounts (5,000 DAI/month), and systematically iterating parameters, the stakes feel relatively low. The org is diversified with other reward mechanisms (salaries, grants, contests, etc.). The community seems well-moderated, and the culture open to experimentation.

SCRF’s SC instance right now feels like a party bus with wings, flying low to the ground. A flying open bar for academics funded by crypto whales and VCs (or wherever SCRF’s funding comes from). At this stage, the plane would presumably be easy to land. Even a crash landing would probably go unnoticed by most.

Flight Risks

Short-term, the main risks for a SC instance typically revolve around maintenance and the governance workload.

Since SC (the org) has wound down, the project no longer has paid plane mechanics (maintainers). It’s just a regular OSS project. The SCRF community has discussed this on the forum and decided the risks are acceptable for now. I agree. SC has ‘broken through’ as a technology. It has some large users incentivized to make repairs (submit PRs with bug fixes), we recently found a dev to maintain the codebase and do critical bug fixes (woot!), and some core contributors are exploring ways to secure more sustainable funding (hope to have some news soon :crossed_fingers: ).

SCRF can minimize these risks by:

  • Being transparent about the risks (e.g. this post, updates, etc.)
  • Not becoming too reliant on unique SC functionality until it has more organized support (i.e. keep it so that SC can be swapped for another reputation/valuation system if need be, or wound down, without causing too much disruption). I have recommended that SCRF limit SC to Discourse for now, as the Discourse plugin is IMO the most robust and least likely to need fixes in the near-to-medium term.
  • Keeping up the governance work that makes it responsive to community input. As expressed by the SC Guild on a recent call, “keep SC something that the community is doing, not something that is happening to the community”. Similar to early feedback I’m hearing about Coordinape, where people don’t mind so much when payouts aren’t very accurate because at least they had a voice, I find that when people have a voice in changing SC parameters, they tend to be happier with the results. Even in cases where I personally didn’t see that much improvement in the scores, as long as the results are perceived as directionally better, the act of changing the parameters is often meaningful to participants. This seems especially true when shared values can be translated directly into parameter choices (e.g. “I think reviewing/commenting is generally undervalued, let’s increase the Cred that flows to post replies”). Personally, I tend to lean towards governance minimization strategies, but have also seen communities get really into granular parameter choices (e.g. the Token Engineering Commons (TEC) instance has tons of semantic choices baked into parameter values). I suspect this will be a function of culture and implementation resources. A warning: people often want more control in theory but don’t want to do the tedious governance work, or don’t have the technical skills or resources. This can lead to disappointment if expectations aren’t managed.


As I’ve experienced myself maintaining SC instances, proper maintenance can be a lot of work, especially if you take it seriously, as SCRF has so far. Engaging the community for feedback, generating reports, holding meetings, documenting everything on GH, uploading to YouTube, etc. all adds up. Having maintained MakerDAO’s SC instance for 1.5 yrs, I have firsthand experience with how stressful this type of work can be, especially when navigating DAO politics, exploring new parameter configurations, and doing policy work around payouts. Maybe I’m projecting again. But I want to say: I see you, SC Guild. A couple suggestions:

  • Relax: you’re doing great and the stakes are still low (flying the party bus)
  • Prioritize automation and efficiency where possible: e.g. having everyone opt in every month makes sense to me conceptually and values-wise, but it adds admin overhead. I think it would probably be OK to leave people opted in, in part because a) inactive contributors will see their Cred scores diluted over time, and b) it appears that ‘Cred whales’ aren’t opting in anyway. This advice may be off if the guild has a more efficient way of updating the instance state (opt-ins/outs, payouts, balances). But if doing it the default way, by updating the static site on GitHub, the process can be tedious, time-consuming and error-prone, especially for non-devs. The admin load also generally increases linearly with the number of opted-in contributors.
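
To make the dilution point concrete, here’s a toy Python sketch (not SourceCred code; the names and numbers are made up) showing how an inactive contributor’s share of lifetime Cred shrinks on its own as active contributors keep earning:

```python
# Hypothetical illustration: why leaving inactive contributors opted in is
# relatively safe. Active contributors keep accruing weekly Cred, so an
# inactive account's share of total Cred shrinks without any admin action.

def cred_share(lifetime_cred: dict[str, float]) -> dict[str, float]:
    """Each contributor's fraction of the total lifetime Cred."""
    total = sum(lifetime_cred.values())
    return {name: cred / total for name, cred in lifetime_cred.items()}

cred = {"alice": 100.0, "bob": 100.0}   # equal history so far
print(cred_share(cred))                 # both at 0.5

for week in range(10):                  # bob goes inactive
    cred["alice"] += 10.0               # alice keeps earning weekly Cred
    cred["bob"] += 0.0

print(cred_share(cred))                 # bob diluted to 100/300 ≈ 0.33
```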

Medium term risks

Medium-term, risks around SC tend to be tied to raising reward amounts. When rewards increase significantly, approaching what people might make via traditional salaries, I’ve observed that there tends to be a psychological ‘flip’ in contributors. With the stakes higher, some become more reliant on the income (potentially even quitting their Trad job), and expectations can shift. This can be mitigated to an extent by clear communication and expectation setting. But humans are gonna human. Especially if deeply invested in established systems or trying to overthrow them. People can (consciously or subconsciously) start comparing SC to a job. Which obviously provides greater perceived income stability and guarantees (legal protections, benefits, etc.) than a typical SC instance. The plane is flying at 30,000 ft now. And people start asking questions about why the fuselage is shaking, why the fuel gauge is low, and when the plane is gonna land (it is landing, right? Right??).

Personally, having experienced the volatility of several tech startups, I prefer the volatility of SC to a job. A salary is flat, until it abruptly goes to zero. And the relationships you invest in turn into LinkedIn connections. In reality, the volatility of a flat salary is simply hidden. Usually by a centralized authority incentivized to withhold information from you and shift financial risk down the hierarchy. SC income is volatile, but payouts are determined by a distributed consensus on the value you create. It provides a different type of stability, which is not legible at first to people without experience in these systems. But apparently I’m an outlier here, presumably due to some combination of privilege, high risk tolerance, and experience with these systems. Most people I’ve noticed want more stability in their pay than I do. And creating stability via the algorithm is doable, but still relatively unexplored territory. Also worth noting that if SC payouts are high enough, they can start to compete with other reward systems (e.g. salaries), which can create governance labor.

I should also say here that, if the stakes are high enough, even scores the community generally thinks are fair are likely to be disruptive. If someone asks why their pay is X, and the answer is ‘because the pseudo-AI in the black box says so’, that may not be an acceptable answer, especially if someone is taking a pay cut from their traditional job. Data analysis and understanding of Cred flows is still nascent, so answering hard questions about how Cred scores are calculated can be difficult.

Ideological differences

I hesitate to even bring this up. I’m wary of stirring up ideological debates generally, and don’t want to impress my personal views on SCRF (WARNING: this is only my opinion, and not that of the SC community or ecosystem). However, reward systems are viewed by many as inherently political. Indeed, for many in the space, building is an explicitly political act, prefigurative politics. And SC does afford an unprecedented opportunity to encode your values, in a real, reifying way. So it should not come as a surprise that SC often raises moral, political and ideological questions.

This can lead to some common pitfalls:

  • Proxy wars: crypto protocols have a history of ideological holy wars. Typically over technical changes to the protocol (e.g. the Bitcoin block size wars of 2017). Predictably, the conflict becomes about much more than the proposed technical change (e.g. increasing the Bitcoin block size by 1 MB). Often to the detriment of the discourse around the actual change, much like IRL proxy wars are never good for the countries they happen in. If the stakes are high enough, and participants have differing incentives, discussions around even minor changes can spiral into polarized, black and white narratives detached from reality. For instance, in retrospect, we can look back at the block size debate and see that catastrophic predictions around increasing the block size were overblown. Several Bitcoin forks used larger block sizes and the sky didn’t fall. They’re still producing blocks as we speak.

    If this dynamic happens predictably with decentralized lending protocols, it would be strange for it not to show up in a system valuing contributions, where ideological and value-based arguments may actually have more relevance to the technical change.

  • Inflexibility: if you have based your identity on a particular political philosophy or ideology, and have taken an ideological stance on a particular change to the protocol, are you capable of continuing to participate if you lose? If you win, and things are getting noticeably worse, are you capable of seeing and acknowledging it? This is particularly problematic in sufficiently complex systems, where causation can be difficult to impossible to determine from correlation. I have myself fallen into this trap. Perhaps even as I write this :thinking: ?

  • Scapegoating: If the community has a lot of conflict or disagreement, that tension has to surface somewhere… If there’s no way to resolve that conflict, some will look to blame SC as the source of their problems. I’m not saying SC won’t legit be the source of the problem in some cases. Just that in the only situations I’ve seen SC blamed for a community’s problems, there also happened to be lots of unresolved conflict and other pressures not unique to that community. And the claims against the algorithm seemed weak. Though one potentially predictable issue is that conflict does make it more difficult to reach consensus on governing the algorithm, which can lead to more conflict…

    Personally, the more experience I have, in all different types of orgs, the more I suspect scapegoating will always be present to some degree. I would just urge caution around claims that SC is the ‘root cause’, as that could be obscuring deeper issues.

Long distance flights

Longer-term risks are more difficult to assess. Technology-driven social change is so fast now (and accelerating) that nobody can really predict where this is all going. Any system powerful enough to create alternatives that can rival the current system contains both utopian and dystopian potential. I personally advocate for keeping an open mind and having an emergent strategy, as expressed by adrienne maree brown in her book of the same name. Realize that larger structural issues may not be solved in our lifetime, or even several generations. Do the hard work to create the psychological safety necessary to have real conversations, know that most experiments will fail but are necessary to move forward, iterate based on actual outcomes and don’t become so attached to ideological/conceptual commitments that they bind you to bad choices.

I also suspect an ethics of care may be more promising than coding of explicit rule-based ethics, as it provides more flexibility. Though have not yet had the opportunity to test that.

On leaderboards

As an example of a contentious issue, I’ve heard a couple people express concerns about the leaderboard in the SC instance. That it could introduce unwanted competitive dynamics.

Personally, I generally agree with this sentiment. I think we’ve overemphasized competition to the point it’s harmful in many contexts. I’ve been in corporate environments where leaderboards seemed to drive toxic culture; I worked at a startup with leaderboards literally hanging over the heads of overworked sales and customer service reps, and was not surprised when it was sued for $145M for highly unethical business practices, rendering my stock options worthless :sob:

In practice, I’ve seen communities generally like the leaderboard. It seems to generate playful comments and healthy competition. For instance, Maker does bi-weekly reports on its SC instance that feature two leaderboards: top 10 posts and top 10 Cred earners over the last two weeks. The comments on them are generally positive (e.g. the latest report). New people in particular seem to like it, as it gives them recognition they may not get anywhere else. Perhaps a bigger problem in DAOs today is the tyranny of structurelessness, and visualizing invisible power structures helps promote more equality than flatly rejecting any structure that facilitates competition?

I honestly don’t know the answer for SCRF, or any community. But I will point out a couple options that don’t involve a leaderboard, should the community not want one:

  • Don’t use the SC default instance: one can display the scores however they want (or not at all). The administrators can simply not turn on the creation of the public site, and only view the instance locally when making changes.
  • Alternative visualizations: there are nearly infinite ways to visualize the basic SC data structure (a graph of nodes that represent users and contributions). Fields such as social graph analysis have produced many visualization templates to choose from.
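
As one hypothetical example of a leaderboard-free view, a community could publish only an anonymized distribution of scores rather than a ranked list of names. A minimal Python sketch (this is not an existing SC feature; the bucket size and scores are made-up example data):

```python
# Sketch: show the spread of Cred scores as an anonymized histogram,
# so the community sees overall distribution without head-to-head ranking.

def cred_histogram(scores: list[float], bucket: float = 50.0) -> dict[str, int]:
    """Count how many scores fall into each fixed-width bucket."""
    hist: dict[str, int] = {}
    for s in scores:
        lo = int(s // bucket) * int(bucket)
        label = f"{lo}-{lo + int(bucket)}"
        hist[label] = hist.get(label, 0) + 1
    return hist

scores = [12.0, 48.0, 95.0, 110.0, 300.0, 305.0]  # example data
for label, count in sorted(cred_histogram(scores).items()):
    print(label.rjust(8), "#" * count)
```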

Planes in flight

Fortunately, there are now a number of SC planes in the wild. While the governance surface presented by SC is large and relatively unexplored, a swath of the possibilities has been explored, and SCRF can go there in relative safety, should it choose.

The most comparable SC instance would be MakerDAO, which has been paying significant rewards (~20,000 DAI/mo) for governance contributions on its Discourse for nearly two years. Contributors are generally happy with the instance. It has served decentralization, and appears to be proving itself particularly useful as a recruitment mechanism. A number of DAO contributors with full-time positions in Maker Core Units (CUs) have credited SC for bringing them into the project. Maker recently adjusted its elevation down a bit (decreasing total payouts from 20,000 DAI/mo to 14,000), and created a second plane, specifically designed to compensate delegates in its governance system. The main instance pays out ~10,000 DAI/mo, and the delegate instance ~4,000 DAI/mo.

I would characterize Maker’s SC instances as the first DC-10s, regularly ferrying contributors to more conventional DAO roles and outcomes.

SCRF could do something similar, customized to its goals, and operationalize without too much risk IMO. A sort of top of funnel for the DeSci revolution :person_shrugging: ?

On Moonshots

Google’s PageRank algorithm (what SC uses to score contributions) was inspired by academia’s citation system. SourceCred was founded by an AI engineer on the Google Brain team, who quit Google to create a credit attribution system based on PageRank that would be free of corporate capture. Which subsequently escaped the lab and was used by a DeSci project (SCRF) to create a new cryptoeconomic system so powerful it replaced centralized academic institutions within a decade. Like Wikipedia replacing encyclopedias, or cryptocurrencies and DeFi replacing the Fed and banks, in the span of a few years, the ivory tower was burning, as academics ran free into the greenfields of knowledge. Scholars were finally paid for the knowledge they create (peer reviewers too this time!), not just based on where they were born, or how well they play the politics of top-down hierarchies captured by monied interests. The algorithms were governed in a decentralized manner where people had a genuine voice. Viva la Revolution!

Yeah…I should warn you, when people discover SC for the first time they tend to get a little… manic. The basic primitive (arguably, a credibly-neutral intersubjective valuation by a community) provides new technological affordances that make possible many dreams previously deemed infeasible. People often get very excited. In SC, we call this initial excitement being ‘sourcepilled’.

Here’s the thing about sourcepills though…they wear off. The SC technology is complex, and often unwieldy. It works well for a number of use cases. But bending it to your will to do bigger, more precise things is often difficult. To the point many give up. This has left more than a few with crushing hangovers and high opportunity costs, swearing off sourcepills forever.

While I believe many ambitious dreams are made possible by this technology, many of those will require considerable technical resources (developers, data analysts, governance experts, etc.), as well as work in fields such as algorithmic governance, cryptoeconomics, machine learning, sociology, org science, etc, etc… I generally advise projects to view SC as a source of ‘signal’, to be used as input into other mechanisms. And the signal can be noisy.

The good news is, the core CredRank algorithm (the engine) is solid tech, and the existing plugins (the plane models: Discourse, Discord, GitHub) provide much room for experimentation. The algorithm can be programmed to an extent at the level of configuration and surrounding policy. It’s also possible to create new plugins (aircraft) with the right resources. The plane flies. But it can be a bumpy ride.

It’s important that communities not view SC as the cure for all their problems, but as one piece of a larger puzzle. Cred scores, viewed as ‘signal’, can be useful input into any number of mechanisms. For instance, even a noisy measurement of value can be used to create good-enough binary outputs like Sybil detection, making feasible experiments in more democratic, human-centric governance.
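
A minimal sketch of that ‘noisy signal → binary output’ idea: even imprecise weekly Cred scores can back a simple sustained-participation check (say, for one-person-one-vote eligibility). The threshold and window here are hypothetical parameters, not anything SC ships with:

```python
# Sketch: turn a noisy weekly Cred series into a binary 'established
# contributor' flag by requiring the threshold be cleared in enough weeks.

def is_established(weekly_cred: list[float], threshold: float = 5.0,
                   min_weeks: int = 4) -> bool:
    """True if the account cleared the Cred threshold in enough weeks."""
    weeks_above = sum(1 for c in weekly_cred if c >= threshold)
    return weeks_above >= min_weeks

print(is_established([6.0, 7.5, 5.2, 9.0]))   # sustained contributor: True
print(is_established([0.1, 0.0, 12.0, 0.0]))  # one spike, likely noise: False
```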

I myself have just enough knowledge and experience of the algorithm and codebase to see a fuzzy, shifting view of the possibilities. And the possibilities are real. But you should know, I’m largely doing it by feel, without being able to make precise predictions or guarantees. I may have been on sourcepills when I wrote this.

DeSci does seem particularly exciting, because the scientific process and professional norms of academia constrain the scope of the problem in a way that could curtail some of the problems SC has seen in other contexts. And the internalized norms will survive a long time, even when more freedom is created…

You are Flying

I want to leave you with a suggestion: you are flying the plane. Yes, the larger plane (SC instance) is collectively governed. However, you can also think of your SC (Discourse) account as a plane of its own. Unlike a typical institution, where you contribute value in a thousand ways hoping a person at the top will eventually give you a binary outcome (grant, job), in SC each contribution is like a micro grant, with the payout determined by a rolling vote of your peers.

Like Bitcoin, the distributed consensus algorithm is probabilistic. But in SC, you receive rewards in proportion not to computing power but to the value you deliver, as determined by the community, not a centralized authority. You no longer need to ask permission to get rewarded for your work! Once you make that mental shift, it can be a very empowering and fun experience :slight_smile:

Of course, the jet fuel (money) will run out eventually, unless you can figure out how to coordinate with the other pilots to produce revenue. But that is for another post.


Hey @s_ben, beautiful and insightful analysis of SCRF’s SourceCred (SC). I must say that the analogy and imagery are the USP of this post, great job.

To clarify things, I have a few questions:

  • In a system like SCRF’s SC, where there is a constant inflow of subscribers, can you share actionable ways to create stability in pay via the algorithm?

  • In talking about the “Medium term risks” you raised, leaning on your experience as a SourceCred contributor and your first hand experience on how SCRF’s SC works, what would be your maximum monthly SC rollout recommendation to keep things sustainable on SCRF’s SC?


Good content @s_ben, this is a topic I’ve really been wanting to raise, and I’m happy to finally see it. A reward for contribution works well if the platform/organization has a good source of funding, is ready to make some sacrifices, and is able to manage the system, because I’ve heard and read about platforms that went into bankruptcy due to bad management and exploitation by some.
Now, talking about SCRF and its 5,000 DAI reward for contribution, I feel there can be an increase if the probability of it affecting the system is checked and deemed likely to produce a positive result rather than a negative one.

I agree with the view that an increase in payout/reward propels an increase in labor. Everyone wants to see a reward for whatever they participate in and the time they spend (“time is money”). Even when the reward is small there is an urge to participate and everyone wants to put in their best, but with a higher reward on SC there would be a higher level of participation, which would attract more participants and increase the labor force to produce a higher yield, increasing efficiency and thereby projecting the organization/platform to a higher level of stability.

Well @Ulysses, this is a good question. It’s a classic economic situation: paying compensation above the market rate (known in finance as “efficiency wages”) can be an important motivating force for your existing worker base. The intuition is direct: higher compensation makes a job more attractive. Since there’s an inflow of subscribers, there should be an increase in funding so the reward stays tangible, if not at the highest level then at least average. Because there’s a daily increase in the inflow of SC subscribers, pay will continue to drop and remain unstable if it stays at the same 5,000 DAI; and if it’s not increased, there would be less governance labor than in an increased-pay situation.


Increase in Inflow of Members ==> Increase in Total Pay Amount ==> Increase in Participation = Growth

Under normal circumstances these should be directly proportional to one another, producing a forward flow; but there will be a backward flow and a decline in labor/efficient participation when there is a continuous inflow of subscribers while the compensation amount is left constant.


Danke. Curious, did the DALLE images inform the meaning of the words, or was it more just eye candy?

Interesting you use the word subscriber. To me subscribers pay, instead of being paid.

Income stability could be created in a number of ways. The most obvious mechanism to use would be the payout formula, which is separate from the Cred scores. For instance, the formula could be that everyone is paid the same amount of DAI if their weekly Cred score stays above a certain threshold. You could also base payouts on different functions of the (weekly) Cred scores. For instance you could use lifetime Cred scores (sum of all weekly Cred scores), which are more slow moving.
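
Those two options can be sketched in a few lines of Python (illustrative numbers only, not SCRF’s actual configuration):

```python
# Sketch of two payout formulas decoupled from raw weekly Cred scores.

def flat_above_threshold(weekly_cred: dict[str, float],
                         budget: float, threshold: float) -> dict[str, float]:
    """Everyone above the weekly Cred threshold splits the budget equally."""
    eligible = [u for u, c in weekly_cred.items() if c >= threshold]
    return {u: budget / len(eligible) for u in eligible} if eligible else {}

def proportional_to_lifetime(lifetime_cred: dict[str, float],
                             budget: float) -> dict[str, float]:
    """Payouts proportional to slower-moving lifetime (summed) Cred scores."""
    total = sum(lifetime_cred.values())
    return {u: budget * c / total for u, c in lifetime_cred.items()}

weekly = {"alice": 12.0, "bob": 3.0, "carol": 8.0}
print(flat_above_threshold(weekly, budget=1000.0, threshold=5.0))
# alice and carol each receive 500.0; bob is below threshold
```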

There are also parameters you can tweak to change Cred scores directly. For instance, there is an ‘alpha’ parameter, which controls the ‘leakage’ of Cred from node to node. Increasing this parameter effectively allows Cred to ‘spread’ further from where it is created, creating a ‘flatter’ distribution. This could create more ‘equality’ in the sense that less active contributors could see higher Cred scores over time. Here’s an article on how the algorithm works that may be accessible to those with some high-level technical knowledge. There are also more indirect ways to create stability by tweaking Cred flows. For instance, if you can identify activity that is underpaid, you may be able to ‘boost’ it via parameter changes.
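
To make the alpha intuition concrete, here is a toy PageRank where alpha is the probability of jumping back to a uniform seed each step. SourceCred’s real CredRank has more moving parts (typed nodes, weighted seed vectors, etc.), but the underlying mechanism is the same: with a uniform seed, a higher alpha pulls scores toward uniform, i.e. a flatter distribution. The graph below is invented for illustration:

```python
# Toy PageRank: random walk with probability `alpha` of teleporting back
# to a uniform seed each step (total mass is conserved at 1.0).

def pagerank(edges: dict[int, list[int]], n: int, alpha: float,
             iters: int = 100) -> list[float]:
    scores = [1.0 / n] * n
    for _ in range(iters):
        nxt = [alpha / n] * n                        # teleport to uniform seed
        for src, dsts in edges.items():
            for dst in dsts:
                nxt[dst] += (1 - alpha) * scores[src] / len(dsts)
        scores = nxt
    return scores

# Tiny graph: contributions 0 and 1 both 'cite' node 2, which cites back.
edges = {0: [2], 1: [2], 2: [0, 1]}
low = pagerank(edges, n=3, alpha=0.05)
high = pagerank(edges, n=3, alpha=0.5)
print(max(low) - min(low))    # wider spread: Cred concentrates on node 2
print(max(high) - min(high))  # narrower spread: flatter distribution
```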

My gut tells me you could probably go pretty high. Up to 20k/mo like Maker did, perhaps higher. However, I do not trust my gut that far into the future. I would recommend slowly increasing the payouts and changing course if needed based on community feedback. But that is assuming that SCRF has the political will and bandwidth to manage the instance. Which I am not assuming at this point. Increasing payouts could increase labor costs and introduce new issues, such as “Cred farming” (contributors creating posts primarily to make money), gaming, new social dynamics, etc. It’s also not clear to me what the community sentiment is. In the initial post proposing SC, I saw some common concerns, and am curious what people think now after 5k/mo DAI has been distributed for a couple months.


Have any examples you can share?

I think it will affect the ‘system’ regardless, especially if payouts are increased. And that ‘positive’ is highly subjective, so hard to measure unforch.

I think it’s pretty obvious by now that money will attract contributors, and that SC would produce a higher ‘yield’ in the form of more contributions (posts, comments, etc.). Efficiency and stability however assume a defined output, as well as a business model presumably if the stability is economic in nature. I don’t think SC solves this on its own. There have been a few efforts I know of to use SC to direct contributors towards certain outcomes, but I have yet to see any that were demonstrably successful. It can increase engagement, which can indirectly progress a community’s stated objectives. But because valuations are determined largely by the community, this essentially allows contributors to get paid for all value they create, not just the value that advances those objectives. If the payouts are substantial enough, this could even subvert more traditional forms of authority, as you’ve now taken away that authority’s main mechanism for exerting power over (revoking pay). Even worse, by empowering contributors, you have lessened their fear. And most hierarchies rely primarily on fear to maintain power relations.

Ok now I’m drifting into personal politics I fear. And there are other valid interpretations I’m leaving out. But I will just say, it seems clear that decentralizing valuation does not on its own produce efficiency or stability. It’s possible SC is creating more value overall compared to other mechanisms. But to create economic stability, a community must still produce something to trade for fiat currency, similar to a company. Or, as some in the DAO space are now postulating, become its own mini nation state, with its own currency, internal economy, etc. Which is a much more ambitious project.

This is an important choice. I’ve seen communities that kept a flat amount (e.g. SourceCred, MakerDAO), and communities that varied payouts based on Cred created (e.g. MetaGame). I wish I had hard data, analysis and conclusions. Alas, I do not. I will say that if payouts are kept flat, I have heard contributors say it introduces a ‘scarcity mindset’. For example, I’ve heard a couple people say they were more hesitant to invite people to a DAO because they didn’t want their payouts decreased. It seems to be less of an issue if the payouts are higher. Presumably because there is a limit to the number of people that want to, say, contribute to governance in a DeFi protocol. I personally don’t feel it alters my behavior either way, but that is partially due to privilege. I’ve heard similar attitudes from other people in the crypto space that I assume are comfortable financially (a common demographic in DAOs).

Varying payouts by Cred generated seems like the most sensible option to me generally. People object to it because they fear it will increase the incentive to game/farm. But from what I’ve observed (again, from a limited/biased perspective), I’ve not seen that fear justified in practice. It does however introduce variability into an org’s budgeting. Which could create additional governance labor if adjustments need to be made, and possibly execution risks due to less accurate financial modeling.
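
The budgeting trade-off can be sketched in a few lines (all numbers illustrative): a flat budget gives predictable spend but dilutes per-Cred pay as activity grows, while a budget scaled to Cred created keeps per-Cred pay stable at the cost of variable monthly spend.

```python
# Sketch: flat vs Cred-scaled monthly budgets. The 5,000 DAI figure echoes
# the amounts discussed above; the per-Cred rate is a made-up parameter.

def flat_budget(total_cred: float, budget: float = 5000.0) -> tuple[float, float]:
    """Returns (monthly spend, DAI paid per unit of Cred)."""
    return budget, budget / total_cred

def scaled_budget(total_cred: float, rate: float = 10.0) -> tuple[float, float]:
    """Spend scales with Cred created; per-Cred pay stays fixed."""
    return total_cred * rate, rate

for month_cred in (400.0, 500.0, 800.0):   # growing monthly activity
    print(flat_budget(month_cred), scaled_budget(month_cred))
# flat: spend fixed at 5000, per-Cred pay falls; scaled: spend grows, rate fixed
```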

tldr; yes. Though unless DAOs can turn that growth into revenue (or some other form of sustainability), then Growth => run out of money faster.


To feed your curiosity :grinning:, the images made the words more potent, nothing like eye candy here.


My bad, the word should be opt-in. That’s what I meant.


Yes, it should, but only if the system is properly managed to utilize all available opportunities, like the constant inflow of contributors.


You’re welcome… I’m always ready to dedicate my time, energy and knowledge to the crypto world in general.

This is a great post @s_ben. I really agree with the point of maintaining the initial stake amount. I feel that when the stakes get increased, a few people become more dependent on the funds and start treating it like a job, and with that mentality, exploitation, extortion and greed come into play. This could be mitigated to an extent by clear communication and expectation setting. But humans are gonna human, particularly if deeply invested in established systems or trying to overthrow them. So I’d simply stick with the option of maintaining the initial stakes and rewards for now, since SCRF isn’t an employer mandated to pay workers a salary.

@Freakytainment whatever has positive value must also be weighed against its negative side. When SCRF members are committed to building and expanding the system, they’ll do it willingly regardless of whatever is sent to their wallets. If the stakes are continually increased and the 5,000 DAI is topped up every time because of the constant inflow of members/contributors, there will also be a decline/crash in the system’s finances someday.
We should recognize that sustainability still does not fit flawlessly into the business case. Companies have trouble distinguishing between the most important strategic opportunities and risks on the horizon.
A sustainable lifestyle is generally considered more expensive, unless everything is balanced and sustainability isn’t placed as a priority over other necessary aspects.


This is a nice write-up @s_ben; I wish to give my own point of view. I think a payout formula independent of the Cred scores would be the most obvious technique to use. Numerous strategies could be used to establish income stability. For instance, if each person’s weekly Cred score remains over a predetermined threshold, everyone receives the same amount of DAI.


@s_ben I am curious, especially considering other responses referencing sustainability and an increase in payout and contributors: did you notice any positive or negative change with the change/reduction in payout?
