Research Summary: Leveraging Blockchain for Greater Accessibility of Machine Learning Models

If I am not mistaken, the model that is undergoing training is the same model that judges whether submitted data is good.

This is the quote from the paper that led me to this understanding.

Ongoing self-assessment: Participants effectively validate and pay each other for good data contributions. In such scenarios, an existing model already trained with some data is deployed. A contributor wishing to update the model submits data with features x, label y, and a deposit. After some predetermined time has passed, if the current model still agrees with the classification, then the contributor’s deposit is returned.
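
To check my reading, here is a rough Python sketch of how I picture that flow. All of the names (`Contribution`, `settle`, `model.predict`) are mine, not from the paper:

```python
# Rough sketch of how I read the self-assessment mechanism (my own naming,
# not the paper's actual implementation).
from dataclasses import dataclass

@dataclass
class Contribution:
    features: list      # x
    label: int          # y, the label the contributor claims
    deposit: float
    submitted_at: int   # time/block at which the data was submitted

def settle(contribution, model, now, wait_period):
    """After the wait period, refund the deposit only if the *current*
    model still agrees with the contributed label."""
    if now - contribution.submitted_at < wait_period:
        return None  # too early to settle
    predicted = model.predict(contribution.features)
    if predicted == contribution.label:
        return contribution.deposit   # refund (and possibly a reward)
    return 0.0                        # deposit is forfeited
```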

If this is the case, suppose someone submits a very large amount of false or adversarial data, tricking the model into treating their data as good and everyone else’s as bad. The attacker would reap huge rewards at the cost of the model’s effectiveness and the community’s trust.

Sounds like a huge risk? How does the author address this issue?

5 Likes

I think you are misunderstanding. The “training set” is closed and is not open to the public. This is how the algorithm knows what is “good” or “bad”. The scenario you’re describing is not possible precisely because the training phase is not open to the public. If the algorithm were corrupted during the “training” process, it would never have been accurate in the first place. That is why the research author is explicit about the training phase coming before the algorithm is released into the wild.

Someone would not be able to submit “fake good data” that tricks the algorithm once it’s trained. If you try to elaborate on what constitutes “fake good data”, you just end up with “bad data”. This is a problem created by circular logic that arises from misinterpreting the training stage.

What you’re describing is not actually a “risk”; in fact they replicated this very scenario in the tests. That is stage 2 of the tests. They did exactly what you’re saying: they actively injected a ton of “bad data” into the algorithm, and that came AFTER the training phase. It briefly made the algorithm less accurate, but because an attacker does NOT have unlimited funds, good data eventually rebalances the algorithm.

The notion of “tricking the algorithm” in the way you presented is literally how they tried to attack the model. An “abstract attack” does not have a legitimate threat vector until it is made into a tangible line of attack, and your abstract framing was realized in the simulated attack. Again, the attacker does NOT have unlimited funds. Thus an attacker may be able to affect the algorithm in the short term (as was shown), but the “good data” submitted over time outweighs the “bad” data, keeping the AVERAGE accuracy above 79.5%.
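
To make the “limited funds” point concrete, here is a toy simulation in Python. Every number in it is made up for illustration and has nothing to do with the researchers’ actual parameters; it only shows the shape of the argument, a dip followed by recovery once the attacker’s budget runs out:

```python
import random

random.seed(0)

attacker_budget = 200.0                    # deposits the attacker can afford (made up)
fee = 1.0
bad_accuracy, good_accuracy = 0.10, 0.95   # made-up chance each update is "good"

correct = 0
running = []
for t in range(2000):
    if attacker_budget >= fee:
        attacker_budget -= fee
        p_good = bad_accuracy              # attacker floods bad data while funds last
    else:
        p_good = good_accuracy             # honest contributors dominate afterwards
    correct += random.random() < p_good
    running.append(correct / (t + 1))      # running share of good updates (accuracy proxy)

print(f"after the attack wave (t=200): {running[199]:.2f}")
print(f"at the end (t=2000):          {running[-1]:.2f}")
```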

“already trained with some data is deployed”: this is a key part of the quote. Once “training” is closed, the algorithm has a consensus set that is “right”, which can be checked against and not corrupted. The “training” set stays closed while the algorithm learns from new input. While the model “updates itself” based on new input, the original training model is not “changed” by the new data; it is only added upon. Thus “new bad data” can’t corrupt the training algorithm. That’s why it happens in stages.

There is no such thing as “fake good data”. That is just misinterpreting “bad data”. Fake data is bad regardless of whether it “looks good” or not. It’s just “bad data”. I think that construct is what is throwing you off.

If someone had an “infinite supply of funds” from which they were drawing to inject “bad data” into the algorithm, then that actor most definitely could ruin the algorithm. Fortunately, no one with an “infinite supply of funds” exists, which is why that “abstract threat” is not a “real threat.”

3 Likes
  1. The initial model needs to be accurate to begin with. If that’s the scenario, wouldn’t the model author easily be able to simulate and add more training data on his own to improve the robustness of the model?
  2. If the initial model is less accurate (a highly biased model), wouldn’t this actually incentivize people to upload more biased training data?
  3. Is the whole system designed to “increase” the accuracy of the model or to “maintain” it? A model which actually “learns” should be able to classify more “ambiguous” samples correctly over time. If users are initially punished for providing ambiguous data, how do we improve the generalization of the model?
3 Likes

Thank you for the questions

  1. Under normal circumstances this is the case, but this is the point of making it an experiment about open-sourcing machine learning training.
  2. You have described an abstract concept without elaborating on what that means in this scenario. What would “highly biased” mean when the data is being pulled from a data set? How would you define that? Are you implying the researcher left data out?

What are you trying to define with “more biased training data”?

You have not given a tangible enough definition for this point to be more explored.

  3. It’s designed to maintain, and hopefully increase, accuracy, with the expectation that it can never attain 100% accuracy. There is no “ambiguous” in this model. There is only “good” or “bad”. There is no “ambiguous” data.

I think people are approaching the data like a human. Computers are not fooled by fake time-stamps or slightly modified data.

“Ambiguous” or “fake” is the same as “bad”.

3 Likes

Thank you for answering.

To add more context to point 2: a highly biased model means an under-trained model. For example, say the model owner is only able to train his text classifier with a limited amount of labeled data (he does not have access to any more data on his own); the model will then likely make poor predictions on new data (+/- sentiment). That same model will be used to punish users, so contributors will label data according to the wrong predicted label because they don’t want to be punished for going against what the classifier “thinks” is right. Over time the model will be fed more data with incorrect labels and still think it is right. It will be a self-defeating model. Are the researchers exploring ways to make this task less exposed to the initial quality of the model? Or is this not an issue at all?

Example:

  1. An under-trained text classifier gets uploaded.
  2. A user contributes by supplying a piece of data and a label: {“data”: “the movie was ok”, “label”: “Negative”}.
  3. The model says it should be “Positive”, when in reality it should be “Negative”.
  4. The user changes the label to “Positive” to avoid punishment.
  5. The model keeps learning towards the wrong label over time.

Point 3 about the ambiguous data is somewhat related to point 2. I understand there is no “fake data” and that ML models can never achieve 100%, but say I upload a piece of data and a label: {“data”: “this movie was really on another level”, “label”: “Negative”}, and the model gives 51% (a low probability) to Positive because it’s really hard for it to tell whether the text is sarcastic. The user is then forced to mark it as positive; wouldn’t this hurt the actual use case (a generalized model) in the real world? Would it make sense for the framework to discard data when its confidence in the provided label is below a certain threshold, thus maintaining the quality of the model over time?
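
Something like this is what I have in mind; it is just a sketch of the idea, with a hypothetical `predict_proba` interface, not the framework’s actual API:

```python
# Sketch of the idea: only ingest a contribution when the model is reasonably
# confident about the label it would assign (hypothetical interface).
CONFIDENCE_THRESHOLD = 0.80

def should_ingest(model, text, proposed_label):
    probabilities = model.predict_proba(text)   # e.g. {"Positive": 0.51, "Negative": 0.49}
    predicted_label = max(probabilities, key=probabilities.get)
    confidence = probabilities[predicted_label]
    if confidence < CONFIDENCE_THRESHOLD:
        return False   # too ambiguous: discard instead of punishing the contributor
    return predicted_label == proposed_label
```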

3 Likes

Thank you for taking the time to expand upon this subject. I am glad that you highlighted this issue, because this is the difference between an abstract threat and a realized threat. What you just explained is in fact a real threat. This is not just a problem with machine learning, but with any data analysis. The reason the recent Facebook study data sets were problematic is that Facebook did not provide researchers with all of the data that was available. This was likely an intentional attempt to weight the data towards a desired outcome instead of unbiased research.

An “under-trained” model would in fact be a problem that would need to be addressed to ensure that an open-source ML algorithm does not produce wildly inaccurate assessments. This is one of the reasons the researchers included the number of training sessions in their full explanation, and the maintained accuracy of the model in the results indicates the researchers did not under-train it. The fact that they actively injected bad data into the data set without corrupting the accuracy long-term is another indication that their model was not under-trained.

In your final example, I think you have found a circular problem without knowing it. If a user is not “clear” about their feelings, the data is ambiguous, and thus there is no clear way to know if it is “positive” or “negative”. This data is effectively “bad” data, so it’s likely going to be thrown out. There is a threshold established which qualifies “good” vs. “bad”, and I believe you are asking about the movement of that threshold in real time relative to ambiguously stated “sentiments”. If I am understanding you correctly, then someone who is not clear about their sentiment cannot have their sentiment “accurately” assessed, because it’s “bad data” and would likely be thrown out.

I believe I understand what you are meaning by “ambiguous” now, and in this case “ambiguous” is “bad” because it is not “clear”. If there is no clear way to establish “positive” or “negative” sentiment about something based on the “ambiguous” language, that is deemed “bad” data and either counted as “bad” or thrown out altogether.

If that is the understanding, I apologize for misunderstanding the usage of “ambiguous” earlier. I can see how you were trying to ask one question, and it looked like another, and I was slightly misinterpreting your statement. Your clarification was very useful to get to a salient point, and your initial questions were valid. I just wanted to make sure that was not lost in text, as I did not want to come off as aggressively asking you to define “ambiguous” in this context. I hope you can understand now why I was slightly misinterpreting your question, but even further this thread would make a real-time example as to why ambiguously framed data usually does not get counted without further elaboration.

For the record, the data sets analyzed were binary response so it was effectively “like” or “not like” with no room for misinterpretation. However, if the data sets were free-response then the problem you raised would be more relevant. IMDB data sets were used to train their algorithm in addition to being the data sets from which the open-source analysis would come. The reason the researchers chose IMDB specifically is because their sentiment analysis is binary and not free response.

2 Likes

Thanks for trying to answer my questions. I appreciate that you took them seriously. Your discussion with xiaotoshi is also insightful. Here’s my follow-up:


It is claimed that when the training set is not public, an attack is not possible. That is not obvious to me. To my knowledge, access to the dataset is not strictly necessary for a successful attack.

This kind of attack has been around since 2016.

The authors successfully attacked models hosted by Amazon and Google (without knowing what they were trained on), demonstrating how vulnerable seemingly powerful AIs can be.

As for the notion that there wouldn’t be enough funds, this is also addressed by another paper:

Our results are alarming: even on the state-of-the-art systems trained with massive parallel data (tens of millions), the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets (e.g., 0.006%).

This shows that data poisoning is achievable with only a tiny fraction of the whole pool.
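
To put that percentage into perspective, here is a back-of-the-envelope calculation using the order of magnitude the paper mentions (the corpus size is an illustrative figure, not the paper’s exact number):

```python
# Back-of-the-envelope: 0.006% of a corpus with tens of millions of examples.
corpus_size = 30_000_000           # "tens of millions" (illustrative figure)
poisoning_budget = 0.006 / 100     # 0.006%

poisoned_examples = corpus_size * poisoning_budget
print(f"{poisoned_examples:,.0f} poisoned examples")   # prints: 1,800 poisoned examples
```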

Moreover, this protocol not only makes attacks easier by letting anyone deposit data, it also provides a strong incentive for people to trick the model, since they could profit not only from the normal rate of reward but also from the fees of other contributors.

Adversarial machine learning attacks are not the only threat in this protocol.

Another problem emerges from the risk of over-fitting.

For those reading this comment who don’t know what over-fitting is in the AI/ML/DL context, here’s an analogy to put it plainly:

You let the students (AI model) practice the same test bank (dataset) over and over again (train many rounds).

Humans gradually improve as they practice more. However, machines operate a little differently. It has been observed that training for too long on a dataset leads them to draw the wrong inferences.

Take this as an example: when the machine is asked what the color of the flower is (and shown a picture of a sunflower), it can correctly answer “yellow”.

Although on the surface it answered correctly that the flower was yellow, explainable-AI analysis found that the machine just looked for the word “flower” and answered “yellow”, without taking the context of the question into account.

The model mislearns when it is trained on similar data over and over.
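
If you want to see the effect for yourself, here is a minimal example using scikit-learn (assuming it is installed; this is my own toy demo, unrelated to the paper’s setup). A tree allowed to memorize the training set typically does worse on held-out data than a constrained one:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Small, noisy synthetic dataset (10% of labels flipped).
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (2, None):   # shallow tree vs. a tree allowed to memorize
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
# The unconstrained tree scores ~1.00 on the training data but typically
# noticeably lower on held-out data: it memorizes noise instead of the signal.
```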

So if the reward is based on whether the model deems the data correct, then the rational thing for someone seeking minimal cost and maximum returns is to submit the same type of data repeatedly, with only small modifications.

That would be far from ideal. This is exactly why high-quality data is so important to performance, yet the protocol goes for quantity over quality.
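
To make the strategy concrete, churning out near-duplicates of an already accepted example is trivial (a toy sketch of my own, just to show how cheap it is):

```python
import random

random.seed(0)
accepted_example = "this movie was great and I loved every minute"

def near_duplicate(text):
    """Make a trivially modified copy: tweak one word at random."""
    words = text.split()
    i = random.randrange(len(words))
    words[i] = words[i] + "!"            # minimal, cheap modification
    return " ".join(words)

# Each copy is cheap to produce, is likely to be judged "good" by the current
# model, and drags the model towards an ever narrower slice of the input space.
spam = [near_duplicate(accepted_example) for _ in range(5)]
print(spam)
```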

4 Likes

Your observations about over-fitting and data poisoning are spot-on. The notion of poisoning data is a real threat that is taken seriously. In that regard, one of the most practical and widely implemented solutions to this problem is to have a machine learning/human hybrid quality-control step at some point so that the ML is not completely automated. Additionally, the distinction between “supervised” and “unsupervised” machine learning determines how viable “poisoning” is for an attacker trying to skew the data analysis or the model’s fit.

Since a supervised ML algorithm has its classifications and levels mostly set from the beginning, the issue of over-fitting is not as much of a problem, and it’s easier to isolate false positives in the data when the classification levels are already defined. Conversely, unsupervised algorithms would be harder to manipulate in real time, in that it’s not really possible to know what classes have emerged unless one has access to both the data sets and the algorithm’s output.

As the field of “Artificial Intelligence” emerged as a subset of computer science, “Hybrid Intelligence” looks to be emerging as a subset of the AI field.

Also, the “reward system” runs on a delayed schedule so that data can be analyzed for quality before payments are sent out. This means an attacker would not know for some time whether their attack was successful, which makes a “successful” attack more difficult to launch. The examples you posted don’t seem to have a monetary limitation on them, which makes it possible for an attacker to spam a significant amount of data in a short period of time, compared to a pool that is limited by submission fees.

The last scenario you presented would not really be profitable, or likely even “feasible”, for an attacker, because the cost acts as a barrier to entry. Effectively, if an attacker decides to spam this type of pool with “bad data” dressed up as “good data” in order to collect a reward, the cost of the attack starts to outweigh the benefit of the reward the closer the threshold of “poisoning” gets to 51% of the data set. That is to say, if in the worst case an attacker needed to submit 51% of the data set to corrupt it and collect the most reward, the other 49% of the pool still receives roughly half of the distributed rewards. The contributed data is not proportional to the reward, and the number of contributors to the pool is also a factor, which is why contributing 51% of the data won’t result in receiving 51% of the rewards. In this scenario, there is an upper limit on what percentage of a pool can be captured through malicious attacks before the attacks become unprofitable, or even cause the attacker to incur a loss.
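
As a back-of-the-envelope illustration of that trade-off, consider the following. Every number here is made up, and the real reward distribution is more involved, but it shows how the cost grows faster than the best-case payoff as the attacker’s share increases:

```python
# Toy cost/benefit check for a spam-to-poison strategy (all numbers made up).
fee_per_submission = 0.40          # rough per-update fee mentioned elsewhere in the thread
reward_pool = 1_000.00             # total rewards distributed over the period (assumed)
honest_submissions = 10_000        # assumed volume of honest contributions

for attacker_share in (0.10, 0.30, 0.51):
    # Submissions needed so that the attacker makes up `attacker_share` of the pool.
    attacker_submissions = int(honest_submissions * attacker_share / (1 - attacker_share))
    cost = attacker_submissions * fee_per_submission
    # Rewards are not proportional to raw volume; assume the attacker captures at
    # most their share of the pool, and less once bad data is penalized.
    best_case_reward = reward_pool * attacker_share
    print(f"share={attacker_share:.0%}: cost ≈ ${cost:,.0f}, "
          f"best-case reward ≈ ${best_case_reward:,.0f}")
```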

While an algorithm with no monetary fee may be poisoned with a small amount of data, the same experiments would need to be run on ML algorithms that have fees associated with training. In this case, the experiment shows that the fees eventually stop attackers from continuing to contribute. However, more exhaustive attack scenarios obviously need to be explored before any definitive statements can be made about the benefits of this approach.

While there may be “some” incentive to try to game the system, there is not actually a “strong” incentive, given the nature of the pool’s distribution mechanism and the fact that the weighting of payments is not winner-takes-all. That said, where the lines between “little”, “some”, and “strong” incentive fall is something future research needs to determine.

3 Likes

The key point of this research seems to be the relationship of incentives/rewards/punishments to maintaining algorithmic accuracy. That makes sense to me.

I can see why “a small fee for contributing data combined with a reward mechanism for data that is validated as good” can lead to improved ML algorithmic accuracy.

However, the other point made in the research is about security, and that one seems less obvious to me.

Why does “including a smart contract” make “deploying machine learning training algorithms” more secure than “a purely open-source system which does not include incentive mechanisms in the design”? Where does the enhanced security come from?

3 Likes

It is not inherently a “smart contract” but a “smart contract with fees associated with it” that effectively limits an individual’s capacity to update a training algorithm. At the time of publication, it was roughly $0.40 per update. In the case of an open-source machine learning algorithm that has no smart contract with fees associated with updating it, a nefarious attacker could run as many updates/attacks as time allows. This makes a “fee-based update system” inherently more secure than a system which does not require fees to make updates.

The smart contract itself is not arbitrary; it is part of the mechanism managing the fee acceptance and redistribution. In this case, it’s not the “smart contract” necessarily, but the “rules by which the contract operates” that charge a fee to update the learning algorithm and then reward users who submitted data that was determined to be “good”. The fees effectively limit an attacker’s capacity: there will be no attacker with “unlimited funds”, so in a fee-based system “unending attacks” cannot occur in the way they can in a fee-less open-source system.
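
In rough pseudo-Python terms, the difference looks something like this. It is an illustrative sketch only; the actual framework uses on-chain smart contracts, and all of these names (`FeeGatedModel`, `add_data`, `settle`) are mine:

```python
# Sketch of a fee-gated update rule (illustrative only; not the project's real contract code).
class FeeGatedModel:
    def __init__(self, model, fee, wait_period):
        self.model = model              # anything with update()/predict()
        self.fee = fee                  # e.g. roughly $0.40 worth of deposit per update
        self.wait_period = wait_period  # delay before deposits are settled
        self.pending = []               # (contributor, data, label, submitted_at)

    def add_data(self, contributor, data, label, payment, now):
        """Every update costs a fee, which is what bounds an attacker's reach."""
        if payment < self.fee:
            raise ValueError("update rejected: fee not covered")
        self.model.update(data, label)
        self.pending.append((contributor, data, label, now))

    def settle(self, now):
        """Refund/reward contributions the current model still agrees with."""
        payouts, still_pending = [], []
        for contributor, data, label, submitted_at in self.pending:
            if now - submitted_at < self.wait_period:
                still_pending.append((contributor, data, label, submitted_at))
            else:
                agreed = self.model.predict(data) == label
                payouts.append((contributor, self.fee if agreed else 0.0))
        self.pending = still_pending
        return payouts
```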

I will give a further example: if the rules of the smart contract were poorly written, for example with an extremely short reward-evaluation period, or with rewards given purely for “data contribution” rather than “good data contribution”, the algorithm would likely skew towards attackers’ noisy, inaccurate data being the most influential.

Since the smart contract’s rules were written to reward “good” data, with a reward period long enough to give the system time to validate that the data was in fact “good”, the results skewed towards maintaining an accurate model. The rules of the smart contract will always skew the data in some direction, but in this case they skewed it towards a net positive outcome for data accuracy.

2 Likes

Speaking of security, asking for an initial deposit makes a lot of sense to me. The summary I am currently working on quotes another paper that uses something similar to prevent a DoS attack: they introduce an initial fee that is returned only after the protocol has successfully launched, which discourages malicious participants from trying to take over.

4 Likes

Hey, I’m the main researcher working on this. Good questions. The words “secure” and “security” are very special words with deep meanings to me. I use them very carefully because they encompass many things. Unless my Ctrl+F is failing me, I don’t see them used in this article, and they’re not used in the original paper either. I do see the term used in the blog post (emphasis mine):

Leveraging blockchain technology allows us to do two things that are integral to the success of the framework: offer participants a level of trust and security and reliably execute an incentive-based system to encourage participants to contribute data that will help improve a model’s performance.

Here the security is about transparency. Open-source code is great, but it’s not sufficient. For example, even if the back-end code for this website were open source, I wouldn’t know what really happens when I enter my comment into this box, because I can’t verify that the code running on the back-end machines is the same as the open-source code that I read. Smart contracts offer a stronger guarantee that the code we see is the code that runs.

5 Likes

@juharris That’s a good answer, Justin. Not knowing what code is “actually running” on servers, and the fact that smart contracts make it “more knowable,” are huge points.

I wrote my question a month ago, and I don’t frankly remember where I was quoting from, but it must have included the idea of “security” or I wouldn’t have quoted it that way. It may have been in the blog post which you cited.

Thanks very much for your response.

Edit: Actually, I just looked, and the language I quoted was in fact from this article under your byline, which I took not as a “blog post” but as the original research article being summarized above. That doesn’t invalidate anything you say in your response, but for accuracy’s sake I wanted to note it.

2 Likes

For the record, I summarized the recent blog post, which itself summarized the previously published articles and works, into a more concise explanation of the experiment. The original work from the IEEE conference was referenced in the background section:

" The original framework was proposed in 2019 at an IEEE conference on Blockchain"

No problem. The word “security” was used in some comments.

1 Like

It’s really great to see the insightful discussion here. It seems like most questions have been answered well or at least addressed, but I’m happy to clarify more. We also have an FAQ here: GitHub - microsoft/0xDeCA10B: Sharing Updatable Models (SUM) on Blockchain

A key assumption in the system is that users should monitor a proxy for the model’s accuracy. Hence the plot showing the accuracy over time: 0xDeCA10B/simulation at main · microsoft/0xDeCA10B · GitHub. It’s crucial for users to track the model’s performance before contributing to it, just like you would check previous trade prices, volume, and other metrics for a stock before buying it. Just as stocks have many sites and analysts, we envision that shared models can have many competing monitoring services to let users know if the model is corrupt and what type of data it works best with.
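
As a rough sketch of that idea (illustrative Python, not code from the repository), a monitoring service could be as simple as periodically scoring the shared model on a held-out set it trusts and flagging drops:

```python
# Rough sketch of an external monitoring service (not code from the repository):
# periodically score the shared model on a trusted held-out set and flag drops.
ALERT_DROP = 0.05   # alert if accuracy falls more than 5 points below the baseline

def monitor(fetch_model, trusted_X, trusted_y, baseline_accuracy):
    model = fetch_model()                       # e.g. read current parameters from the contract
    predictions = [model.predict(x) for x in trusted_X]
    accuracy = sum(p == y for p, y in zip(predictions, trusted_y)) / len(trusted_y)
    if accuracy < baseline_accuracy - ALERT_DROP:
        print(f"WARNING: accuracy {accuracy:.2%} is well below baseline "
              f"{baseline_accuracy:.2%}; the model may be corrupted.")
    return accuracy
```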

We want to emphasize that the research is still exploratory. We proposed a framework, a baseline, that we hope others will expand. We showed that it’s possible to easily share updatable models in a decentralized way; whether you should, and how, depends on each specific scenario. The simulation tools should help you determine if your model is ready to be decentralized.

2 Likes

Thank you so kindly for taking the time to respond to the thread! In an attempt to get more discussion going about this topic, I took the liberty of responding to the questions that had been posed not knowing that you would join the thread. Please let me know if I did not represent anything accurately or if there is anything that could be better explained, as I will defer to you as the author.

3 Likes

I saw some of your answers and I think you did a great job clarifying and referring to ideas from the original paper. Thanks!

2 Likes

Thank you both, Larry and Justin, for providing such elaborate explanations for each query posted.
Just to summarize, I believe we have had multiple questions posted about “good”, “bad”, and the rather notorious “ambiguous” data. Perhaps sentiment analysis doesn’t suffer much from ambiguity (although I would disagree), but an image classification problem pertaining to a specific scientific domain, such as the classification of different kinds of steels, would (that’s my research domain, and it keeps me up at night :slight_smile: ). Since research work is all about finding and explaining edge/ambiguous cases, maybe we don’t want to classify them as “bad”. I would love to hear your thoughts on what you would add to the existing model (a qualitative description would suffice). I’m asking because I want to pursue this in the next quarter, and any comments would greatly help. I’m open to collaboration as well. Thanks again, Amit

4 Likes

Wow, that’s a lot to digest, given the many perspectives participants in this thread have already contributed. Contributing to a compendium of datasets is a good idea; however, whether a machine learning model ends up trained on more signal or more noise is not just an issue of the volume or quality of the data available. I would propose that it is more a problem of “missing feature values”.

The paper “Data Preprocessing For Supervised Learning”[1] illustrates this problem:

In many applications learners are faced with imbalanced data sets, which can cause the learners to be biased towards one class. This bias is the result of one class being heavily under-represented in the training data compared to other classes… Classes containing few examples can be largely ignored by learning algorithms because the cost of performing well on the overrepresented class outweighs the cost of doing poorly on the smaller class. Another factor contributing to bias is overfitting. Overfitting occurs when a learning algorithm creates a hypothesis that does not perform well over unseen data. This can occur on an underrepresented class because the learning algorithm creates a hypothesis that can easily fit a small number of examples, but fits them too specifically.
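
As a toy illustration of that kind of imbalance and of one conventional remedy (simple random oversampling of an under-represented class), here is a short example of my own, not code from the cited paper:

```python
import random
from collections import Counter

random.seed(0)
# Toy imbalanced dataset: 95 examples of class 0, only 5 of class 1.
dataset = [("some text", 0)] * 95 + [("some text", 1)] * 5

counts = Counter(label for _, label in dataset)
target = max(counts.values())

balanced = list(dataset)
for label, count in counts.items():
    minority = [row for row in dataset if row[1] == label]
    balanced += random.choices(minority, k=target - count)   # random oversampling

print(Counter(label for _, label in balanced))   # both classes now have 95 examples
```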

In real-world application scenarios, data sets are rarely balanced in a way that gives the correct signals for a machine learning model to pick up on for a “proper” fit. That is why thorough data mining for feature engineering is often more important than the fitting of the machine learning model itself. “Data Preprocessing For Supervised Learning” also lists several conventional data preprocessing techniques to tackle this issue. In the context of data mining techniques, it is also important to note that without a rigorous empirical background, data mining can often amount to “data snooping”[2], for example when an amateur practitioner who cannot differentiate between spurious and explainable results uses specific data points for inference or model selection without any prior knowledge of their significance.

“Data snooping” is widely acknowledged to be a dangerous practice because it distorts the distribution of test statistics under the null hypothesis in statistical studies.[2] However, since any end performance metric of a model can only be determined by the incumbent “testing data set” and the feature labels represented in that “testing data set”, I do not think labeling whether someone has contributed “good” or “bad” data should be the main subject of discussion.

Andrew Ng has recently started to promote the idea of smartly sized, “data-centric” AI rather than the conventional “big data” approach; here’s a related article from spectrum.ieee.org: “Andrew Ng: Unbiggen AI - IEEE Spectrum”. In light of this, the implication may simply be that more empirical studies need to be performed by researchers, rather than having commercial practitioners throw a “train test split” at an algorithm and wonder why the model is not producing “appropriate” estimates. I would also suggest that these statistical considerations be integrated into the heuristic aggregations of this framework’s smart contract.

[1] Kotsiantis, S.B., et al. “Data Preprocessing For Supervised Learning”. Citeseerx.Ist.Psu.Edu, 2021, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.8413&rep=rep1&type=pdf.

[2] White, Halbert. “A Reality Check for Data Snooping.” Econometrica, vol. 68, no. 5, Wiley/Econometric Society, 2000, pp. 1097–126, http://www.jstor.org/stable/2999444.

3 Likes