Research Summary: Leveraging Blockchain for Greater Accessibility of Machine Learning Models

TLDR

  • The researcher proposes a framework for open-access, decentralized machine learning in which models and their data sets are shared, without relying on a centralized data-processing mechanism.
  • He proposes a collaborative trainer, an incentive mechanism, and an automated data handler that can be combined to produce a superior final model for a machine learning process.
  • The researcher suggests that by creating blockchain-based incentives for contributing to data sets, it may be possible to improve the public’s access to high-quality data for improved machine learning.

Core Research Question

Can a model for building decentralized machine learning (ML) algorithms be designed that helps accelerate the evolution of AI? What type of model for building decentralized ML algorithms would democratize artificial intelligence (AI), keep costs low, and keep its data updated and relevant?

Citation

Harris, J. D. (2021). Leveraging Blockchain for Greater Accessibility of Machine Learning Models. Stanford Journal of Blockchain Law & Policy. Retrieved from https://stanford-jblp.pubpub.org/pub/blockchain-machine-learning

Background

  • The researcher proposes an open-access machine learning framework in which collaborators are rewarded for contributing data that is deemed “good” whereas those who have contributed data deemed “bad” lose their contribution fee.
  • The researcher suggests that including a smart contract in the process of deploying machine learning training algorithms could be more secure than a purely open-source system which does not include incentive mechanisms in the design.
  • He suggests that a small fee for contributing data combined with a reward mechanism for data that is validated as good can lead to improved ML algorithmic accuracy.
  • The researcher uses the Internet Movie Database (IMDb) review data set as the test set for assessing data and classifying sentiment.
  • The original framework was proposed in 2019 at an IEEE conference on Blockchain.
  • Perceptron: a linear machine learning algorithm used for binary classification (i.e. whether a film review is positive or negative). The model computes a weighted sum of input features and applies a threshold to separate inputs into two classes.
  • IMDB Sentiment Classification Data set: a set of 25,000 movie reviews labeled by sentiment (positive/negative); see the IMDB movie review sentiment classification dataset (keras.io)
  • Demo of Incentive Mechanism: Decentralized & Collaborative AI on Blockchain Setup + Demo - YouTube

Summary

Adding data to a model in the Decentralized & Collaborative AI on Blockchain framework consists of three steps: (1) the incentive mechanism, designed to encourage the contribution of “good” data, validates the transaction, for instance by requiring a “stake” or monetary deposit; (2) the data handler stores the data and metadata on the blockchain; (3) the machine learning model is updated.
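To make the flow concrete, here is a minimal Python sketch of those three steps. The class and method names, the deposit units, and the use of partial_fit are illustrative assumptions rather than the framework’s actual API; in the real framework the incentive mechanism and data handler live in smart contracts on the blockchain.

```python
# Illustrative sketch of the three-step "add data" flow described above.
# All class and method names, and the deposit units, are placeholders; the real
# framework implements the incentive mechanism and data handler in smart contracts.

import time


class IncentiveMechanism:
    """Step 1: validate the transaction, e.g. by requiring a monetary deposit ("stake")."""

    def __init__(self, required_deposit):
        self.required_deposit = required_deposit

    def validate(self, contributor, deposit):
        return deposit >= self.required_deposit


class DataHandler:
    """Step 2: store the data and metadata (contributor, deposit, time) for later auditing."""

    def __init__(self):
        self.records = []

    def store(self, features, label, contributor, deposit):
        self.records.append({
            "features": features,
            "label": label,
            "contributor": contributor,
            "deposit": deposit,
            "timestamp": time.time(),
        })


class CollaborativeModel:
    """Step 3: update the machine learning model with the new sample."""

    def __init__(self, model):
        self.model = model  # assumed to support incremental updates, e.g. partial_fit

    def update(self, features, label):
        self.model.partial_fit([features], [label])


def add_data(incentive, handler, collab_model, features, label, contributor, deposit):
    if not incentive.validate(contributor, deposit):       # (1) incentive mechanism
        raise ValueError("Deposit too small; contribution rejected.")
    handler.store(features, label, contributor, deposit)   # (2) data handler
    collab_model.update(features, label)                   # (3) model update
```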

Method

  • A perceptron (an algorithm used for supervised learning of binary classifiers) is used to train a model on IMDB data for sentiment classification; a minimal training sketch follows this list
  • Approximately 8,000 training samples are used in the first simulation and 33,000 total training samples are used in the second simulation
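
A rough sketch of that training setup, using the Keras IMDB dataset mentioned in the background and scikit-learn’s Perceptron. The vocabulary size, bag-of-words encoding, and exact sample counts below are my own assumptions for illustration, not the researcher’s actual configuration.

```python
# Sketch of training a perceptron for binary sentiment classification on the
# Keras IMDB review dataset. Vocabulary size, feature encoding, and sample
# counts are illustrative assumptions, not the researcher's exact setup.

import numpy as np
from keras.datasets import imdb
from sklearn.linear_model import Perceptron

VOCAB_SIZE = 1000  # assumed cap on vocabulary for a small bag-of-words encoding

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB_SIZE)


def to_bag_of_words(reviews, vocab_size=VOCAB_SIZE):
    """Encode each review (a list of word indices) as a binary bag-of-words vector."""
    features = np.zeros((len(reviews), vocab_size), dtype=np.float32)
    for i, review in enumerate(reviews):
        features[i, review] = 1.0
    return features


model = Perceptron()
classes = np.array([0, 1])  # negative / positive

# Initial training on roughly 8,000 samples (the first simulation's approximate size).
initial = 8000
model.partial_fit(to_bag_of_words(x_train[:initial]), y_train[:initial], classes=classes)
print("Held-out accuracy after initial training:",
      model.score(to_bag_of_words(x_test), y_test))

# Later contributions are folded in incrementally, mirroring how the deployed
# model is updated one submission at a time.
batch = slice(initial, initial + 100)
model.partial_fit(to_bag_of_words(x_train[batch]), y_train[batch])
```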

Results

  • The first simulation is conducted with the condition that users contributing data that is confirmed to be “bad” lose their submission fee.
    • Lost submission fees are split among the contributors of “good data”
    • The balance represents the starting funds each contributor pays into the pool in order to update the algorithm.
    • The initial decline in balances represents the accumulated fees being paid into the pool, while the uptick in the balances of good contributors is the accumulation of rewards, funded by bad submissions, paid out to users who submit good updates to the algorithm.
  • In this simulation, a contributor whose contribution is validated as “good” has their fee returned and a point added to their reputation score (a toy sketch of this bookkeeping follows this list).
  • This simulation shows an increase in the accuracy of the sentiment analysis over time.

  • The second simulation adds 25,000 more training samples to the data set
  • The second simulation operates under the premise that a nefarious actor is intentionally adding “bad” data to the data set, actively paying for the attack
  • Even under the condition that an attacker is willing to pay to inject “bad” data, the “good” data contributions offset the attacker and the accuracy of the ML algorithm does not drop.
  • The tests took place in August of 2020; the researcher estimated the cost of each update to the algorithm at roughly $0.40 USD.
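
To make the fee-and-reward bookkeeping in these results easier to follow, here is a toy sketch of the first simulation’s rules as summarized above: fees forfeited by “bad” submissions are split among “good” contributors, who also get their own fee back and a reputation point. The fee amount, the contributor names, and the settle-everything-in-one-round simplification are my own assumptions, not the framework’s actual accounting.

```python
# Toy bookkeeping for the first simulation's incentive rules, as summarized above:
# every contributor pays a submission fee; "bad" submissions forfeit the fee,
# which is split evenly among "good" contributors; "good" contributors get their
# fee back plus a reputation point. All names and amounts here are illustrative.

from collections import defaultdict

SUBMISSION_FEE = 1.0  # placeholder unit, not the real fee


def settle_round(judgements):
    """judgements: list of (contributor, is_good) pairs for one settlement period."""
    balances = defaultdict(float)
    reputation = defaultdict(int)

    # Everyone pays the fee up front (the initial decline in balances).
    for contributor, _ in judgements:
        balances[contributor] -= SUBMISSION_FEE

    good = [c for c, ok in judgements if ok]
    bad = [c for c, ok in judgements if not ok]

    # Good contributors get their fee back and a reputation point.
    for contributor in good:
        balances[contributor] += SUBMISSION_FEE
        reputation[contributor] += 1

    # Forfeited fees from bad submissions are split among good contributors.
    if good:
        reward_share = (len(bad) * SUBMISSION_FEE) / len(good)
        for contributor in good:
            balances[contributor] += reward_share

    return balances, reputation


balances, reputation = settle_round([
    ("alice", True), ("bob", True), ("mallory", False),
])
print(dict(balances))    # {'alice': 0.5, 'bob': 0.5, 'mallory': -1.0}
print(dict(reputation))  # {'alice': 1, 'bob': 1}
```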

Discussion and Key Takeaways

  • The researcher suggests that open-access machine learning on blockchains has great potential, with incentive mechanisms improving the efficiency of training algorithms
  • The researcher also suggests that the costs associated with updating the algorithms may dissuade nefarious attackers and may eventually stop attacks altogether once an attacker runs out of money to spend on a specific target.

Implications and Follow-ups

  • Based on a rough estimate of price change since the publication of the article, the current cost of updating the training algorithm is approximately $2.88 USD, assuming the researcher has not succeeded in decreasing the average cost of pushing an update (a back-of-the-envelope check follows this list)
  • The researchers address frequently asked questions about their publication at the following link: https://github.com/microsoft/0xDeCA10B/blob/main/README.md#faqconcerns
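
As a quick back-of-the-envelope check on that figure: assuming the on-chain cost of an update is unchanged and only the token’s USD price has moved, the estimate simply scales the original $0.40 figure by the implied price ratio. The ~7.2x ratio below is inferred from the two dollar amounts in this summary, not a quoted exchange rate.

```python
# Implied scaling behind the ~$2.88 estimate above, assuming the on-chain cost
# of an update is unchanged and only the token/USD price has moved.
original_cost_usd = 0.40      # researcher's August 2020 estimate per update
implied_price_ratio = 7.2     # inferred from 2.88 / 0.40, not a quoted rate
print(round(original_cost_usd * implied_price_ratio, 2))  # -> 2.88
```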

Applicability

  • Open-access machine learning algorithms may be beneficial for training AI assistants more quickly.
  • AI depends on machine learning algorithms, and more accessible algorithms would translate into AI that learns more quickly.
  • ML models depend on perpetual training, which is partly why the researcher looks for methods of making the long-term updating of a training algorithm as inexpensive as possible for contributors with a history of “good” data submissions.
9 Likes

Great read. Thank you for contributing.
I noticed that you specified that the inclusion of incentives will help the public have better access to data that would help develop better ML practices. Assuming the members of the community that are attempting to improve the public’s access to “good” data are all benevolent, the incentives concept seems to have a lot of promise. However, is there a chance that someone could maliciously provide data that is meant to seem “good” but is fundamentally incorrect?

If so, what would be some potential vetting mechanisms to ensure the data is valid for the purpose of furthering the efficacy of ML models?

If not, what aspect of this proposed system would eliminate faked “good” data?

I suppose this question is coming from the understanding that on some platforms bots can disproportionately affect voting and wrongly inflate the value of information.

3 Likes

Thanks for the questions!

The researcher actually ran a simulation in which they attempted to inject “bad” data. Ultimately, since the malicious actor did not receive a reward, the “good” actors inevitably outweigh the bad actor and the accuracy is not affected negatively in the long term.

The second simulation that is posted addresses that specific question. Effectively, a nefarious actor would have to have “unlimited funds” to outweigh a group of good actors. A nefarious actor can cause a slight dip in accuracy, but over time the good actors will receive their fees back and continue to contribute in contrast to the bad actors eventually running out of funds.

“Faked good data” is just “bad data”. There is no qualitative difference: a system designed to tell “good” from “bad” data cannot be “tricked” in this sense, because that very test is how it determines what counts as “bad data”.

“How would one determine what is good and what is bad?”

Consensus.

There will be a point at which a certain percentage of nodes agree on the information as being “good” because they can check it against previous data sets. When a model is being trained initially, that is the point at which “good” and “bad” will be determined. Anything entered into the system will either be “good” or “bad”. There is no “fake good” data. That’s simply “bad data”.

For example, if a node tried to inject data that was qualitatively identical to “good data”, but was coming from an “untrusted node”, the data would be classified as “bad data” because the node is not trustworthy, even though the “data” itself looked identical to “good data”. That’s still just “bad data”, due to the fact that it’s coming from a node that has been compromised.

In this context, “bots” would still be limited by the funds allowing them to update a database, and thus the injections of bad data would still be limited by the fund pool from which the malevolent actor is drawing their capital.

6 Likes

@Larry_Bates did you get any sense of how effective these incentive mechanisms are when compared to other ones that the industry uses? I know that for years the Internet Movie Database had a contest for improving its algorithms (and Alexa does a similar thing with its algorithm, only they focus on college teams to create the algorithms). Would a prize mechanism work for cleaning, organizing, and adding to databases? Or is it more effective to have a smart-contract-based vetting system in place?

2 Likes

Considering the niche demographic, I am not sure adding a prize incentive would necessarily increase the pool of contributors to the algorithm. In this case, I believe a vetting system would be more appropriate as a means of ensuring that no nefarious actors get into the system. The incentive mechanism here seems to serve as a means of keeping contributors, rather than getting them to start contributing.

As contributing data to a perceptron is not a casual activity, there is an expectation of some level of knowledge on the part of a contributor. Given that, there is nothing to indicate that people who contribute to open-source machine learning algorithms would be meaningfully more motivated by a prize than by the existing incentive mechanism. However, the data DOES show that the incentive mechanism can in fact prevent nefarious actors from continuing to contribute by taking their submission fees and giving them to good actors.

While it is called an “incentive mechanism”, it serves more to discourage nefarious activity than purely to incentivize contribution.

There is a positive reward AND, simultaneously, a negative punishment: a positive reward rewards someone by giving them something, whereas a negative punishment works by taking something away. You can have a “positive punishment”, which works by adding something unpleasant, and you can have a “negative reward”, which works by taking something unpleasant away. In this case, the mechanism employs a positive reward and a negative punishment.

3 Likes

If I am not mistaken, the model that is undergoing training is the same model that tests if the data works.

This is the quote from the paper that led me to this understanding.

Ongoing self-assessment: Participants effectively validate and pay each other for good data contributions. In such scenarios, an existing model already trained with some data is deployed. A contributor wishing to update the model submits data with features x, label y, and a deposit. After some predetermined time has passed, if the current model still agrees with the classification, then the contributor’s deposit is returned.

If this is the case, suppose someone submits a very large amount of false or adversarial data, tricking the model into treating their data as good and others’ as bad. The attacker would reap huge rewards at the cost of the effectiveness of the model and trust in the community.

Sounds like a huge risk? How does the author address this issue?

5 Likes

I think you are misunderstanding. The “training set” is closed and is not open to the public. This is how the algorithm knows what is “good” or “bad”. The scenario you’re talking about is not possible due to not having the training part “open to the public”. If the algorithm is corrupted during the “training” process, then it would have never been accurate in the first place. This is why the research author is explicit about the training phase coming before the algorithm is released into the wild.

Someone would not be able to submit “fake good data” that tricks the algorithm once it’s trained. If you try to elaborate on what constitutes “fake good data” you just end up with “bad data”. This is a problem created from circular logic arising from misinterpreting the training stage.

What you’re saying is not actually a “risk” and in fact they duplicated this very scenario in the tests. That is stage 2 of the tests. They did exactly what you’re saying. They actively injected a ton of “bad data” into the algorithm. That came AFTER the training phase. It briefly made the algorithm less accurate, but because an attacker does NOT have unlimited funds, eventually good data rebalances the algorithm.

The notion of “tricking the algorithm” in the way you presented is literally how they tried to attack the model. An “abstract attack” does not have a legitimate threat vector until it is made into a tangible line of attack. Your abstract framing was realized into the simulated attack. Again, the attacker does NOT have unlimited funds. Thus an attacker may be able to affect the algorithm in the short term (as it was shown) but the “good data” that is submitted over time outweighs the “bad” data, thus keeping the AVERAGE accuracy above 79.5%.

“already trained with some data is deployed” is a key part of the quote. Once “training” is closed, the algorithm has a consensus set that is “right”, which can be checked against and not corrupted. The “training” set stays closed while the algorithm learns based on new input. While the model “updates itself” based on new input, the actual original training model is not “changed” by the new data; it is only added upon. Thus “new bad data” can’t corrupt the training algorithm. That’s why it happens in stages.

There is no such thing as “fake good data”. That is just misinterpreting “bad data”. Fake data is bad regardless of whether it “looks good” or not. It’s just “bad data”. I think that construct is what is throwing you off.

If someone had an “infinite supply of funds” from which they were drawing to inject “bad data” into the algorithm, then most definitely that actor could ruin the algorithm. Fortunately, someone with an “infinite supply of funds” does not exist, which is why that “abstract threat” is not a “real threat.”

3 Likes
  1. The initial model needs to be accurate to begin with. If that’s the scenario, wouldn’t the model author easily be able to simulate and add more training data on his own to improve the robustness of the model?
  2. If the initial model is less accurate (a highly biased model), wouldn’t this actually incentivize people to upload more biased training data?
  3. Is the whole system designed to “increase” the accuracy of the model or “maintain” the accuracy of the model? A model which actually “learns” should be able to classify more “ambiguous” samples correctly over time. If initially the users are punished for providing ambiguous data, how do we improve the generalization of the model?
4 Likes

Thank you for the questions

  1. Under normal circumstances this is the case, but this is the point of making it an experiment about open-sourcing machine learning training.
  2. You have described an abstract concept without elaborating on what that means in this scenario. What would “highly biased” mean when the data is being pulled from a data set? How would you define that? Are you implying the researcher left data out?

What are you trying to define with “more biased training data”?

You have not given a tangible enough definition for this point to be more explored.

  3. It’s designed to maintain, and hopefully increase, accuracy, with the expectation that it can never attain 100% accuracy. There is no “ambiguous” in this model. There is only “good” or “bad”. There is no “ambiguous” data.

I think people are approaching the data like a human. Computers are not fooled by fake time-stamps or slightly modified data.

“Ambiguous” or “fake” is the same as “bad”.

3 Likes

Thank you for answering.

To add more context to point 2: a highly biased model means an under-trained model. For example, let’s say the model owner is only able to train with a limited amount of labeled data for his text classifier (he does not have access to any more data on his own); then the model will likely make poor predictions on new data (+/- sentiment). That same model will then be used to punish users, so they will contribute data/labels that match the wrong predicted label because they don’t want to be punished for going against what the classifier “thinks” is right. Over time the model will be fed more data with incorrect labels and will still think it is right. It will be a self-defeating model. Is the researcher exploring ways to make this task less exposed to the initial quality of the model? Or is this not an issue at all?

Example:

  1. Under-trained text classifier gets uploaded → 2. User contributes by supplying a piece of data and a label: {“data”: “the movie was ok”, “label”: “Negative”} → 3. Model says it should be “Positive”; in reality it should actually be negative → 4. User changes the label to “Positive” to avoid punishment → 5. Model keeps learning towards the wrong label over time

Point 3 about the ambiguous data is somewhat related to point 2. I understand there is no “fake data” and that ML models can never achieve 100%, but say I upload a piece of data and a label: {“data”: “this movie was really on another level”, “label”: “Negative”} and the model gives 51% (low probability) to Positive because it’s really hard for it to tell whether the text is sarcastic; then the user is forced to mark it as positive. Wouldn’t this hurt the actual use case (a generalized model) in the real world? Would it make sense if the framework could discard ingestion of data when its confidence in the provided label is below a certain threshold, thus maintaining the quality of the model over time? (A rough sketch of what I mean is below.)
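
To make the thresholding idea concrete, here is a rough sketch of the kind of gate I have in mind, written against a generic classifier that exposes predict_proba. This is purely illustrative; I am not claiming the framework works this way, and the threshold value is arbitrary.

```python
# Sketch of the confidence-threshold idea: if the current model is too unsure
# about a contributed sample, discard it rather than rewarding or punishing the
# contributor. Purely illustrative; not how the actual framework is implemented.

def triage_contribution(model, features, provided_label, threshold=0.75):
    """Return 'discard' when the model is too unsure to judge the provided label,
    otherwise 'accept' or 'reject' depending on whether the model agrees with it."""
    probabilities = model.predict_proba([features])[0]
    predicted_label = int(probabilities.argmax())
    if probabilities.max() < threshold:
        return "discard"  # too ambiguous to reward or punish (e.g. the 51% sarcasm case)
    return "accept" if predicted_label == provided_label else "reject"
```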

3 Likes

Thank you for taking the time to expand upon this subject. I am so glad that you did highlight this issue, because this is the difference between an abstract threat and a realized threat. What you just explained is in fact a real threat. This is not just a problem with machine learning, but any data analysis. The reason the recent Facebook study data sets were problematic is because Facebook did not provide researchers with all of the data that was available. This was likely an intentional attempt to weight the data towards a desired outcome instead of unbiased research.

An “Under-trained” model would in fact be a problem that would need to be addressed to ensure that an open-source ML algorithm would not have wildly inaccurate assessments. This is one of the reasons the researchers included the number of training sessions in their full explanation, but the maintained accuracy of the model in the results indicates the researchers did not undertrain the model. The fact that they actively injected bad data into the data set without corrupting the accuracy long-term is also an indication that their model was not under-trained.

In your final example, I think you have found a circular problem without knowing it. If a user is not “clear” about their feelings, it is ambiguous and thus there is no clear way to know if it is “positive” or “negative”. This data is effectively “bad” data so it’s likely going to be thrown out. There is a threshold established which qualifies “good” vs. “bad”, and I believe you are asking about the movement of that threshold in real-time relative to ambiguously stated “sentiments”. If I am understanding you correctly, then someone who is not clear about their sentiment cannot have their sentiment “accurately” assessed because it’s “bad data” and would likely be thrown out.

I believe I understand what you are meaning by “ambiguous” now, and in this case “ambiguous” is “bad” because it is not “clear”. If there is no clear way to establish “positive” or “negative” sentiment about something based on the “ambiguous” language, that is deemed “bad” data and either counted as “bad” or thrown out altogether.

If that is the understanding, I apologize for misunderstanding the usage of “ambiguous” earlier. I can see how you were trying to ask one question, and it looked like another, and I was slightly misinterpreting your statement. Your clarification was very useful to get to a salient point, and your initial questions were valid. I just wanted to make sure that was not lost in text, as I did not want to come off as aggressively asking you to define “ambiguous” in this context. I hope you can understand now why I was slightly misinterpreting your question, but even further this thread would make a real-time example as to why ambiguously framed data usually does not get counted without further elaboration.

For the record, the data sets analyzed were binary response so it was effectively “like” or “not like” with no room for misinterpretation. However, if the data sets were free-response then the problem you raised would be more relevant. IMDB data sets were used to train their algorithm in addition to being the data sets from which the open-source analysis would come. The reason the researchers chose IMDB specifically is because their sentiment analysis is binary and not free response.

2 Likes

Thanks for trying to answer my questions. I appreciate that you took it seriously. Your discussion with xiaotoshi is also insightful. Here’s my follow up:

–
It is claimed that when the training set is not public, an attack is not possible. That is not obvious to me. To my knowledge, access to the dataset is not strictly necessary for a successful attack.

The breakthrough that enables such attacks has been around since 2016.

The authors successfully attacked models hosted by Amazon and Google (without knowing what they were trained on), demonstrating how vulnerable seemingly powerful AIs can be.

As for the notion that there wouldn’t be enough funds, this is also addressed by another paper:

Our results are alarming: even on the state-of-the-art systems trained with massive parallel data (tens of millions), the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets (e.g., 0.006%).

This shows that data poisoning is achievable at a fraction of the whole pool.

Moreover, this protocol makes attacks easier not only by letting anyone deposit data, but also by providing a strong incentive for people to trick the model, since they could profit not only from the normal rate of reward but also from the fees of other contributors.

–

Adversarial machine learning attacks are not the only threat in this protocol.

Another problem emerges from the risk of over-fitting.

For those who might be reading this comment and don’t know what over-fitting is in the AI/ML/DL context, here’s an analogy to put it plainly:

You let the students (AI model) practice the same test bank (dataset) over and over again (train many rounds).

Humans gradually improve as they practice more. However, machines operate a little differently: it is observed that training too long on the same dataset leads them to draw wrong inferences.

Take this as an example: when the machine is asked what the color of the flower is (and shown a picture of a sunflower), it can correctly answer “yellow”.

Although on the surface it answered correctly that it was yellow, explainable-AI analysis found that the machine just looked for the word “flower” and answered “yellow”, without taking the context of the question into account.

The model mislearns when it is trained on similar data over and over.

So if the reward is based on whether the model deems the data correct, then the rational thing to do for someone seeking minimal cost and maximum returns is to submit the same type of data repeatedly with only small modifications.

That would be unideal. This is exactly why high-quality data is so important to performance. Yet the protocol goes for quantity over quality.

4 Likes

Your observations about over-fitting and data poisoning are spot-on. The notion of poisoning data is a real threat that is taken seriously. In that regard, one of the most practical and widely implemented solutions to this problem is to have a machine learning/human hybrid quality-control step at some point so that the ML is not completely automated. Additionally, the choice between “supervised” and “unsupervised” machine learning determines how viable “poisoning” is for an attacker seeking to skew the data analysis or the model’s fit.

As a supervised ML algorithm will have its classifications and levels mostly set from the beginning, the issue of overfitting is not as much of a problem. Additionally, it’s easier to isolate false positives in the data when the classification levels are already defined. Conversely, unsupervised algorithms would be harder to manipulate in real-time in that it’s not entirely possible to know what classes have emerged unless one has access to both the data sets and the results of the algorithm’s output.

As the field of “Artificial Intelligence” emerged as a subset of computer science, “Hybrid Intelligence” looks to be emerging as a subset of the AI field.

Also, the “reward system” comes on a delayed schedule so that data can be analyzed for quality before payments are sent out. This means an attacker would not know for some time whether their attack was successful, which makes it more difficult to launch a “successful” attack. The examples you posted don’t seem to have this monetary limitation on them, which makes it possible for an attacker to spam a significant amount of data in a short period of time, compared to a pool that is limited by submission fees.

The last scenario you presented would not really be profitable, or likely even “feasible”, for an attacker because of the cost as a barrier to entry. Effectively, if an attacker decides to spam this type of pool with “bad data” made to look like “good data” so they can get a reward, the cost of the attack starts to outweigh the benefit of the reward the closer the share of “poisoned” data gets to 51% of the data set. That is to say, even if in the worst case an attacker had to submit 51% of the data set to corrupt it and capture the largest reward, the remaining 49% of the pool still receives its share of the distributed rewards. The proportion of data contributed is not equivalent to the proportion of the reward, and the number of contributors to the pool is also a factor, which is why contributing 51% of the data won’t result in receiving 51% of the rewards. In this scenario, an upper limit exists on what percentage of a pool can be taken through malicious attacks before the attacks become unprofitable or even cause the attacker to incur a loss.

While an algorithm with no monetary fee may be able to be poisoned with a small amount of data, the same experiments would need to be run on ML algorithms that have fees associated with training. In this case, the experiment shows that the fees eventually stop the attackers from continuing to contribute. However, there obviously need to be more exhaustive attack scenarios explored before any definitive statements can be made about the benefits of this approach.

While there may be “some” incentive to try to game the system, there is not actually a “strong” incentive, due to the nature of the pool’s distribution mechanism and the fact that payments are not winner-takes-all. That said, where the lines fall between “little”, “some”, and “strong” incentive is something future research needs to determine.

3 Likes

The key point of this research seems to be the relationship of incentives/rewards/punishments to maintaining algorithmic accuracy. That makes sense to me.

I can see why “a small fee for contributing data combined with a reward mechanism for data that is validated as good” can lead to improved ML algorithmic accuracy.

However, the other point made in the research is about security, and that one seems less obvious to me.

Why does “including a smart contract” make “deploying machine learning training algorithms” more secure than “a purely open-source system which does not include incentive mechanisms in the design”? Where does the enhanced security come from?

3 Likes

It is not inherently a “smart contract” but a “smart contract with fees associated with it” that effectively limits an individual’s capacity to update a training algorithm. At the time of publication, it was roughly $0.40 per update. In the case of an open-source machine learning algorithm that has no smart contract with fees associated with updating it, a nefarious attacker could run as many updates/attacks as time allows. This makes a “fee-based update system” inherently more secure than a system which does not require fees to make updates.

The smart contract itself is not arbitrary; it is part of the mechanism managing fee acceptance and redistribution. In this case, it’s not the “smart contract” per se, but the “rules by which the contract is operating” that charge a fee to update the learning algorithm and then reward users whose submitted data was determined to be “good”. The fees effectively limit an attacker’s capacity: no attacker has “unlimited funds”, so in a fee-based system “unending attacks” cannot occur in the same manner in which they can occur in a fee-less open-source system.

I will give a further example: if the rules of the smart contract were poorly written (for example, an extremely short reward evaluation period, with rewards given purely for “data contribution” rather than “good data contribution”), the algorithm would likely skew towards attackers’ inaccurate noise data becoming the most influential.

Since the smart contract’s rules were written to reward “good” data with a long enough reward period to give the system time to validate if the data was in fact “good”, the results skewed towards maintaining an accurate model. The rules of the smart contract will ultimately skew the data in a direction, but in this case the rules skewed the data towards a net positive outcome concerning data accuracy.

2 Likes

Speaking of security, asking for an initial deposit makes a lot of sense to me. The current summary I am working on quoted another paper that uses something similar to prevent a DoS attack. They introduced an initial fee that would be returned only after the protocol was successfully launched. This discouraged malicious participants from trying to take over.

5 Likes

Hey, I’m the main researcher working on this. Good questions. The words “secure” and “security” are very special words with deep meanings to me. I use them very carefully because they encompass many things. Unless my Ctrl+F is failing me, I don’t see them used in this article, and they’re not used in the original paper either. I do see the term used in the blog post (emphasis mine):

Leveraging blockchain technology allows us to do two things that are integral to the success of the framework: offer participants a level of trust and security and reliably execute an incentive-based system to encourage participants to contribute data that will help improve a model’s performance.

Here the security is about transparency. Open-source code is great but it’s not sufficient. For example, even if the back end code for this website was open-source, I don’t know what really happens when I enter my comment into this box because I can’t verify that the code running on back end machines is the same as what is in the open-source code that I read. Smart contracts offer a stronger guarantee that the code we see is the code that runs.

5 Likes

@juharris That’s a good answer, Justin. Not knowing what code is “actually running” on servers, and the fact that smart contracts make it “more knowable,” are huge points.

I wrote my question a month ago, and I don’t frankly remember where I was quoting from, but it must have included the idea of “security” or I wouldn’t have quoted it that way. It may have been in the blog post which you cited.

Thanks very much for your response.

Edit: Actually, I just looked, and the language I quoted was in fact from this article under your byline, which I took not as a “blog post” but as the original research article being summarized above. That doesn’t invalidate anything you say in your response, but for accuracy’s sake I wanted to note it.

2 Likes

For the record, I summarized the recent blog post which summarized the previous articles and works that were published into a more concise explanation of the experiment. The original work from the IEEE conference was referenced in the background section:

" The original framework was proposed in 2019 at an IEEE conference on Blockchain"

No problem. The word “security” was used in some comments.

1 Like