A Survey on Ethereum Systems Security: Vulnerabilities, Attacks, and Defenses


This was the first holistic survey of Ethereum security. The researchers identified 44 vulnerabilities, 26 attacks, and 47 proposed defense mechanisms. Key takeaways: the application layer is especially vulnerable, given the size of the losses incurred there; smart contract developers need better, more secure languages and tools; and existing studies on DApps ignore the frontend and the interactions between the backend and frontend.


  • Chen, Huashan and Pendleton, Marcus and Njilla, Laurent and Xu, Shouhuai. 2020. A Survey on Ethereum Systems Security: Vulnerabilities, Attacks, and Defenses. In ACM Computing Surveys 53, 3, Article 67, DOI: https://doi.org/10.1145/3391195


Core Research Question

  • What are the vulnerabilities across the different layers of Ethereum, the attacks exploiting them, and possible defense mechanisms?


  • Ethereum has a 4-layer architecture. On top is the application layer, comprising user accounts, smart contracts, and the EVM (the Ethereum Virtual Machine). Directly below is the data layer, which stores the state of the blockchain (mined transactions, blocks, and events). Next, the consensus layer defines what the current state should be. At the bottom of the stack is the network layer, comprising the P2P protocol that nodes abide by when communicating with one another.
  • An external environment, in turn, interfaces and supports different parts of each layer. Environment-related components include internet infrastructure (e.g., supports the network layer), a database (e.g., supports the data layer), cryptography libraries (e.g., supports the consensus layer), and UIs (e.g., supports the application layer).


  • The paper presents a survey identifying: a) 44 vulnerabilities split across the 4 layers of the Ethereum architecture and its supporting environment; b) how these vulnerabilities can lead to 26 different attacks; and c) 47 defense mechanisms that could be used to prevent these attacks from happening.


  • Survey


  • A catalog of 44 vulnerability types and their root causes—see Fig. 1. Vulnerability types are organized in terms of where they occur, their causes, and their status. The latter signals whether a vulnerability type is eliminated, i.e., it no longer applies (shown with a filled square in the figure), whether it can be eliminated by a best practice (half-filled square), or whether it is yet to be eliminated (unfilled square). According to the survey, most vulnerabilities were found in the application layer (26), generally associated with issues in smart contract programming, including the underlying programming language and toolchain; the consensus, network, and environment layers had fewer known vulnerability types, but similarly distributed (between 5 and 6); the outlier was the data layer, where only 2 vulnerability types were identified—all with “eliminated” status.
  • A catalog of 26 attacks exploiting the different vulnerability types—see Fig. 2.
  • Identification of 47 proposed defense mechanisms, categorized into proactive (attempting to prevent an attack prior to contract deployment) and reactive (dealing with an attack after contract deployment). Since most of the identified vulnerability types concern the application layer, specifically smart contracts, most of the proactive defense mechanisms are geared towards smart contract development. This includes following guiding principles (such as those in Fig. 3), using smart contract languages with better security guarantees (e.g., Vyper), using automated tools to aid vulnerability identification in the code (e.g., Mythril), and applying formal verification (e.g., using the K framework). Examples of reactive approaches include pausing a contract once an attack is identified, as well as using proxy contracts, which facilitate upgrades (and hence bug fixing).
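To make the proxy idea concrete, here is a minimal Python sketch (all class names are hypothetical, and ordinary Python attribute forwarding loosely stands in for Solidity's delegatecall-based fallback): the proxy keeps a stable entry point while the implementation behind it can be swapped, which is what makes bug fixing possible after deployment.

```python
# Hypothetical sketch of the proxy-upgrade pattern; not real Ethereum APIs.

class ImplV1:
    def version(self):
        return "v1"  # imagine this implementation contains a bug

class ImplV2:
    def version(self):
        return "v2"  # the fixed implementation

class Proxy:
    """Stable entry point that forwards calls to the current implementation."""
    def __init__(self, impl):
        self._impl = impl

    def upgrade(self, new_impl):
        # in a real proxy contract this would be an access-controlled update
        # of the implementation address
        self._impl = new_impl

    def __getattr__(self, name):
        # forward unknown attribute lookups to the implementation, loosely
        # analogous to a delegatecall-based fallback function
        return getattr(self._impl, name)

proxy = Proxy(ImplV1())
print(proxy.version())   # v1
proxy.upgrade(ImplV2())
print(proxy.version())   # v2: callers keep the same "address" (proxy object)
```

The design point is that users always interact with the proxy, so fixing a bug only requires pointing the proxy at a new implementation rather than migrating every caller.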

Figure 1. Vulnerability types and root causes

Figure 2. Attacks exploiting different vulnerability types and the consequences of such attacks

Figure 3. Guiding principles and best practices for writing secure smart contracts

Discussion & Key Takeaways

  • Application layer attacks caused the largest financial losses thus far; hence, special attention should be granted to that layer.
  • Ethereum developers need better languages to develop smart contracts, with better security guarantees.
  • Ethereum developers need more secure tools to write smart contracts.
  • Existing studies largely focus on securing smart contracts (a DApp’s backend), ignoring the DApp’s frontend and interactions between the backend and frontend.


  • Provide practitioners (e.g., auditors and smart contract developers) with a holistic overview of the security issues in Ethereum, their root causes, and potential defenses.
  • Point to specific tooling needs, such as better programming languages with enhanced security guarantees.

Thank you for this interesting post! Did the authors also discuss the specific motivations and issues they encountered when designing their taxonomy?
In some cases it seems hard to me to properly track down one specific root cause of a vulnerability, e.g., when distinguishing between ‘Smart contract programming’ and ‘Solidity language and toolchain’. Already when talking about reentrancy, it seems to me that this issue could be argued to be rooted in different causes: one could argue that it is a programming problem, but also partly an issue of the Solidity language not properly accounting for the ‘reactive’ nature of smart contracts (e.g., by offering locking functionalities, concurrent data structures, etc.). One could even argue that it is an Ethereum design issue (of the Ethereum Virtual Machine) not to distinguish between value transfer and contract invocation.
What do you think?


Hi @clara.schneidewind. Thanks for your insightful remarks :)
Here are my thoughts.

In terms of the taxonomy, the authors did not provide insights into the chosen vulnerability names. Rather, they linked each vulnerability to the reference that first reported the issue. The names, at least those in the application layer, do seem to match the names as adopted or known in the community (for instance, take a look at https://swcregistry.io/).

With regards to the root cause, well, I get where you are headed and I do see valid points in your argument. We would have to discuss each vulnerability case by case. For the time being, let’s focus on the reentrancy example.

In my opinion, the authors got the reentrancy root cause right. To your point, when reentrancy first appeared, the developers of the hacked contract probably did not understand the full semantics of the language; hence, they did not know of the interactions that could lead to the disastrous DAO hack. Another person, however, did understand the language semantics and the underlying contract interactions well enough to hack the contract. Strictly speaking, the buggy contract behaved exactly as it should have. Hence, the root cause of the bug was an error in logic; the underlying execution model and the language are not to blame (they worked as expected).

Still, we cannot ignore the issues with Solidity. The way I see it, Solidity exposes a syntax very similar to well-known imperative programming languages, but its semantics differs substantially from the languages it borrows its syntax from (e.g., C++ and JS). My theory is that inexperienced blockchain developers assume certain semantics in Solidity to hold in the same way they do in other languages resembling Solidity’s syntax. These assumptions, however, easily break in the context of Ethereum, leading to contracts that expose vulnerabilities and are prone to attacks. To avoid this, smart contract programming languages should force developers to think in terms of the underlying execution environment, providing syntactic constructs that directly capture it.
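To illustrate the pattern we are discussing, here is a minimal Python simulation (the bank and attacker classes are illustrative stand-ins, not real Ethereum APIs): the vulnerable version performs the external call before updating state, so the attacker's receive hook can re-enter withdraw and drain more than its balance; the safe version follows the checks-effects-interactions ordering.

```python
# Minimal simulation of reentrancy; all names here are hypothetical.

class VulnerableBank:
    """DAO-style bug: sends funds BEFORE zeroing the balance."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, account):
        amount = self.balances.get(account.address, 0)
        if amount > 0:
            account.receive(self, amount)        # external call first...
            self.balances[account.address] = 0   # ...state update last

class SafeBank(VulnerableBank):
    """Checks-effects-interactions: update state BEFORE the external call."""
    def withdraw(self, account):
        amount = self.balances.get(account.address, 0)
        if amount > 0:
            self.balances[account.address] = 0   # effect first
            account.receive(self, amount)        # interaction last

class Attacker:
    def __init__(self, address, max_reentries=2):
        self.address = address
        self.drained = 0
        self.reentries = max_reentries

    def receive(self, bank, amount):
        self.drained += amount
        if self.reentries > 0:      # re-enter withdraw() mid-call
            self.reentries -= 1
            bank.withdraw(self)

attacker = Attacker("0xBAD")
vulnerable = VulnerableBank({"0xBAD": 100})
vulnerable.withdraw(attacker)
print(attacker.drained)  # 300: the 100 deposit was paid out three times

attacker2 = Attacker("0xBAD")
safe = SafeBank({"0xBAD": 100})
safe.withdraw(attacker2)
print(attacker2.drained)  # 100: re-entrant calls see a zeroed balance
```

Note how the buggy version does exactly what its code says; the mismatch is between the code and the developer's intent, which is why I'd still place the root cause at the programming level.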

Please let me know of your thoughts.



thanks for your thoughts on that! I can totally follow your line of argumentation.
Still, one could take the rather extreme standpoint and claim that, following this reasoning, there are actually no bugs at the Solidity level, but only non-standard features, since the semantics of Solidity is not (formally) specified, and hence could be considered to be defined only by its compilation to EVM bytecode.
So technically each Solidity contract behaves as it should, and any disagreement between what the programmer wanted to achieve and the actual outcome is always due to their lacking knowledge about the compilation ;)

Of course, I acknowledge that this position is pretty extreme and that one can argue that the Solidity documentation provides some form of partial specification, so that at least when the compilation clearly deviates from the behaviour described there, it can be considered a language bug. But that’s still somewhat shaky ground (in particular considering that the documentation is changing quite often and usually proceeds by examples). It would be really nice to eventually have some proper Solidity language specification. Then one could maybe also more precisely define a taxonomy of root causes along the following lines:

  • Smart contract programming bug: mismatch between the intended program logic and the program logic according to the (Solidity) language specification
  • Solidity bug: mismatch between the Solidity language specification and the EVM bytecode semantics of the compiled contract

Actually, one could also think about whether ‘Ethereum design and implementation’ bugs should be characterised by making a similar distinction between real design bugs (mismatches between the consensus protocol and the desired theoretical properties) and implementation bugs (mismatches between the consensus protocol and its implementation). For the usability aspect, things might still not be so clear though… What do you think?


Right on! Your suggestion would certainly make the authors’ classification more objective. But, as you pointed out, it relies on research material that is currently non-existent (e.g., a formal spec of Solidity).

For the “Ethereum design and implementation” related issues (data layer, consensus, and network), it might not be clear-cut either, as not all parts of Ethereum are formalized, and some probably never will be (e.g., the Ethereum Go client has a big code base, with over 350 KLOC). So, not everything can ultimately be classified objectively.

Your concerns do have validity, though. In summary, it seems fair to point out that the authors did not properly document: a) the rationale for name choices; b) the rationale for selecting their vulnerability+attack corpus; c) the rationale behind the identification of the root cause of each vulnerability. As you pointed out, the latter appears to contain some level of subjectivity. Would you agree?


Sorry for the late reply, I would totally agree :)


Since the publication date, there have been several new vulnerabilities found in the wild, such as flash loan and composability exploits. While the authors could certainly revisit and update the study, that alone may not prevent new vulnerabilities.

Are there any takeaways in regards to reducing exposure to vulnerabilities that have not yet been discovered?


I personally think any holistic approach to a system’s security taxonomy (at least in the context of cryptonetworks) must be constrained to foundational layers (perhaps the data, consensus, and network layers, as the authors put it).

Breaching into an application layer may open a can of worms precisely due to the extremes that @clara.schneidewind pointed out.

Given some of the intrinsic drawbacks of the EVM, it’s easy to write a user-facing language that mishandles things like abstraction, dispatch, and other mission-critical elements. Faults in a compiler may entail catastrophic consequences from the perspective of the users of the application, whereas strictly from an EVM perspective, the application was executed exactly as instructed.

This was a huge point of contention in the aftermath of the DAO hack, as debates turned philosophical around which component of the system was at fault. Something to think about in the context of a risk taxonomy: do taxonomies assign blame?

Anyways… great review of precedents and great summary @lnrdpss


On the topic of prevention, one could take the approach of exclusively interacting with contracts that have been extensively audited and verified through formal methods.

The obvious caveat is that this approach is not very practical, and formal verification is not a panacea. This blog post is one of my favorite articles on this topic as it synthesizes some of the existing limitations very nicely.

From the key takeaways section:

  • Smart contract vulnerabilities are more like vulnerabilities in other systems than the literature would suggest.
  • A large portion (about 78%) of the most important flaws (those with severe consequences that are also easy to exploit) could probably be detected using automated static or dynamic analysis tools.
  • On the other hand, almost 50% of findings are not likely to ever be found by any automated tools, even if the state-of-the-art advances significantly.
  • Finally, manually produced unit tests, even extensive ones, likely offer either weak or, at worst, no protection against the flaws an expert auditor can find.

This is a great summary; I always enjoy these types of infographics where all the information is laid out cleanly. It’s no surprise to me personally that most of the issues are ultimately at the application layer, as we see the same thing with the internet’s OSI stack; in other words, it’s usually not a problem of the technology itself but of how it is utilized in improper or naïve ways. Lower-level infrastructure usually tends to be more battle-hardened and ossified, while the (sometimes dangerous but exciting) innovation happens at the more flexible application layer.

I wonder how many of these issues, like reentrancy, could be solved by changes to Solidity itself to add safety protections against known and preventable issues, but I feel that at this point any breaking changes simply aren’t worth it. The risk of a network fork isn’t worth the additional ease of development, unfortunately. I think the best path we can collectively take now is to inform smart contract developers, as well as users, of all of these issues and provide as many resources on mitigation as possible. Is there a way we can make smart contract development more resilient against known implementation issues in a backwards-compatible manner? As more developers join the space, the same implementation mistakes will continue to be made.


This particular area is where the Avalanche team has made great strides, and the recent Snow/Snowball/Slush protocols establish new methods of making reentrancy less problematic.

Reentrancy and concurrency are big issues that really need to be addressed more sufficiently.


@cipherix and @Zach Thanks for your replies and welcome to the forum :)

On @Zach’s argument: changes to Solidity do not necessarily have to lead to changes in the EVM, and hence do not require a hard fork. I agree that educating users and programmers is definitely one direction, and a lot has been accomplished in the past two years; for instance, by cataloguing known issues and reporting them (I refer to some of these in the Notable Works section).

It is indeed possible to improve Solidity and the underlying compiler, and even to create new languages altogether, while remaining backwards compatible (at least from the EVM standpoint). In the first two cases (Solidity + compiler improvements), better static analysis could identify reentrancy cases, as well as integer overflow/underflow. This is not much different from what Mythril and Slither already do. Alternatively, these checks could be pushed to the code generator component of solc, so that it emits code that reverts if either condition occurs at runtime, with the downside of higher gas costs per transaction.
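As a rough sketch of that revert-at-runtime idea (checked_add, checked_sub, and Revert are hypothetical names simulating what compiler-generated EVM code could do, not any real toolchain API):

```python
# Simulated checked arithmetic over the EVM's 256-bit unsigned word range.

UINT256_MAX = 2**256 - 1

class Revert(Exception):
    """Stands in for the EVM REVERT opcode the compiler could emit."""

def checked_add(a, b):
    result = a + b
    if result > UINT256_MAX:
        raise Revert("integer overflow")   # revert instead of wrapping
    return result

def checked_sub(a, b):
    if b > a:
        raise Revert("integer underflow")  # revert instead of wrapping
    return a - b

print(checked_add(1, 2))       # 3
try:
    checked_sub(0, 1)          # would silently wrap to 2**256 - 1 unchecked
except Revert as exc:
    print(exc)                 # integer underflow
```

Incidentally, Solidity 0.8.0 later adopted exactly this behavior: arithmetic is checked by default and reverts on overflow/underflow, with `unchecked { ... }` blocks to opt out, at the cost of some extra gas.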

What I would argue is that we do need better languages, with constructs suited for a safe development of smart contracts. Solidity design was heavily influenced by other languages, whose semantics differ a lot (and new developers may assume them to be just the same - an error that can cost a lot of money).

A good research approach to foster the development of new EVM-compatible languages is to plug an EVM-based backend into known compiler frameworks, like LLVM. This has the potential to boost research on better smart contract languages, as researchers would not have to worry much about the code generation part, nor redo it every single time for a new language. In fact, this is the exact proposal of the EVM-LLVM project (https://etclabscore.github.io/evm-llvm-website/), currently in alpha stage.


@Barry Unfortunately, the authors do not provide key takeaways for reducing exposure to vulnerabilities that have not yet been discovered. They only provide defense approaches for the attacks they list in the paper.

@cipherix Your points make sense; however, they are not a guarantee even if followed religiously. For instance, it does make sense that one should only interact with contracts that have been audited. In practice, however, many audited contracts change from time to time, and these changes are often not audited. Tracking which dependencies have been audited and which have not is tricky and time-consuming, as it has a cascading effect (it certainly deserves proper tool support, which is lacking). Formal methods are an interesting approach, but they would only work if all the third-party contracts one interacts with are also formalized, which is not feasible in practice (or unrealistic to expect). Formal specs can easily fall out of sync with a constantly evolving code base; they also require a very heavy upfront investment in time and effort. Plus, formal specs, if done incorrectly or if found to contain mistakes, pose a threat of their own.

So, in summary, I think the way to reduce exposure to unknown vulnerabilities is to have a contingency plan, aided by extensive use of defensive programming techniques (e.g., safety guards in the code, such as emergency shutdowns and invariant checks) and runtime monitoring as a means to unveil suspicious actions. Some of these have been discussed in more detail in another thread: What constitutes a good test suite?. Let me know your thoughts :)
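For concreteness, here is a minimal Python sketch of those two safety guards (all names are hypothetical, with Python standing in for contract code): an owner-only emergency shutdown, plus an invariant check run after every state-changing call, which is the kind of condition a runtime monitor could also watch for.

```python
# Hypothetical circuit-breaker + invariant-check pattern; not real contract code.

class CircuitBreakerError(Exception):
    pass

class GuardedVault:
    def __init__(self, owner):
        self.owner = owner
        self.paused = False
        self.balances = {}
        self.total = 0

    def pause(self, caller):
        """Emergency shutdown: only the owner may flip the breaker."""
        if caller != self.owner:
            raise PermissionError("only owner may pause")
        self.paused = True

    def _require_active(self):
        if self.paused:
            raise CircuitBreakerError("contract is paused")

    def _check_invariant(self):
        # the per-account ledger must always sum to the tracked total;
        # a runtime monitor could flag any transaction violating this
        assert sum(self.balances.values()) == self.total

    def deposit(self, who, amount):
        self._require_active()
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount
        self._check_invariant()

vault = GuardedVault(owner="alice")
vault.deposit("bob", 50)
vault.pause("alice")       # suspicious activity detected: hit the breaker
try:
    vault.deposit("bob", 10)
except CircuitBreakerError as exc:
    print(exc)             # contract is paused
```

The point is not that these guards prevent unknown bugs, but that they bound the damage while a fix or investigation is underway.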