Research
5 min read

Improving Solana's Fee Markets: A New Approach to Solving the Fee Estimation Problem

Written by
Eclipse Labs
Published on
February 28, 2025

In a preceding article, we explored the SVM’s fee markets and how they work. The key takeaway from that article is that local fee markets are necessary for both UX and scalability. In this article, we will highlight the first of our new proposals to improve Solana and the SVM’s fee markets.

Thanks to @PGarimidi, @0xNagu, @0xtaetaehoho, @liamvovk, @odesenfans, @convoluted_code, Tao Stones, and Alex Petrosyan for help refining this.

Key Insights

  1. Solana’s fee markets are currently broken in many ways. The most telling evidence is the lack of correlation between fees paid and the probability of inclusion.
  2. The fee markets have two significant problems: an implementation bug and a structural issue. The structural issue is a well-established problem that First Price Auction (FPA) Transaction Fee Mechanisms (TFMs) face: the “fee estimation” problem.
  3. The fee estimation problem does not imply that FPAs or bidding-based TFMs are broken; in fact, the only known mechanisms that work in the permissionless setting are FPAs with a “restricted bid space”[1].
  4. Restricting the bid space by setting a protocol-enforced base fee à la EIP-1559 can unnecessarily increase wait times and result in underutilized blocks.
  5. TFMs need a target resource limit that is different from the maximum resource limit; it is (likely) impossible to adjust fees reliably during sustained high demand without distinguishing between the two.

Solana’s Fee Market Problems

Solana’s fee markets are currently broken in several ways (you can say they have multidimensional problems :) ). These problems are evident in the transaction landing and fee-paying experience. As you can see from the data in Figure 1, the fees paid to perform USDC transfers (an uncontended transaction) vary wildly, with p90 fees being two orders of magnitude higher than p10 fees. This data suggests that many users are paying significantly more than necessary (the lowest price required for inclusion).

Fig 1: Cost of a USDC transfer (P10, Median, and P90) | Source

Users bidding for contended accounts have it significantly worse, as the contention on the accounts they want to access makes it even harder to estimate the correct fees to pay.

Short description of the problem by Nick Pennie of Helius | Source

Two problems are primarily responsible for this behavior: implementation bug(s) in the Agave client and a fundamental flaw in the SVM’s TFM—the fee estimation problem.

The Implementation Bug(s)

Solana’s fee markets are currently non-deterministic because transactions are not ordered by priority until they reach the scheduler. Without getting too deep into the weeds of how Agave works, a transaction passes through multiple stages between reaching the leader and having its execution results broadcast.

Fig 2: Agave pipeline

In the packet ingress and SigVerify stages, there is no priority ordering of transactions, so they are processed FIFO. When demand is high, Ingress and SigVerify are overwhelmed and cannot process all transactions in time. The scheduler can order transactions by priority, but it currently only executes transactions that have already cleared Ingress and SigVerify (which do not order by priority). This inevitably leads to the late processing of high-value transactions. Additionally, random load shedding occurs when the Ingress and SigVerify queues are full.

Ideally, the pipeline should be fast enough to handle the load and, when overwhelmed, should drop the lowest-value transactions first, but that is not the case for now. Fortunately, these are well-known problems (they have been under discussion since January 2024).

It’s important to clarify that we (and many Solana core contributors) disagree with addressing this problem by raising fees[4]. We have two primary reasons:

  1. Spammers will stop if they have no economic incentive to continue.
    Except for an entity that wants to attack the network (in which case, only the attacker should be dealt with), there is no reason for anyone to spam transactions if the act does not improve their probability of inclusion. If we fix the bugs, spam should significantly reduce.

  2. 50k TPS is child’s play.
    The SVM is being designed to handle 1M TPS, and 50k TPS is a mere 5% of the minimum pre-scheduling load required for 1M TPS. Handling 50k TPS gracefully should not require economic disincentivization.

Other (minor) issues with Agave, such as slow scheduling, contribute to this behavior, but the good news is that Solana's core teams are aware of these issues and are working to address them. That brings us to the more insidious problem: a structural issue that all FPA TFMs face, the fee estimation problem.

The “Fee Estimation” Problem

Solana’s fee markets are vanilla FPAs; users bid what they want to pay for a transaction, and validators (usually) order transactions by price. The problem with vanilla FPAs is that users do not know what to pay. Tim Roughgarden’s paper (Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559) contains a beautiful analogy[3] for why FPA TFMs are broken:

Shopping on Amazon is a lot easier than buying a house in a competitive real estate market. On Amazon, there’s no need to be strategic or second-guess yourself; you’re either willing to pay the listed price for the listed product, or you’re not. The outcome is economically efficient in that every user who buys a product has a higher willingness to pay for it than every user who doesn’t buy the product. When pursuing a house and competing with other potential buyers, you must think carefully about what price to offer to the seller. And no matter how smart you are, you might regret your offer in hindsight—either because you underbid and were outbid at a price you would have been willing to pay, or because you overbid and paid more than you needed to. The house need not be sold to the potential buyer willing to pay the most (if that buyer shades their bid too aggressively), which is a loss in economic efficiency. Bidding in Ethereum’s first-price auctions is like buying a house. Estimating the optimal gas price for a transaction requires making educated guesses about the gas prices chosen for the competing transactions. From a user’s perspective, any bid may end up looking too high or too low in hindsight. From a societal perspective, lower-value transactions that bid aggressively may displace higher-value transactions that do not.

Said more formally, vanilla FPAs are not Incentive Compatible (IC). 

An IC TFM must satisfy three criteria:

  1. Myopic Block-producer Incentive Compatibility (MBIC): Block producers (a.k.a. validators/miners) should not profit by deviating from the prescribed mechanism or including fake transactions.

  2. Dominant Strategy Incentive Compatibility (DSIC) aka Straightforward Bidding: In stable conditions, the best bid (gas price) should be obvious, i.e., participants should not have to guess or game each other’s bids extensively for inclusion.

  3. Off-Chain Collusion-Proofness (OCA-Proof): Block producers and users should not be able to collude via off-chain side payments to strictly improve their collective utility relative to honest on-chain behavior.

A vanilla FPA requires users to guess how others bid, so it fails the second (user-level) IC criterion. On Solana, the local fee markets amplify this problem, as users must account for local and global contention when choosing a bid.

The ideal user experience would be a posted price mechanism where users are given a “take it or leave it” price for inclusion; unfortunately, demand often outstrips supply, and users need to be able to signal priority to block producers, so a posted price mechanism doesn’t work.

Fortunately, this is a solved problem; research suggests that in the blockchain setting, first-price auctions with exogenously restricted bid spaces are a (actually the only[2]) credible static[1] mechanism, i.e., they can (usually) provide a posted-price UX without conflicting with any of the desired properties of the TFM.

EIP-1559 implements an FPA with a restricted bid space, and we’ll consider it briefly to understand why it works before considering our proposal.

EIP-1559 And How It Solves The Fee Estimation Problem

Without spending too much time on it: at the core of EIP-1559’s design is a dynamic base fee that adjusts based on the gas usage of the previous block. This design solves the bidding problem because there is a price, known to all users, that (practically) guarantees inclusion. While users can (and, in fact, must) bid above this price, the EIP-1559 base fee removes much of the overhead of blind fee estimation.
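For reference, EIP-1559’s base fee update rule, which caps the per-block change at 12.5%, is:

$$b_{n+1} = b_n \times \left( 1 + \frac{1}{8} \times \frac{g_n - g_\tau}{g_\tau} \right)$$

where $b_n$ is the base fee of block $n$, $g_n$ is the gas used in block $n$, and $g_\tau$ is the target gas usage (half the block gas limit).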

In summary, the base fee is like a posted price under stable conditions, so there’s no need for fee estimation. 

“Stable condition” refers to the state of demand; in stable conditions, demand is stable or gradually rising (or falling).

The key learning from EIP-1559’s design and relative success is that FPAs and bidding-based TFMs are not fundamentally flawed; the problem is having an unrestricted bid space.

We’ve established that EIP-1559 is a working solution to the fee estimation problem, so why not just implement it on Solana and call it a day?

Why Not Just Use EIP-1559?

The first problem would be that Solana (unlike Ethereum) has local fee markets, so a globally updating base fee would unnecessarily punish users bidding for uncongested accounts. This can be resolved by implementing a per-account EIP-1559, but even that comes with two major problems that we will briefly discuss:

  1. EIP-1559 offers suboptimal UX
    A well-studied inefficiency of EIP-1559 is that aggressively raising the base fee leads to significant delays in inclusion and block space underutilization.

    In simple terms, when demand sharply rises, EIP-1559 raises the base fee too high, and users must wait for it to settle down. This means some blocks are virtually empty because the base fee has not adjusted to the market rate. While underutilization is a feature that helps Ethereum maintain its target utilization, it is undesirable on Solana.

    Additionally, research suggests that this inefficiency is worsened in multidimensional fee markets, which Solana is heading toward with proposals like SIMD-184, which proposes setting a writable account data limit, and SIMD-197, which adds a new resource for memory bandwidth usage.

  2. Block space underutilization
    In addition to underutilization due to an extremely high base fee, EIP-1559 also causes underutilization by setting the target resource utilization to half of the maximum; essentially, the mechanism optimizes for using half of the available block space.

    Again, while this makes sense on Ethereum, it is highly undesirable for a high-performance chain like Solana (and Ethereum L2s).

With all of this in mind, it is obvious why a new design is necessary, which brings us to:

SIMD-253: Restrict the TFM Bidspace

Based on the lessons of EIP-1559 and all the literature surrounding it, this proposal attempts to solve the fee estimation problem by introducing:

  • a target utilization amount and
  • a new RPC method to Solana: getPriorityFee.

Introducing Target Utilization (Defining Congestion)

Solana currently doesn’t differentiate between target and maximum resource utilization; they are the same (12M CUs per account and 48M CUs per block). While this aligns with the desire to maximize resource utilization, it results in poor fee market UX.

Using recent events as an example: during the $TRUMP and $MELANIA memecoin launches, resource utilization for both token accounts was around the maximum of 12M CUs. However, a large number of transactions could not make it on-chain. Unfortunately, the TFM could not tell whether blocks were full because demand equaled supply or because demand outstripped supply.

This led to a lousy landing experience for users, who, in response, increased their fees erratically (or not at all), making a bad experience even worse.

Additionally, the UX on Solana was worse than it would have been under an EIP-1559 TFM because Solana’s TFM cannot access “extra” capacity to alleviate a sudden spike in demand.

Improving on this necessitates introducing a target resource utilization that is less than the maximum. However, setting this value and defining how the rest of the TFM interacts with it is a complex task. The larger the slack (the difference between the target and maximum resource utilization), the better the performance during sustained high demand. However, setting the target utilization to half of the maximum utilization, like in EIP-1559, opposes Solana’s goal of maximizing resource utilization. If nodes can process double the load, then there’s room to “IBRL.”

With all of these things in mind, we propose setting the target utilization to 85% of the maximum utilization. It is high enough that most of the capacity is available at all times but low enough that the TFM can reliably detect when there is “over-demand.”
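Concretely, under the current limits this would put the per-block target at $0.85 \times 48\text{M} = 40.8\text{M}$ CUs and the per-account target at $0.85 \times 12\text{M} = 10.2\text{M}$ CUs, leaving 15% of capacity as slack for detecting and absorbing demand spikes.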

Note that our proposal treats the target utilization value differently from Ethereum, where usage is optimized to converge to the target value.

getPriorityFee

getPriorityFee is a new RPC method that takes a list of accounts and returns the recommended fee (compute unit cost in lamports/CU) that should be paid by a transaction that wants to lock all the accounts in the list. The overarching idea is that, like the base fee in EIP-1559, paying this fee will be enough to secure fast inclusion in stable conditions. And because this mechanism considers all the accounts to be locked by the transaction, it also tackles the local fee market problem mentioned earlier.

How The Core Mechanism Works

In this design, RPC nodes maintain an in-memory cache that maps congested accounts to an AccountData struct.

FeeCache<Pubkey, AccountData>

The AccountData struct contains:

  • the Exponential Moving Average (EMA) of the compute unit utilization of the associated account over the five most recent blocks,
  • the median fee paid by transactions locking the associated account in the most recent block n, and
  • the recommended fee that transactions wanting to access the account in the next block (n + 1) should pay.

struct AccountData {
    ema_of_cu_utilization_over_the_last_five_blocks: f64, // blocks n-4 through n
    median_fee_paid_in_block_n: u64, // lamports/CU
    recommended_priority_fee_to_access_account_in_block_n_plus_one: u64, // lamports/CU
}

The median fee (rather than the previous recommendation, i.e., the one for block n) is tracked because it allows the mechanism to detect changes in demand at a particular price point without setting a base fee.

An identical data structure is maintained for the global markets. 

struct GlobalData {
    ema_of_global_cu_utilization_over_the_last_five_blocks: f64, // blocks n-4 through n
    median_fee_global_in_block_n: u64, // lamports/CU
    recommended_priority_fee_global_in_block_n_plus_one: u64, // lamports/CU
}

When a user makes a getPriorityFee request to an RPC node, the node checks the accompanying list of accounts to see if any are in the cache:

  • If one or more accounts from the list are in the cache, the cache returns the recommended priority fee for the most congested one.
  • If none of the accounts are in the cache, it returns the recommended global fee.
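To illustrate the lookup, here is a minimal Rust sketch assuming the cache types above. The Pubkey alias, the global_recommended_fee field, and the method name are illustrative stand-ins rather than part of the SIMD, and “most congested” is proxied by the highest recommended fee:

use std::collections::HashMap;

// Stand-in for solana_sdk::pubkey::Pubkey: a 32-byte account address.
type Pubkey = [u8; 32];

struct AccountData {
    ema_of_cu_utilization_over_the_last_five_blocks: f64,
    median_fee_paid_in_block_n: u64, // lamports/CU
    recommended_priority_fee_to_access_account_in_block_n_plus_one: u64, // lamports/CU
}

struct FeeCache {
    accounts: HashMap<Pubkey, AccountData>,
    global_recommended_fee: u64, // from GlobalData, lamports/CU
}

impl FeeCache {
    // Serve a getPriorityFee request: return the recommendation for the most
    // congested cached account in the list, or the global recommendation if
    // none of the accounts are cached.
    fn get_priority_fee(&self, accounts: &[Pubkey]) -> u64 {
        accounts
            .iter()
            .filter_map(|key| self.accounts.get(key))
            .map(|data| data.recommended_priority_fee_to_access_account_in_block_n_plus_one)
            .max()
            .unwrap_or(self.global_recommended_fee)
    }
}

fn main() {
    let mut cache = FeeCache { accounts: HashMap::new(), global_recommended_fee: 100 };
    let hot_account = [1u8; 32];
    cache.accounts.insert(hot_account, AccountData {
        ema_of_cu_utilization_over_the_last_five_blocks: 11_000_000.0,
        median_fee_paid_in_block_n: 5_000,
        recommended_priority_fee_to_access_account_in_block_n_plus_one: 6_000,
    });
    // One congested account in the list: its recommendation wins over the global fee.
    assert_eq!(cache.get_priority_fee(&[[0u8; 32], hot_account]), 6_000);
}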

How The Recommended Priority Fee Is Calculated

The principle for calculating the recommended fee is simple:

  • If the account’s CU utilization EMA is less than the target utilization, reduce the recommended fee in proportion to the difference (down to the global recommended fee).
  • If the account’s CU utilization EMA exceeds the target utilization, increase the recommended fee in proportion to the difference.

Mathematically,

Let $n$ be the most recent block seen by the node (such that the next block is $o$).

Let $f_n^g$ be the global median fee in block $n$.

Let $f_o^g$ be the recommended global fee for block $o$, where $o = n + 1$.

Let $\mu_\lambda^g$ be the EMA of global compute unit utilization over the last five blocks.

Let $\mu_\tau^g$ be the target per-block compute unit utilization.

Let $\theta$ be a sensitivity parameter.

The recommended global fee for block $o$ is determined by:

$$f_o^g = f_n^g \times \exp\left( \theta \left( \frac{\mu_\lambda^g}{\mu_\tau^g} - 1 \right) \right)$$

The exponent is positive when the utilization EMA exceeds the target ($\mu_\lambda^g > \mu_\tau^g$), increasing the fee, and negative when utilization falls short of the target ($\mu_\lambda^g < \mu_\tau^g$), decreasing it.

Per-account fees are slightly more involved because they must also consider global fees. Regardless of the contention on any given account, if the recommended fee for that account is lower than the global recommended fee, then the DSIC move is to pay the global recommended fee, hence the max term in the relation below:

Let $n$ be the most recent block seen by the node (such that the next block is $o$).

Let $f_n^{\alpha}$ be the median fee for account $\alpha$ in block $n$.

Let $f_o^{\alpha}$ be the recommended per account fee for account $\alpha$ in block $o$, where $o = n + 1$.

Let $\mu_{\lambda}^{\alpha}$ be the EMA of compute unit utilization of an account $\alpha$ over the last five blocks.

Let $\mu_{\tau}^{\alpha}$ be the target per account compute unit utilization.

Let $\theta$ be a sensitivity parameter.

The recommended per-account fee for block $o$ is determined by:

$$f_o^{\alpha} = \max \left\{ f_n^{\alpha} \times \exp \left( \theta \left( \frac{\mu_{\lambda}^{\alpha}}{\mu_{\tau}^{\alpha}} - 1 \right) \right), f_o^g \right\}$$

As in the global case, the exponential term raises the fee when $\mu_{\lambda}^{\alpha} > \mu_{\tau}^{\alpha}$ and lowers it when $\mu_{\lambda}^{\alpha} < \mu_{\tau}^{\alpha}$; taking the maximum with $f_o^g$ ensures the per-account recommendation never falls below the global one.

While the equations look complex, the underlying principle is simple: the recommended fee for the next block is based on the median fee for the most recent block and the difference between target and actual utilization (an estimate of how much block space is available). If utilization deviates from the target, fees are adjusted exponentially higher or lower based on the difference; the larger the variance, the harsher the update.
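As a concrete illustration, here is a minimal Rust sketch of this controller. The value of THETA is a placeholder, since the proposal treats the sensitivity parameter as tunable:

// Sensitivity of the exponential controller; a placeholder value.
const THETA: f64 = 1.0;

// Recommended global fee for block n+1 from the block-n median fee
// (lamports/CU), the five-block utilization EMA, and the target utilization.
fn recommended_global_fee(median_fee_n: f64, ema_utilization: f64, target_utilization: f64) -> f64 {
    median_fee_n * (THETA * (ema_utilization / target_utilization - 1.0)).exp()
}

// Recommended per-account fee for block n+1: the exponential update on the
// account's block-n median fee, floored at the global recommendation.
fn recommended_account_fee(
    account_median_fee_n: f64,
    account_ema_utilization: f64,
    account_target_utilization: f64,
    global_fee_next: f64,
) -> f64 {
    let updated = account_median_fee_n
        * (THETA * (account_ema_utilization / account_target_utilization - 1.0)).exp();
    updated.max(global_fee_next)
}

fn main() {
    // A utilization EMA 10% above target multiplies the fee by e^0.1, about 1.105.
    let fee = recommended_global_fee(1_000.0, 1.1, 1.0);
    println!("recommended global fee: {fee:.0} lamports/CU"); // about 1105

    // A below-target account gets a lower fee, but the global recommendation
    // acts as a floor.
    let account_fee = recommended_account_fee(500.0, 0.9, 1.0, fee);
    assert_eq!(account_fee, fee);
}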

An exponential controller was chosen for two reasons[5]:

  1. Given that there is no desire to impose additional costs on users unless they bid for congested accounts, the slack (difference between target and maximum utilization) must be as small as possible. Because of the small slack, aggressive responses are crucial even if they cause some loss in efficiency. And because the proposed TFM does not enforce the recommended fee, as long as there is sufficient block space, users with lower willingness to pay can still be included.

  2. Pai and Resnick (2024) show that transaction arrival can be modeled as a Poisson process, and exponential controllers are effective for this class of problems given the additional desiderata.

Cache Updates And Eviction

The cache is refreshed at the end of each slot; the EMAs and recommended fees are calculated according to the relation above.

Since the cache only needs to keep track of congested accounts (accounts with compute unit utilization EMAs greater than 85% of the per-account target utilization, i.e., $\mu_{\lambda}^{\alpha} > 0.85 \mu_{\tau}^{\alpha}$), it can have a bounded size. However, it’s impossible to determine which accounts meet this criterion without monitoring a superset of the accounts that meet it. With this in mind, we set the cache to track up to $5 \times \text{number of accounts a transaction can lock} \times \lceil \frac{\text{max block CU utilization}}{\text{target per account CU utilization}} \rceil$ accounts with utilization greater than 60% of the target utilization.

This value was chosen because:

  • the EMA is tracked over 5 blocks.
  • the number of CUs counted for an account is given by the sum of the total CUs of all the transactions that write to the account.
  • the maximum number of accounts that can satisfy the criteria of being congested in a single block is given by: $\text{ceil} \left( \frac{\text{max block CU utilization}}{\text{target per account CU utilization}} \right) \times \text{number of accounts a transaction can lock}$ .

Multiplying the three values together is a simple heuristic for an upper bound on the number of accounts that could meet the congestion criteria in the given time frame. Under the current conditions (48M CUs per block, 64 accounts per transaction, and a per-account target of 0.85 × 12M CUs), the size of the cache works out to 1600 accounts.

Keep in mind that this cache is provisioned for the worst-case scenario; to prevent bloat during steady state, it will only track accounts whose CU utilization EMA is at least 60% of the target utilization.

Finally, the cache will have a maximum churn rate of $5 \times \text{number of accounts a transaction can lock}$ accounts per block to allow the tracking of new candidates; this ensures that incumbents do not starve new candidates.
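To make the sizing arithmetic concrete, a small Rust sketch using the values quoted above (the constant names are ours; the 64-account lock limit is the one referenced from Solana pull #22201):

const EMA_WINDOW_BLOCKS: u64 = 5;
const MAX_ACCOUNTS_PER_TX: u64 = 64; // per Solana pull #22201
const MAX_BLOCK_CUS: u64 = 48_000_000;
const TARGET_ACCOUNT_CUS: u64 = 12_000_000 * 85 / 100; // 85% of the 12M per-account max

// ceil(48M / 10.2M) = 5 accounts can hit the congestion criteria per block,
// each transaction can lock 64 accounts, and the EMA spans 5 blocks.
const CACHE_CAPACITY: u64 = EMA_WINDOW_BLOCKS
    * MAX_ACCOUNTS_PER_TX
    * ((MAX_BLOCK_CUS + TARGET_ACCOUNT_CUS - 1) / TARGET_ACCOUNT_CUS);

// Maximum churn: up to 5 x 64 = 320 new candidate accounts admitted per block.
const MAX_CHURN_PER_BLOCK: u64 = EMA_WINDOW_BLOCKS * MAX_ACCOUNTS_PER_TX;

fn main() {
    assert_eq!(CACHE_CAPACITY, 1_600);
    assert_eq!(MAX_CHURN_PER_BLOCK, 320);
}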

That wraps up the description of how the mechanism works; to summarize:

  • Introduce target utilization.
  • Track congested accounts (based on EMA of compute unit utilization over 5 blocks relative to target utilization)
  • Track median fees (for congested accounts and globally.)
  • Calculate recommendations based on the degree of congestion and the median fees.
  • Introduce a new RPC method that exposes the recommendations.

How Does This Differ From The Status Quo?

Currently, users are left to estimate the appropriate fee themselves. The most helpful tool is the getRecentPrioritizationFees method, which takes a list of accounts, checks a cache of the last 150 blocks, and returns the fees paid by transactions that locked all the accounts in the list. But this approach is flawed for two reasons:

  1. It is not self-adjusting: The method returns past data with little concern for how recent it is, how the market changes, or whether the price was appropriate. It’s better than bidding blindly but is a poor mechanism overall.

  2. It’s a waste of resources: To serve getRecentPrioritizationFees responses, the last 150 blocks of data are cached. This is unnecessary and harmful, as serving data as old as 150 blocks increases the probability of over- or underpaying.
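For comparison, this is roughly how a client consumes the existing method today; a sketch using the Rust solana-client crate, where the aggregation choice (here, a simple max over the returned window) is left entirely to the client:

use solana_client::rpc_client::RpcClient;
use solana_sdk::pubkey::Pubkey;
use std::str::FromStr;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RpcClient::new("https://api.mainnet-beta.solana.com".to_string());
    // Example account; in practice, the writable accounts the transaction will lock.
    let account = Pubkey::from_str("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA")?;

    // Returns up to 150 (slot, fee) samples, one per recent block.
    let samples = client.get_recent_prioritization_fees(&[account])?;

    // The client must aggregate the stale samples itself; taking the max is a
    // common (blunt) heuristic that tends to overpay.
    let estimate = samples.iter().map(|s| s.prioritization_fee).max().unwrap_or(0);
    println!("estimated priority fee: {estimate} micro-lamports/CU");
    Ok(())
}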

Additionally, because the network does not differentiate between target and maximum resource utilization, there is no tolerance during periods of shock demand.

In the proposed system, only the freshest data is stored, changes in utilization are factored in when setting the price for the next block, and there is some bandwidth to dampen spikes in demand. The proposal is less resource-intensive, more accurate, and guarantees better UX than the current system.

Benefits And Challenges Of The Proposed System

Benefits

  1. Solves the fee estimation problem
    This proposal aims to address the fee estimation problem without introducing any vulnerabilities. The system's core is unchanged, preserving safety while significantly improving UX.

  2. It does not affect core validator code
    Another significant benefit of this proposal is that it can be implemented without modifying the core validator codebase. Unlike a base-fee approach where validator nodes must waste compute and memory to keep track of the per-account base fees, under this system, they can continue to run as they do now and still realize the benefits that base fees would bring to the fee estimation problem.

  3. Allows blocks to be maximally packed at all times.
    Unlike a base fee approach, where transactions that don’t pay at least the base fee are dropped, this approach allows for including those transactions if there’s space for them. This means blocks will always be maximally packed.

In a nutshell, the proposal solves the fee estimation problem minimally and in a way that aligns with the SVM's high-performance tenets.

Challenges (And Why The Posted Price Does Not Need to Be Protocol Enforced) 

First, it is essential to establish that the proposed system is as credible as a vanilla FPA because it does not deviate from an FPA’s core mechanism. However, one could argue that the network is unaware of this particular mechanism, that there is no reason for users to abide by it, and that, by extension, it offers no meaningful improvement over an FPA.

So, what happens when users don’t follow the recommendations? Does the system break down? This question can be reduced to two broad categories, and an argument for soundness can be made for each category.

Category 1: Using a higher fee than the recommendation

What happens if users bid a value $kr$ that is some multiple of the recommended fee ($r$) (where $k > 1$)? Doesn’t this guarantee better inclusion than using $r$, and if it does, why won’t users adopt the approach?

Factually, if a significant number of users adopt $kr$ instead of $r$ to improve their chances of inclusion, the system will not work as intended and will essentially revert to an FPA with a base fee $r$.

However, this is not unique to the proposed system. In fact, all known credible mechanisms (because they are FPAs) are vulnerable to this failure mode. Even protocol-enforced base fees don’t tackle this problem any more effectively than the proposed mechanism because users (following the same logic) can bid some value $kb$ that is a multiple of the base fee ($b$) (where $k > 1$).

This property is not a problem but an inherent feature of FPA-based TFMs: allowing uncapped bids to the upside lets users incentivize block producers to prioritize their transactions without resorting to Off-Chain Agreements (OCAs), so it is necessary.

In a nutshell, this sort of behavior is to be expected, but there is empirical evidence from the performance of EIP-1559 that it is not a problem.

Category 2: Using a lower fee than the recommendation

What happens if users pay less than the recommended fee?

This is also a non-issue, as users want to maximize their utility, so they’ll be willing to pay any amount they do not consider excessive if it improves their chances of inclusion. Given (deterministic) priority-fee-based prioritization, users are incentivized to pay any amount that is less than or equal to their “private valuation” of the transaction’s inclusion.

In base-fee TFMs, users do not have this expressivity, and this approach can be advantageous in some contexts. For example, on Ethereum, where there is an explicit desire to allocate block space to only the most valuable transactions, it makes sense to drop transactions that don’t pay the base fee. On Solana, the consensus is that blocks should always be as full as possible, so there is no benefit in dropping lower-bid transactions if there is space for them. The additional expressivity is not a problem.

In conclusion, the fact that the mechanism is not enforced does not open it to new attack vectors that other credible mechanisms do not face. Its design makes perfect sense when considering the environment in which it is to be used.

However, it is susceptible to an attack vector that FPAs face.

Attacking FPAs

A well-explored attack vector that vanilla FPAs suffer from is market spoofing, where the auctioneer (in this case, a block producer or a cabal of block producers) includes fake transactions to feign demand and make other users pay more than they need to. This is a non-issue on a trusted sequencer network like Eclipse, but on a permissionless network like Ethereum or Solana, it is a real problem. Ethereum addresses it by setting a base fee (the larger part of the transaction fee) and burning it. This approach works well (most of the time), but it comes with all the negative externalities of a base fee discussed above.

Solana had a similar mechanism that burned 50% of all transaction fees (base and priority). However, this approach incentivized out-of-protocol fee markets, and the priority fee burn was removed with SIMD-0096.

While our proposal does not increase the surface area of the attack, its potential is fully realized when this attack vector is dealt with. Our next blog and proposal will discuss our approach to mitigating market spoofing without the externalities of an EIP-1559 base fee.

Conclusion

We’ve examined Solana’s fee markets in-depth and analyzed the biggest issues and potential solutions. The fee estimation problem, which is the focus of this proposal, is a well-established problem with a known solution—a restricted bid space. The proposed mechanism, although simple, solves the problem in a manner that is aligned with the environment in which it is to be used without sacrificing any of the requirements of a credible and incentive-compatible TFM.

You can read more about and contribute to SIMD-253 here.

Footnotes

  1. A protocol is credible if running the mechanism is incentive-compatible for the auctioneer; that is, if the auctioneer prefers playing by the book to any safe deviation.
    Static in this context means there is no interaction between the agents and the auctioneer. A first-price auction is an example of a static mechanism, while an ascending auction is an example of a dynamic one.
  2. According to Akbarpour and Li’s (2019) major finding, first-price auctions with a restricted bid space are the only static credible mechanism.
  3. Adapted from Tim Roughgarden’s Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559.
  4. While we disagree with increasing fees to prevent spam, there is a lot of value in write-lock fees, especially in the context of maximizing parallelism, but that is a topic for another day.
  5. We are still actively researching potential improvements to the controller and are considering a PID design.

References

  1. Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559. Roughgarden, T., 2020.
  2. Credible Mechanisms. Akbarpour, M., Li, S., 2019.
  3. Dynamic Transaction Fee Mechanism Design. Pai, M., Resnick, M., 2024.
  4. Transaction Fees on a Honeymoon: Ethereum’s EIP-1559 One Month Later. Reijsbergen, D., Sridhar, S., Monnot, B., Leonardos, S., Skoulakis, S., Piliouras, G., 2022.
  5. Exploring Multidimensional Gas Markets on Aptos: A New Frontier for Resource Management. Goren, G., Garimidi, P., 2024.
  6. Base Fee Manipulation in Ethereum’s EIP-1559 Transaction Fee Mechanism. Azouvi, S., Goren, G., Heimbach, L., Hicks, A. 37th International Symposium on Distributed Computing (DISC 2023), Article No. 6, pp. 6:1–6:22. DOI: https://doi.org/10.4230/LIPIcs.DISC.2023.6.
  7. Limit number of accounts that a transaction can lock. Solana pull #22201.
