Section 6 of 9

Multi-Slot Token Storage + Weight Calculation

What You Are Building

Temporal function AMMs support pools with up to 8 tokens. Storing and reading weights for that many tokens on every swap requires a gas-efficient storage layout. You will implement the system that QuantAMM uses: packing multiple weight values into single 256-bit storage slots, then unpacking and interpolating them at runtime.

In this section, you will implement:

  • Bit packing: store 4 weights per 256-bit storage slot using 32-bit fixed-point encoding
  • Bit unpacking: extract individual weights and scale them back to 1e18 runtime precision
  • Weight normalization with dust collection on the last element
  • Guard rail clamping that limits how far a weight can move per update
  • Block-by-block interpolation in getNormalizedWeights()

Why Pack 4 Weights Per Slot?

Each EVM storage slot holds 256 bits. A naive approach stores each weight as a full uint256, burning one SLOAD (2100 gas cold, 100 warm) per weight. For an 8-token pool, that is 8 SLOADs just to read weights.

QuantAMM packs 4 weights into a single slot. Each weight uses 32 bits, giving 9 decimal digits of precision (max value ~4.29e9). Since weights are fractions between 0 and 1, storing them in 1e9 precision (e.g., 0.25 = 250000000) fits comfortably in 32 bits. An 8-token pool needs only 2 SLOADs instead of 8.

The remaining question: why not pack 8 weights per slot, since 8 × 32 bits fills 256 bits exactly? QuantAMM reserves some bits per slot for metadata and alignment padding, so the practical limit is 4 weights per slot. This matches the architecture you will build.

Bit Packing and Unpacking

To pack a weight at position index within a slot:

offset = index * 32
scaledWeight = weight / (1e18 / 1e9)    // Scale from 1e18 to 1e9
mask = ~(0xFFFFFFFF << offset)           // Clear 32 bits at offset
slot = (slot & mask) | (scaledWeight << offset)

To unpack:

offset = index * 32
scaledWeight = (slot >> offset) & 0xFFFFFFFF
weight = scaledWeight * (1e18 / 1e9)     // Scale back to 1e18

The & 0xFFFFFFFF mask isolates exactly 32 bits. This is standard bit manipulation, but an offset that is off by even one bit corrupts neighboring weights in the slot, and the pool will silently produce wrong swap outputs. No revert, just bad prices.
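The pack/unpack arithmetic above can be sanity-checked outside the EVM before writing it in Solidity. The sketch below is a minimal Python model of the described scheme; the names pack_weight and unpack_weight mirror the exercise's TODOs but are illustrative, not QuantAMM's actual code.

```python
# Python model of the 32-bit pack/unpack scheme described above.
MASK32 = 0xFFFFFFFF
SCALE = 10**18 // 10**9   # 1e9: ratio between runtime and storage precision

def pack_weight(slot: int, index: int, weight: int) -> int:
    """Write a 1e18-precision weight into the 32-bit segment at `index`."""
    offset = index * 32
    scaled = weight // SCALE              # 1e18 -> 1e9 precision
    cleared = slot & ~(MASK32 << offset)  # zero out the target 32 bits
    return cleared | (scaled << offset)

def unpack_weight(slot: int, index: int) -> int:
    """Read the 32-bit segment at `index` and rescale to 1e18 precision."""
    offset = index * 32
    scaled = (slot >> offset) & MASK32    # isolate exactly 32 bits
    return scaled * SCALE                 # 1e9 -> 1e18 precision

# Round-trip check: four weights summing to 1e18 fit in one slot.
slot = 0
weights = [250_000_000_000_000_000, 250_000_000_000_000_000,
           300_000_000_000_000_000, 200_000_000_000_000_000]
for i, w in enumerate(weights):
    slot = pack_weight(slot, i, w)
assert all(unpack_weight(slot, i) == w for i, w in enumerate(weights))
```

The round trip is exact only because each weight is a multiple of 1e9; anything finer is truncated by the downscaling, which is the precision loss the dust-collection step later corrects.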

Handling Non-Aligned Token Counts

A 4-token pool fills one slot perfectly. A 6-token pool needs ceil(6 / 4) = 2 slots, with the second slot only using 2 of its 4 positions. The unused positions stay zero.

Your setWeights function must compute the slot index and position for each token:

slotIndex  = tokenIndex / WEIGHTS_PER_SLOT
posInSlot  = tokenIndex % WEIGHTS_PER_SLOT

For token index 5 in a 6-token pool: slotIndex = 1, posInSlot = 1. This token's weight lives in the second 32-bit segment of slot 1.
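The index arithmetic above is short enough to verify in a couple of lines. A sketch, with WEIGHTS_PER_SLOT = 4 as in the exercise (slot_position is an illustrative name):

```python
# Map a token index to (storage slot, 32-bit position within that slot).
WEIGHTS_PER_SLOT = 4

def slot_position(token_index: int) -> tuple[int, int]:
    return token_index // WEIGHTS_PER_SLOT, token_index % WEIGHTS_PER_SLOT

assert slot_position(5) == (1, 1)   # the example from the text
assert slot_position(0) == (0, 0)   # first token, first slot
assert slot_position(7) == (1, 3)   # last token of an 8-token pool
```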

Normalization and Dust Collection

Weights must sum to exactly 1e18. After computing raw weights and scaling them through 32-bit packing (which loses precision), rounding errors can make the sum differ from 1e18 by a few wei.

The standard fix is "dust collection": compute all normalized weights, sum them, and add (1e18 - sum) to the last weight. This ensures the invariant sum(weights) == 1e18 holds exactly. QuantAMM does this in every weight update.
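Dust collection is easiest to see with integers that do not divide evenly. A minimal sketch (normalize_weights is an illustrative name for the exercise's TODO):

```python
# Normalize raw integer weights so they sum to exactly 1e18.
ONE = 10**18

def normalize_weights(raw: list[int]) -> list[int]:
    total = sum(raw)
    normalized = [w * ONE // total for w in raw]   # floor division drops dust
    normalized[-1] += ONE - sum(normalized)        # fold the dust into the last weight
    return normalized

# Three equal raw weights: each floors to 0.333...e18, the last picks up the
# leftover wei so the invariant sum(weights) == 1e18 holds exactly.
w = normalize_weights([7, 7, 7])
assert sum(w) == ONE
```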

Guard Rail Clamping

Before writing new weights, each one must be clamped:

  • Within absoluteWeightGuardRail distance of its previous value
  • Within the global [MIN_WEIGHT, MAX_WEIGHT] bounds (1% to 99%)

This prevents any single update from dramatically shifting the pool's allocation, which would create large arbitrage opportunities at LP expense.
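Both bounds can be combined into a single clamp. The sketch below uses the 1%/99% bounds from the text; the guard rail value in the example is made up for illustration:

```python
# Guard-rail clamp sketch (all values in 1e18 precision).
MIN_WEIGHT = 10**16        # 1%
MAX_WEIGHT = 99 * 10**16   # 99%

def clamp_weight(new: int, prev: int, guard_rail: int) -> int:
    lo = max(prev - guard_rail, MIN_WEIGHT)
    hi = min(prev + guard_rail, MAX_WEIGHT)
    return max(lo, min(new, hi))

# A jump from 50% toward 70% with a 5% rail only moves to 55%.
assert clamp_weight(70 * 10**16, 50 * 10**16, 5 * 10**16) == 55 * 10**16
```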

Block-by-Block Interpolation

The getNormalizedWeights() function is called on every swap. It does not simply return the stored target weights. Instead, it interpolates linearly between the previous weights and the target weights based on how many blocks have elapsed:

elapsed = min(block.number - lastUpdateBlock, targetBlock - lastUpdateBlock)
total   = targetBlock - lastUpdateBlock
currentWeight = prevWeight + (targetWeight - prevWeight) * elapsed / total

Capping elapsed at total means that once block.number passes targetBlock, the function simply returns the target weights.

This gradual transition is critical for MEV resistance. If weights jumped instantly, a searcher could sandwich the weight update transaction and profit from the sudden price change. With block-by-block interpolation, the weight shift is spread over many blocks, making sandwich attacks far less profitable.
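The ramp can be modeled with plain integer arithmetic. A sketch (interpolated_weight is an illustrative name; note that Python's floor division differs from Solidity's truncation for decreasing weights, which is fine for a sanity check):

```python
# Linear weight interpolation sketch, values in 1e18 precision.
def interpolated_weight(prev: int, target: int,
                        last_block: int, target_block: int,
                        current_block: int) -> int:
    total = target_block - last_block
    elapsed = min(current_block - last_block, total)
    if elapsed >= total:
        return target                     # past the ramp: pin to target
    return prev + (target - prev) * elapsed // total

# Halfway through a 10-block ramp from 20% to 40% gives 30%.
assert interpolated_weight(2 * 10**17, 4 * 10**17, 100, 110, 105) == 3 * 10**17
```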

Your Task

Implement all seven TODOs in the starter code. Start with packWeight and unpackWeight (verify they round-trip correctly), then normalizeWeights, clampWeight, setWeights, getWeights, and finally getNormalizedWeights. The interpolation logic in TODO 7 ties everything together.

Your Code

Solution.sol

Requirements

  • packWeight uses bit shifting to store scaled values
  • packWeight clears bits with a mask before writing
  • unpackWeight extracts with right shift and mask
  • Weight scaled between 1e18 and 1e9 precision
  • normalizeWeights sums all weights
  • normalizeWeights applies dust collection on last element
  • clampWeight enforces absoluteWeightGuardRail
  • clampWeight enforces MIN_WEIGHT and MAX_WEIGHT
  • setWeights handles slot indexing with division and modulo
  • setWeights copies current to previous before writing
  • getNormalizedWeights interpolates between prev and target
  • getNormalizedWeights returns target weights when past targetBlock