Reciprocal Rank Fusion (RRF)
Reciprocal Rank Fusion is a technique for merging several ranked result lists — produced by different retrievers over the same corpus — into a single unified ranking. For each document, RRF sums 1/(k + rank) across the lists in which it appears, where k is a smoothing constant typically set to 60. A document at rank 1 in one list contributes 1/61; at rank 10 it contributes 1/70. The critical property is that RRF operates on ranks, not scores, so it does not need any calibration between retrievers whose score distributions are incomparable — BM25 relevance scores and cosine similarities, for example. That property has made RRF the default fusion method for hybrid search combining BM25 and vector retrievers, outperforming weighted-score fusion in many published and internal benchmarks with essentially no tuning.
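The scoring rule above can be sketched in a few lines. This is a minimal illustration, not a reference implementation; the function name `rrf` and the 1-based rank convention are assumptions for the sketch:

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse ranked lists of doc IDs via Reciprocal Rank Fusion.

    rankings: iterable of ranked lists, best result first (1-based ranks).
    Returns doc IDs sorted by descending fused score.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            # Each appearance contributes 1/(k + rank); rank 1 gives 1/61.
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf([["a", "b", "c"], ["b", "c", "a"]])
# "b" wins: ranks 2 and 1 beat "a"'s ranks 1 and 3.
```

Because only ranks enter the sum, the two input lists could come from retrievers with wildly different score scales and the result would be unchanged.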
Example
A support-docs search blends a BM25 retriever and a dense-vector retriever. Attempts at weighted-score fusion are brittle — BM25 scores range from 0 to ~25 depending on corpus statistics, cosine similarities sit between 0 and 1, and the "right" weight drifts as the corpus grows. The team switches to RRF with k=60. No per-deployment tuning is required; on illustrative numbers, top-10 recall rises from 0.81 (BM25 alone) and 0.85 (vector alone) to 0.91 on the fused list, and the fusion code becomes a three-line aggregation that is stable across index rebuilds.
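The "three-line aggregation" in this scenario might look like the sketch below. The doc IDs are hypothetical stand-ins for the two retrievers' top results:

```python
from collections import defaultdict

# Hypothetical top-5 lists from the two retrievers (doc IDs, best first).
bm25_top = ["faq-12", "guide-3", "faq-7", "api-1", "guide-9"]
vector_top = ["guide-3", "api-1", "faq-12", "howto-2", "faq-7"]

# The aggregation itself: sum 1/(60 + rank) across both lists.
scores = defaultdict(float)
for ranking in (bm25_top, vector_top):
    for rank, doc in enumerate(ranking, start=1):
        scores[doc] += 1.0 / (60 + rank)

fused_top = sorted(scores, key=scores.get, reverse=True)
```

Documents that both retrievers surface ("guide-3", "faq-12", "api-1", "faq-7") accumulate two reciprocal-rank terms and rise above documents only one retriever found, which is exactly the behavior hybrid search wants.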