Getting top-N elements in Spark
The documentation for PySpark's top() function has this warning:
> This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
This piqued my interest: why would you need to bring all the data to the driver, if all you need is a few top elements?
The answer is: it does not load all the data into the driver’s memory.
Let’s look at the source code.
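Here is, roughly, the definition of RDD.top() in pyspark/rdd.py (paraphrased from the Spark source; heapq is imported at the top of that module, and the exact code may differ slightly between versions):

```python
def top(self, num, key=None):
    def topIterator(iterator):
        # Top N within a single partition; runs on the executors.
        yield heapq.nlargest(num, iterator, key=key)

    def merge(a, b):
        # Combine two candidate lists, keeping only the overall top N.
        return heapq.nlargest(num, a + b, key=key)

    return self.mapPartitions(topIterator).reduce(merge)
```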
There is a lot happening in those few lines.
- The idea is to work on the data one partition at a time (mapPartitions). The number of partitions depends on the data source and cluster configuration.
- Within each partition, we take the top N elements (heapq.nlargest). This step needs no data movement across nodes.
- Next, we find the overall top N among those per-partition candidates (.reduce(merge)). To see exactly how, let’s first look at reduce()’s source:
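This is roughly its definition in pyspark/rdd.py (again paraphrased; the real code imports reduce from functools at module level):

```python
def reduce(self, f):
    def func(iterator):
        # Fold each partition locally on the executors.
        iterator = iter(iterator)
        try:
            initial = next(iterator)
        except StopIteration:
            return  # an empty partition contributes nothing
        yield functools.reduce(f, iterator, initial)

    # collect() brings one folded value per partition to the driver;
    # a final functools.reduce combines them there.
    vals = self.mapPartitions(func).collect()
    if vals:
        return functools.reduce(f, vals)
    raise ValueError("Can not reduce() empty RDD")
```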
We collect the top-N candidates from each partition! We then run reduce() from Python’s functools module on that collected list. This final step runs on the driver, and it gets us the top-N elements overall.
So yes, we do bring more than “top N” elements to the driver (at most N values per partition, for N × num-partitions in total), but definitely not “all the data”. If you’re only collecting, say, the top 100 elements, there is no cause for concern.
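To make that concrete, here is a minimal sketch (the 8-partition setup and the numbers below are a hypothetical, assuming a local SparkContext):

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# One million integers spread across 8 partitions (hypothetical setup).
rdd = sc.parallelize(range(1000000), numSlices=8)

# Each partition contributes at most 100 candidates, so the driver
# sees at most 8 * 100 = 800 values, not one million.
top100 = rdd.top(100)
print(top100[:5])  # [999999, 999998, 999997, 999996, 999995]
```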