PySpark RDD | reduceByKey method
PySpark RDD's reduceByKey(~) method aggregates the RDD data by key and performs a reduction operation. A reduction operation is simply one where multiple values are reduced to a single value (e.g. summation, multiplication).
Parameters
1. func | function
The reduction function to apply.
2. numPartitions | int | optional
The number of partitions of the resulting RDD. By default, this will be equal to the number of partitions of the parent RDD. If the parent RDD does not have the partition count set, then the parallelism level in the PySpark configuration will be used.
3. partitionFunc | function | optional
The partitioner to use - it takes a key as input and must return the key's hash value. By default, a hash partitioner will be used. See the example at the end of this page.
Return Value
A PySpark RDD (pyspark.rdd.PipelinedRDD).
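We can confirm the return type directly - a minimal sketch, assuming an active SparkContext available as sc (the variable names here are illustrative):
rdd = sc.parallelize([("A", 1), ("B", 1)])
new_rdd = rdd.reduceByKey(lambda a, b: a+b)
type(new_rdd)
<class 'pyspark.rdd.PipelinedRDD'>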
Examples
Consider the following Pair RDD:
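Such an RDD can be built with the parallelize(~) method - a minimal sketch, assuming an active SparkContext available as sc:
# Create a pair RDD with 3 partitions
rdd = sc.parallelize([("A", 1), ("B", 1), ("C", 1), ("A", 1)], numSlices=3)
rdd.collect()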
[('A', 1), ('B', 1), ('C', 1), ('A', 1)]
Here, the parallelize(~) method creates an RDD with 3 partitions.
Grouping by key in pair RDD and performing a reduction operation
To group by key and perform a summation of the values of each grouped key:
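Using the rdd defined above (collect() is called here to surface the result):
new_rdd = rdd.reduceByKey(lambda a, b: a+b)
new_rdd.collect()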
[('B', 1), ('C', 1), ('A', 2)]
Setting number of partitions after reducing by key in pair RDD
By default, the number of partitions of the resulting RDD will be equal to the number of partitions of the parent RDD:
# Create an RDD using 3 partitions
rdd = sc.parallelize([("A", 1), ("B", 1), ("C", 1), ("A", 1)], numSlices=3)
new_rdd = rdd.reduceByKey(lambda a, b: a+b)
new_rdd.getNumPartitions()
3
Here, rdd is the parent RDD of new_rdd.
We can set the number of partitions of the resulting RDD by setting the numPartitions parameter:
new_rdd = rdd.reduceByKey(lambda a, b: a+b, numPartitions=2)
new_rdd.getNumPartitions()
2
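Setting the partition function when reducing by key in pair RDD
The partitionFunc parameter controls which partition each key lands in. As a minimal sketch reusing the rdd from above, the lambda partitioner below (the code point of the key's first character) is purely illustrative:
# Assign keys to partitions by the code point of their first character
# (the returned value is taken modulo numPartitions internally)
new_rdd = rdd.reduceByKey(lambda a, b: a+b, numPartitions=2, partitionFunc=lambda key: ord(key[0]))
# glom() collects the contents of each partition into a list
new_rdd.glom().collect()
Here, 'B' (even code point) lands in partition 0 while 'A' and 'C' (odd code points) land in partition 1, so the result looks like [[('B', 1)], [('A', 2), ('C', 1)]], though the ordering within each partition may vary.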