Counting the number of negative values in a PySpark DataFrame
Consider the following PySpark DataFrame:
+---+---+
|  A|  B|
+---+---+
| -3| -4|
| -5|  6|
+---+---+
Counting the number of negative values in a single column
To count the number of negative values in a single column:
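For example, a minimal sketch, assuming the DataFrame above is bound to the variable df:

df.filter('A < 0').count()

2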
Here, the df.filter(~) method returns all the rows in the PySpark DataFrame where the value for A is negative. We then use the returned PySpark DataFrame's count() method to fetch the number of rows as an integer.
Counting the number of negative values in multiple columns
To count the number of negative values in multiple columns, we can use the selectExpr(~) method, which accepts SQL expressions as arguments:
cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
df.selectExpr(*sql_expressions).show()
+-------+-------+
|A_count|B_count|
+-------+-------+
|      2|      1|
+-------+-------+
Here, we are using a list comprehension to convert a list of column names into a list of SQL expressions:
cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
print(sql_expressions)
['count(CASE WHEN (A < 0) THEN 1 END) AS A_count',
 'count(CASE WHEN (B < 0) THEN 1 END) AS B_count']
Here, note the following:
- we are using CASE WHEN to map negative values to 1 and null otherwise. The count(~) SQL function only counts non-null values, and hence we are able to obtain the number of negative values (see the sketch after this list).
- we are assigning column labels to the returned column of count(~) using the alias clause AS.
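To see this null-skipping behaviour in isolation, here is a minimal sketch (assuming the same df) that applies a single such expression to column A:

df.selectExpr('count(CASE WHEN (A < 0) THEN 1 END) AS A_count').show()

+-------+
|A_count|
+-------+
|      2|
+-------+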
Counting the total number of negative values in multiple columns
To count the total number of negative values in multiple columns:
cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
rows = df.selectExpr(*sql_expressions).collect()
print(sum(list(rows[0])))
3
Here, we first obtain the number of negative values in each column, as we did in the previous section:
cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
df.selectExpr(*sql_expressions).show()
+-------+-------+
|A_count|B_count|
+-------+-------+
|      2|      1|
+-------+-------+
We then convert this PySpark DataFrame into a list of Row objects using the collect() method:
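A sketch of this step (the variable name rows is our own choice):

rows = df.selectExpr(*sql_expressions).collect()
print(rows)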
[Row(A_count=2, B_count=1)]
The resulting list in this case will always be of length one, and we access the inner Row object using [0]. We then convert the Row object into a list:
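Continuing the sketch, with the rows variable assumed above:

print(list(rows[0]))

[2, 1]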
Finally, we use the built-in sum(~) function to compute the total number of negative values.
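Putting it all together with the assumed rows variable:

print(sum(list(rows[0])))

3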