PySpark DataFrame | colRegex method

Last updated: Aug 12, 2023
Tags: PySpark

PySpark DataFrame's colRegex(~) method returns the Column objects whose labels match the specified regular expression. This allows multiple columns to be selected at once.

Parameters

1. colName | string

The regular expression used to match column labels.

Return Value

A PySpark Column.

Examples

Selecting columns using regular expression in PySpark

Consider the following PySpark DataFrame:

df = spark.createDataFrame([("Alex", 20), ("Bob", 30), ("Cathy", 40)], ["col1", "col2"])
df.show()
+-----+----+
| col1|col2|
+-----+----+
| Alex| 20|
| Bob| 30|
|Cathy| 40|
+-----+----+

To select columns using regular expression, use the colRegex(~) method:

df.select(df.colRegex("`col[123]`")).show()
+-----+----+
| col1|col2|
+-----+----+
| Alex| 20|
| Bob| 30|
|Cathy| 40|
+-----+----+

Here, note the following:

  • we wrapped the regular expression in backticks (`) - this is required; otherwise PySpark throws an error.

  • the regular expression col[123] matches columns labelled col1, col2 or col3.

  • the select(~) method converts the Column objects into a PySpark DataFrame.
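As a rough illustration of how the pattern is applied, Spark matches the regex against each column's entire label (Spark itself uses Java regular expressions; Python's re.fullmatch behaves similarly for a simple pattern like this one - the extra labels below are made up for demonstration):

```python
import re

# col[123] matches a label only when the whole label matches,
# so "col12" and "col4" are rejected even though they start with "col".
pattern = re.compile(r"col[123]")

labels = ["col1", "col2", "col4", "name", "col12"]
matched = [c for c in labels if pattern.fullmatch(c)]
print(matched)  # ['col1', 'col2']
```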

Getting column labels that match regular expression as list of strings in PySpark

To get column labels as a list of strings instead of PySpark Column objects:

df.select(df.colRegex("`col[123]`")).columns
['col1', 'col2']

Here, we are using the columns property of the PySpark DataFrame returned by select(~).
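If only the matching labels are needed, a similar result can be obtained by filtering df.columns directly with Python's re module - a minimal sketch, where the columns list stands in for df.columns from the example above:

```python
import re

# Stand-in for df.columns of the example DataFrame
columns = ["col1", "col2"]

# Keep only the labels whose whole text matches the regex
selected = [c for c in columns if re.fullmatch(r"col[123]", c)]
print(selected)  # ['col1', 'col2']
```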

Published by Isshin Inada