Python Pandas Tutorial: A Complete Introduction for Beginners


Author: George McIntire
Data Scientist

Author: Brendan Martin
Founder of LearnDataSci

Author: Lauren Washington
Lead Data Scientist & ML Developer

Learn some of the most important pandas features for exploring, cleaning, transforming, visualizing, and learning from data.

You should already know:

  • Python fundamentals – you should have beginner to intermediate-level knowledge, which can be learned from most entry-level Python courses

The pandas package is the most important tool at the disposal of Data Scientists and Analysts working in Python today. The powerful machine learning and glamorous visualization tools may get all the attention, but pandas is the backbone of most data projects.

[pandas] is derived from the term "panel data", an econometrics term for data sets that include observations over multiple time periods for the same individuals. — Wikipedia

If you're thinking about data science as a career, then it is imperative that one of the first things you do is learn pandas. In this post, we will go over the essential bits of information about pandas, including how to install it, its uses, and how it works with other common Python data analysis packages such as matplotlib and scikit-learn.

Article Resources

Other articles in this series

  • Applied Introduction to NumPy

What's Pandas for?

Pandas has so many uses that it might make sense to list the things it can't do instead of what it can do.

This tool is essentially your data’s home. Through pandas, you get acquainted with your data by cleaning, transforming, and analyzing it.

For example, say you want to explore a dataset stored in a CSV on your computer. Pandas will extract the data from that CSV into a DataFrame — a table, basically — then let you do things like:

  • Calculate statistics and answer questions about the data, like
    • What's the average, median, max, or min of each column?
    • Does column A correlate with column B?
    • What does the distribution of data in column C look like?
  • Clean the data by doing things like removing missing values and filtering rows or columns by some criteria
  • Visualize the data with help from Matplotlib. Plot bars, lines, histograms, bubbles, and more.
  • Store the cleaned, transformed data back into a CSV, other file or database

Before you jump into modeling or complex visualizations, you need a good understanding of the nature of your dataset, and pandas is the best avenue through which to get it.

Not only is the pandas library a central component of the data science toolkit, it is also used in conjunction with the other libraries in that collection.

Pandas is built on top of the NumPy package, meaning a lot of the structure of NumPy is used or replicated in Pandas. Data in pandas is often used to feed statistical analysis in SciPy, plotting functions from Matplotlib, and machine learning algorithms in Scikit-learn.

Jupyter Notebooks offer a good environment for using pandas to do data exploration and modeling, but pandas can also be used in text editors just as easily.

Jupyter Notebooks give us the ability to execute code in a particular cell as opposed to running the entire file. This saves a lot of time when working with large datasets and complex transformations. Notebooks also provide an easy way to visualize pandas’ DataFrames and plots. As a matter of fact, this article was created entirely in a Jupyter Notebook.

When should you start using pandas?

If you do not have any experience coding in Python, then you should stay away from learning pandas until you do. You don't have to be at the level of a software engineer, but you should be adept at the basics, such as lists, tuples, dictionaries, functions, and iteration. I'd also recommend familiarizing yourself with NumPy due to the similarities mentioned above.

If you're looking for a good place to learn Python, Python for Everybody on Coursera is great (and Free).


Moreover, for those of you looking to do a data science bootcamp or some other accelerated data science education program, it's highly recommended you start learning pandas on your own before you start the program.

Even though accelerated programs teach you pandas, building these skills beforehand means you can maximize your time in the program for learning and mastering the more complicated material.

Pandas First Steps

Install and import

Pandas is an easy package to install. Open up your terminal program (for Mac users) or command line (for PC users) and install it using either of the following commands:

conda install pandas

OR

pip install pandas

Alternatively, if you're currently viewing this article in a Jupyter notebook you can run this cell:

!pip install pandas

The ! at the beginning runs cells as if they were in a terminal.

We usually import pandas under a shorter alias since it's used so much:

import pandas as pd

Now to the basic components of pandas.

Core components of pandas: Series and DataFrames

The primary two components of pandas are the Series and DataFrame.

A Series is essentially a column, and a DataFrame is a two-dimensional table made up of a collection of Series.

[Diagram: a DataFrame as a collection of Series]

DataFrames and Series are quite similar in that many operations that you can do with one you can do with the other, such as filling in null values and calculating the mean.

You'll see how these components work when we start working with data below.

Creating DataFrames from scratch

Creating DataFrames right in Python is good to know and quite useful when testing new methods and functions you find in the pandas docs.

There are many ways to create a DataFrame from scratch, but a great option is to just use a simple dict.

Let's say we have a fruit stand that sells apples and oranges. We want to have a column for each fruit and a row for each customer purchase. To organize this as a dictionary for pandas we could do something like:

data = {
    'apples': [3, 2, 0, 1],
    'oranges': [0, 3, 7, 2]
}

And then pass it to the pandas DataFrame constructor:

purchases = pd.DataFrame(data)

purchases

Out:

   apples  oranges
0       3        0
1       2        3
2       0        7
3       1        2

How did that work?

Each (key, value) item in data corresponds to a column in the resulting DataFrame.

The Index of this DataFrame was given to us on creation as the numbers 0-3, but we could also create our own when we initialize the DataFrame.

Let's have customer names as our index:

purchases = pd.DataFrame(data, index=['June', 'Robert', 'Lily', 'David'])

purchases

Out:

        apples  oranges
June         3        0
Robert       2        3
Lily         0        7
David        1        2

So now we could locate a customer's order by using their name:

purchases.loc['June']

Out:

apples     3
oranges    0
Name: June, dtype: int64

There's more on locating and extracting data from the DataFrame later, but now you should be able to create a DataFrame with any random data to learn on.

Let's move on to some quick methods for creating DataFrames from various other sources.


How to read in data

It’s quite simple to load data from various file formats into a DataFrame. In the following examples we'll keep using our apples and oranges data, but this time it's coming from various files.

Reading data from CSVs

With CSV files all you need is a single line to load in the data:

df = pd.read_csv('purchases.csv')

df

Out:

  Unnamed: 0  apples  oranges
0       June       3        0
1     Robert       2        3
2       Lily       0        7
3      David       1        2

CSVs don't have indexes like our DataFrames, so all we need to do is just designate the index_col when reading:

df = pd.read_csv('purchases.csv', index_col=0)

df

Here we're setting the index to be column zero.

You'll find that most CSVs won't ever have an index column and so usually you don't have to worry about this step.

Reading data from JSON

If you have a JSON file — which is essentially a stored Python dict — pandas can read this just as easily:

df = pd.read_json('purchases.json')

df

Out:

        apples  oranges
David        1        2
June         3        0
Lily         0        7
Robert       2        3

Notice this time our index came with us correctly, since JSON supports indexes through its nested structure. Feel free to open purchases.json in a text editor to see how it works.

Pandas will try to figure out how to create a DataFrame by analyzing structure of your JSON, and sometimes it doesn't get it right. Often you'll need to set the orient keyword argument depending on the structure, so check out read_json docs about that argument to see which orientation you're using.
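
To make that concrete, here's a minimal sketch of how orient changes the parsing. The nested JSON string below is a hypothetical example, not our purchases file; with orient='index', the outer keys become row labels:

from io import StringIO
import pandas as pd

# Hypothetical JSON where each outer key is a row label
json_str = '{"June": {"apples": 3, "oranges": 0}, "Robert": {"apples": 2, "oranges": 3}}'

df = pd.read_json(StringIO(json_str), orient='index')  # rows named June and Robert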

Reading data from a SQL database

If you’re working with data from a SQL database you need to first establish a connection using an appropriate Python library, then pass a query to pandas. Here we'll use SQLite to demonstrate.

There's nothing extra to install here: Python ships with SQLite support built in, via the sqlite3 module in the standard library.

sqlite3 is used to create a connection to a database which we can then use to generate a DataFrame through a SELECT query.

So first we'll make a connection to a SQLite database file:

import sqlite3

con = sqlite3.connect("database.db")

SQL Tip

If you have data in PostgreSQL, MySQL, or some other SQL server, you'll need to obtain the right Python library to make a connection. For example, psycopg2 is a commonly used library for making connections to PostgreSQL. Furthermore, you would make a connection to a database URI instead of a file like we did here with SQLite.
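
As a rough sketch of what that looks like for PostgreSQL, using SQLAlchemy to build the connection (the URI, credentials, and database name here are hypothetical placeholders):

from sqlalchemy import create_engine
import pandas as pd

# Hypothetical connection URI: dialect+driver://user:password@host:port/database
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/mydb")

df = pd.read_sql_query("SELECT * FROM purchases", engine)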

For a great course on SQL check out The Complete SQL Bootcamp on Udemy

In this SQLite database we have a table called purchases, and our index is in a column called "index".

By passing a SELECT query and our con, we can read from the purchases table:

df = pd.read_sql_query("SELECT * FROM purchases", con)

df

Out:

    index  apples  oranges
0    June       3        0
1  Robert       2        3
2    Lily       0        7
3   David       1        2

Just like with CSVs, we could pass index_col='index', but we can also set an index after-the-fact:

df = df.set_index('index')

df

Out:

        apples  oranges
index
June         3        0
Robert       2        3
Lily         0        7
David        1        2

In fact, we could use set_index() on any DataFrame using any column at any time. Indexing Series and DataFrames is a very common task, and the different ways of doing it are worth remembering.
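
For example, here's a quick sketch of re-indexing the same DataFrame by another column (purely for demonstration, since apples isn't a meaningful index):

df = df.set_index('apples')  # replaces the current index with the apples column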

Converting back to a CSV, JSON, or SQL

So after extensive work on cleaning your data, you’re now ready to save it as a file of your choice. Similar to the ways we read in data, pandas provides intuitive commands to save it:

df.to_csv('new_purchases.csv')

df.to_json('new_purchases.json')

df.to_sql('new_purchases', con)

When we save JSON and CSV files, all we have to input into those functions is our desired filename with the appropriate file extension. With SQL, we’re not creating a new file but instead inserting a new table into the database using our con variable from before.

Let's move on to importing some real-world data and detailing a few of the operations you'll be using a lot.

Most important DataFrame operations

DataFrames possess hundreds of methods and other operations that are crucial to any analysis. As a beginner, you should know the operations that perform simple transformations of your data and those that provide fundamental statistical analysis.

Let's load in the IMDB movies dataset to begin:

movies_df = pd.read_csv("IMDB-Movie-Data.csv", index_col="Title")

We're loading this dataset from a CSV and designating the movie titles to be our index.

Viewing your data

The first thing to do when opening a new dataset is print out a few rows to keep as a visual reference. We accomplish this with .head():

movies_df.head()

Out:

                         Rank                     Genre                                        Description              Director                                             Actors  Year  Runtime (Minutes)  Rating   Votes  Revenue (Millions)  Metascore
Title
Guardians of the Galaxy     1   Action,Adventure,Sci-Fi  A group of intergalactic criminals are forced ...            James Gunn  Chris Pratt, Vin Diesel, Bradley Cooper, Zoe S...  2014                121     8.1  757074              333.13       76.0
Prometheus                  2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...          Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012                124     7.0  485820              126.46       65.0
Split                       3           Horror,Thriller  Three girls are kidnapped by a man with a diag...    M. Night Shyamalan  James McAvoy, Anya Taylor-Joy, Haley Lu Richar...  2016                117     7.3  157606              138.12       62.0
Sing                        4   Animation,Comedy,Family  In a city of humanoid animals, a hustling thea...  Christophe Lourdelet  Matthew McConaughey,Reese Witherspoon, Seth Ma...  2016                108     7.2   60545              270.32       59.0
Suicide Squad               5  Action,Adventure,Fantasy  A secret government agency recruits some of th...            David Ayer  Will Smith, Jared Leto, Margot Robbie, Viola D...  2016                123     6.2  393727              325.02       40.0

.head() outputs the first five rows of your DataFrame by default, but we could also pass a number as well: movies_df.head(10) would output the top ten rows, for example.

To see the last five rows, use .tail(). tail() also accepts a number, and in this case we print the bottom two rows:

movies_df.tail(2)

Out:

              Rank                  Genre                                        Description          Director                                             Actors  Year  Runtime (Minutes)  Rating  Votes  Revenue (Millions)  Metascore
Title
Search Party   999       Adventure,Comedy  A pair of friends embark on a mission to reuni...    Scot Armstrong  Adam Pally, T.J. Miller, Thomas Middleditch,Sh...  2014                 93     5.6   4881                 NaN       22.0
Nine Lives    1000  Comedy,Family,Fantasy  A stuffy businessman finds himself trapped ins...  Barry Sonnenfeld  Kevin Spacey, Jennifer Garner, Robbie Amell,Ch...  2016                 87     5.3  12435               19.64       11.0

Typically when we load in a dataset, we like to view the first five or so rows to see what's under the hood. Here we can see the names of each column, the index, and examples of values in each row.

You'll notice that the index in our DataFrame is the Title column, which you can tell by how the word Title is slightly lower than the rest of the columns.

Getting info about your data

.info() should be one of the very first commands you run after loading your data:

movies_df.info()

Out:

<class 'pandas.core.frame.DataFrame'>
Index: 1000 entries, Guardians of the Galaxy to Nine Lives
Data columns (total 11 columns):
Rank                  1000 non-null int64
Genre                 1000 non-null object
Description           1000 non-null object
Director              1000 non-null object
Actors                1000 non-null object
Year                  1000 non-null int64
Runtime (Minutes)     1000 non-null int64
Rating                1000 non-null float64
Votes                 1000 non-null int64
Revenue (Millions)    872 non-null float64
Metascore             936 non-null float64
dtypes: float64(3), int64(4), object(4)
memory usage: 93.8+ KB

.info() provides the essential details about your dataset, such as the number of rows and columns, the number of non-null values, what type of data is in each column, and how much memory your DataFrame is using.

Notice in our movies dataset we have some obvious missing values in the Revenue and Metascore columns. We'll look at how to handle those in a bit.

Seeing the datatype quickly is actually quite useful. Imagine you just imported some JSON and the integers were recorded as strings. You go to do some arithmetic and find an "unsupported operand" Exception because you can't do math with strings. Calling .info() will quickly point out that the column you thought was all integers is actually full of string objects.
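
If that happens, the fix is a one-liner. A minimal sketch, assuming a hypothetical situation where the Votes column arrived as strings:

# errors='coerce' turns any unparseable values into NaN instead of raising
movies_df['Votes'] = pd.to_numeric(movies_df['Votes'], errors='coerce')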

Another fast and useful attribute is .shape, which outputs just a tuple of (rows, columns):

movies_df.shape

Out:

(1000, 11)

Note that .shape has no parentheses and is a simple tuple of format (rows, columns). So we have 1000 rows and 11 columns in our movies DataFrame.

You'll be reaching for .shape a lot when cleaning and transforming data. For example, you might filter some rows based on some criteria and then want to know quickly how many rows were removed, as in the sketch below.
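
A sketch of that workflow, using a hypothetical rating filter (conditional selections like this are covered in detail later on):

before = movies_df.shape[0]  # row count before filtering

high_rated = movies_df[movies_df['Rating'] >= 8.0]  # keep only highly rated movies

print(before - high_rated.shape[0], "rows removed")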

Handling duplicates

This dataset does not have duplicate rows, but it is always important to verify you aren't aggregating duplicate rows.

To demonstrate, let's simply double up our movies DataFrame by appending it to itself:

temp_df = movies_df.append(movies_df)

temp_df.shape

Out:

(2000, 11)

Using append() will return a copy without affecting the original DataFrame. We are capturing this copy in temp_df so we aren't working with the real data. (Note that DataFrame.append was removed in pandas 2.0; in newer versions, pd.concat([movies_df, movies_df]) achieves the same thing.)

Notice how calling .shape quickly proves our DataFrame rows have doubled.

Now we can try dropping duplicates:

temp_df = temp_df.drop_duplicates()

temp_df.shape

Out:

(1000, 11)

Just like append(), the drop_duplicates() method will also return a copy of your DataFrame, but this time with duplicates removed. Calling .shape confirms we're back to the 1000 rows of our original dataset.

It's a little verbose to keep assigning DataFrames to the same variable like in this example. For this reason, pandas has the inplace keyword argument on many of its methods. Using inplace=True will modify the DataFrame object in place:

temp_df.drop_duplicates(inplace=True)

Now our temp_df will have the transformed data automatically.

Another important argument for drop_duplicates() is keep, which has three possible options:

  • first: (default) Drop duplicates except for the first occurrence.
  • last: Drop duplicates except for the last occurrence.
  • False: Drop all duplicates.

Since we didn't define the keep argument in the previous example, it defaulted to first. This means that if two rows are the same, pandas will drop the second row and keep the first. Using last has the opposite effect: the first row is dropped.

keep=False, on the other hand, will drop all duplicates. If two rows are the same then both will be dropped. Watch what happens to temp_df:

temp_df = movies_df.append(movies_df)  # make a new copy

temp_df.drop_duplicates(inplace=True, keep=False)

temp_df.shape

Out:

(0, 11)

Since all rows were duplicates, keep=False dropped them all, resulting in zero rows being left over. If you're wondering why you would want to do this, one reason is that it allows you to locate all duplicates in your dataset, as in the sketch below. When conditional selections are shown later you'll see other ways to filter like this.
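
As a preview, pandas also has a duplicated() method that returns a Boolean Series, which you can use to pull up every row that has a duplicate somewhere in the DataFrame; a quick sketch:

temp_df = movies_df.append(movies_df)  # doubled data again

duplicate_rows = temp_df[temp_df.duplicated(keep=False)]  # every flagged duplicate row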

Column cleanup

Many times datasets will have verbose column names with symbols, upper and lowercase words, spaces, and typos. To make selecting data by column name easier we can spend a little time cleaning up their names.

Here's how to print the column names of our dataset:

movies_df.columns

Out:

Index(['Rank', 'Genre', 'Description', 'Director', 'Actors', 'Year', 'Runtime (Minutes)', 'Rating', 'Votes', 'Revenue (Millions)', 'Metascore'], dtype='object')

Not only does .columns come in handy if you want to rename columns by allowing for simple copy and paste, it's also useful if you need to understand why you are receiving a KeyError when selecting data by column.

We can use the .rename() method to rename certain or all columns via a dict. We don't want parentheses, so let's rename those:

movies_df.rename(columns={
        'Runtime (Minutes)': 'Runtime',
        'Revenue (Millions)': 'Revenue_millions'
    }, inplace=True)

movies_df.columns

Out:

Index(['Rank', 'Genre', 'Description', 'Director', 'Actors', 'Year', 'Runtime', 'Rating', 'Votes', 'Revenue_millions', 'Metascore'], dtype='object')

Excellent. But what if we want to lowercase all names? Instead of using .rename() we could also set a list of names to the columns like so:

movies_df.columns = ['rank', 'genre', 'description', 'director', 'actors', 'year', 'runtime',
                     'rating', 'votes', 'revenue_millions', 'metascore']

movies_df.columns

Out:

Index(['rank', 'genre', 'description', 'director', 'actors', 'year', 'runtime', 'rating', 'votes', 'revenue_millions', 'metascore'], dtype='object')

But that's too much work. Instead of just renaming each column manually we can do a list comprehension:

movies_df.columns = [col.lower() for col in movies_df]

movies_df.columns

Out:

Index(['rank', 'genre', 'description', 'director', 'actors', 'year', 'runtime', 'rating', 'votes', 'revenue_millions', 'metascore'], dtype='object')

list (and dict) comprehensions come in handy a lot when working with pandas and data in general.

It's a good idea to lowercase, remove special characters, and replace spaces with underscores if you'll be working with a dataset for some time, as in the sketch below.
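
pandas' string methods can do all three of those steps in one pass over the column index. A minimal sketch, assuming the original spaced, mixed-case names:

movies_df.columns = (movies_df.columns
                     .str.lower()                              # lowercase everything
                     .str.replace(r'[^\w\s]', '', regex=True)  # strip special characters
                     .str.replace(' ', '_'))                   # spaces to underscores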

How to work with missing values

When exploring data, you'll most likely encounter missing or null values, which are essentially placeholders for non-existent values. Most commonly you'll see Python's None or NumPy's np.nan, each of which is handled differently in some situations.

There are two options in dealing with nulls:

  1. Get rid of rows or columns with nulls
  2. Replace nulls with non-null values, a technique known as imputation

Let's calculate the total number of nulls in each column of our dataset. The first step is to check which cells in our DataFrame are null:

movies_df.isnull()

Out:

                          rank  genre  description  director  actors   year  runtime  rating  votes  revenue_millions  metascore
Title
Guardians of the Galaxy  False  False        False     False   False  False    False   False  False             False      False
Prometheus               False  False        False     False   False  False    False   False  False             False      False
Split                    False  False        False     False   False  False    False   False  False             False      False
Sing                     False  False        False     False   False  False    False   False  False             False      False
Suicide Squad            False  False        False     False   False  False    False   False  False             False      False

Notice isnull() returns a DataFrame where each cell is either True or False depending on that cell's null status.

To count the number of nulls in each column we use an aggregate function for summing:

movies_df.isnull().sum()

Out:

rank                  0
genre                 0
description           0
director              0
actors                0
year                  0
runtime               0
rating                0
votes                 0
revenue_millions    128
metascore            64
dtype: int64

.isnull() just by itself isn't very useful; it's usually used in conjunction with other methods, like sum().

We can see now that our data has 128 missing values for revenue_millions and 64 missing values for metascore.

Removing null values

Data Scientists and Analysts regularly face the dilemma of dropping or imputing null values, and it is a decision that requires intimate knowledge of your data and its context. Overall, removing null data is only suggested if you have a small amount of missing data.

Removing nulls is pretty simple:

movies_df.dropna()

This operation will delete any row with at least a single null value, but it will return a new DataFrame without altering the original one. You could specify inplace=True in this method as well.

So in the case of our dataset, this operation would remove 128 rows where revenue_millions is null and 64 rows where metascore is null. This obviously seems like a waste since there's perfectly good data in the other columns of those dropped rows. That's why we'll look at imputation next.

Other than just dropping rows, you can also drop columns with null values by setting axis=1:

movies_df.dropna(axis=1)

In our dataset, this operation would drop the revenue_millions and metascore columns.

Intuition

What's with this axis=1 parameter?

It's not immediately obvious where axis comes from and why you need it to be 1 for it to affect columns. To see why, just look at the .shape output:

movies_df.shape

Out: (1000, 11)

As we learned above, this is a tuple that represents the shape of the DataFrame, i.e. 1000 rows and 11 columns. Note that the rows are at index zero of this tuple and columns are at index one of this tuple. This is why axis=1 affects columns. This comes from NumPy, and is a great example of why learning NumPy is worth your time.

Imputation

Imputation is a conventional feature engineering technique used to keep valuable data that have null values.

There may be instances where dropping every row with a null value removes too big a chunk from your dataset, so instead we can impute that null with another value, usually the mean or the median of that column.

Let's look at imputing the missing values in the revenue_millions column. First we'll extract that column into its own variable:

revenue = movies_df['revenue_millions']

Using square brackets is the general way we select columns in a DataFrame.

If you remember back to when we created DataFrames from scratch, the keys of the dict ended up as column names. Now when we select columns of a DataFrame, we use brackets just like if we were accessing a Python dictionary.

revenue now contains a Series:

revenue.head()

Out:

Title
Guardians of the Galaxy    333.13
Prometheus                 126.46
Split                      138.12
Sing                       270.32
Suicide Squad              325.02
Name: revenue_millions, dtype: float64

Slightly different formatting than a DataFrame, but we still have our Title index.

We'll impute the missing values of revenue using the mean. Here's the mean value:

revenue_mean = revenue.mean()

revenue_mean

Out:

82.95637614678897

With the mean, let's fill the nulls using fillna():

revenue.fillna(revenue_mean, inplace=True)

We have now replaced all nulls in revenue with the mean of the column. Notice that by using inplace=True we have actually affected the original movies_df. (Newer versions of pandas warn about modifying a column extracted this way; assigning the result back, e.g. movies_df['revenue_millions'] = revenue.fillna(revenue_mean), is the more future-proof pattern.)

movies_df.isnull().sum()

Out:

rank                  0
genre                 0
description           0
director              0
actors                0
year                  0
runtime               0
rating                0
votes                 0
revenue_millions      0
metascore            64
dtype: int64

Imputing an entire column with the same value like this is a basic example. It would be a better idea to try a more granular imputation by Genre or Director.

For example, you would find the mean of the revenue generated in each genre individually and impute the nulls in each genre with that genre's mean, as in the sketch below.
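
A sketch of that per-genre approach using groupby() and transform(), which computes each group's mean and lines it back up with the original rows:

genre_means = movies_df.groupby('genre')['revenue_millions'].transform('mean')

movies_df['revenue_millions'] = movies_df['revenue_millions'].fillna(genre_means)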

Let's now look at more ways to examine and understand the dataset.

Understanding your variables

Using describe() on an entire DataFrame we can get a summary of the distribution of continuous variables:

movies_df.describe()

Out:

              rank         year      runtime       rating         votes  revenue_millions   metascore
count  1000.000000  1000.000000  1000.000000  1000.000000  1.000000e+03       1000.000000  936.000000
mean    500.500000  2012.783000   113.172000     6.723200  1.698083e+05         82.956376   58.985043
std     288.819436     3.205962    18.810908     0.945429  1.887626e+05         96.412043   17.194757
min       1.000000  2006.000000    66.000000     1.900000  6.100000e+01          0.000000   11.000000
25%     250.750000  2010.000000   100.000000     6.200000  3.630900e+04         17.442500   47.000000
50%     500.500000  2014.000000   111.000000     6.800000  1.107990e+05         60.375000   59.500000
75%     750.250000  2016.000000   123.000000     7.400000  2.399098e+05         99.177500   72.000000
max    1000.000000  2016.000000   191.000000     9.000000  1.791916e+06        936.630000  100.000000

Understanding which numbers are continuous also comes in handy when thinking about the type of plot to use to represent your data visually.

.describe() can also be used on a categorical variable to get the count of rows, unique count of categories, top category, and freq of top category:

movies_df['genre'].describe()

Out:

count                        1000
unique                        207
top       Action,Adventure,Sci-Fi
freq                           50
Name: genre, dtype: object

This tells us that the genre column has 207 unique values, and that the top value, Action,Adventure,Sci-Fi, shows up 50 times (freq).

.value_counts() can tell us the frequency of all values in a column:

movies_df['genre'].value_counts().head(10)

Out:

Action,Adventure,Sci-Fi       50
Drama                         48
Comedy,Drama,Romance          35
Comedy                        32
Drama,Romance                 31
Action,Adventure,Fantasy      27
Comedy,Drama                  27
Animation,Adventure,Comedy    27
Comedy,Romance                26
Crime,Drama,Thriller          24
Name: genre, dtype: int64

Relationships between continuous variables

By using the correlation method .corr() we can generate the relationship between each continuous variable (in recent versions of pandas you may need to pass numeric_only=True so non-numeric columns are skipped):

movies_df.corr()
Out:

                      rank      year   runtime    rating     votes  revenue_millions  metascore
rank              1.000000 -0.261605 -0.221739 -0.219555 -0.283876         -0.252996  -0.191869
year             -0.261605  1.000000 -0.164900 -0.211219 -0.411904         -0.117562  -0.079305
runtime          -0.221739 -0.164900  1.000000  0.392214  0.407062          0.247834   0.211978
rating           -0.219555 -0.211219  0.392214  1.000000  0.511537          0.189527   0.631897
votes            -0.283876 -0.411904  0.407062  0.511537  1.000000          0.607941   0.325684
revenue_millions -0.252996 -0.117562  0.247834  0.189527  0.607941          1.000000   0.133328
metascore        -0.191869 -0.079305  0.211978  0.631897  0.325684          0.133328   1.000000

Correlation tables are a numerical representation of the bivariate relationships in the dataset.

Positive numbers indicate a positive correlation — one goes up the other goes up — and negative numbers represent an inverse correlation — one goes up the other goes down. 1.0 indicates a perfect correlation.

So looking in the first row, first column we see rank has a perfect correlation with itself, which is obvious. On the other hand, the correlation between votes and revenue_millions is 0.6. A little more interesting.

Examining bivariate relationships comes in handy when you have an outcome or dependent variable in mind and would like to see the features most correlated to the increase or decrease of the outcome. You can visually represent bivariate relationships with scatterplots (seen below in the plotting section).

For a deeper look into data summarizations check out Essential Statistics for Data Science.

Let's now look more at manipulating DataFrames.

DataFrame slicing, selecting, extracting

Up until now we've focused on some basic summaries of our data. We've learned about simple column extraction using single brackets, and we imputed null values in a column using fillna(). Below are the other methods of slicing, selecting, and extracting you'll need to use constantly.

It's important to note that, although many methods are the same, DataFrames and Series have different attributes, so you'll need to be sure you know which type you are working with or else you will receive attribute errors.

Let's look at working with columns first.

By column

You already saw how to extract a column using square brackets like this:

genre_col = movies_df['genre']

type(genre_col)

Out:

pandas.core.series.Series

This will return a Series. To extract a column as a DataFrame, you need to pass a list of column names. In our case that's just a single column:

genre_col = movies_df[['genre']]

type(genre_col)

Out:

pandas.core.frame.DataFrame

Since it's just a list, adding another column name is easy:

subset = movies_df[['genre', 'rating']]

subset.head()

Out:

                                            genre  rating
Title
Guardians of the Galaxy   Action,Adventure,Sci-Fi     8.1
Prometheus               Adventure,Mystery,Sci-Fi     7.0
Split                             Horror,Thriller     7.3
Sing                      Animation,Comedy,Family     7.2
Suicide Squad            Action,Adventure,Fantasy     6.2

Now we'll look at getting data by rows.

By rows

For rows, we have two options:

  • .loc - locates by name
  • .iloc - locates by numerical index

Remember that we are still indexed by movie Title, so to use .loc we give it the Title of a movie:

prom = movies_df.loc["Prometheus"]prom
Learn Data Science with

Out:

rank                                                                2
genre                                        Adventure,Mystery,Sci-Fi
description         Following clues to the origin of mankind, a te...
director                                                 Ridley Scott
actors              Noomi Rapace, Logan Marshall-Green, Michael Fa...
year                                                             2012
runtime                                                           124
rating                                                              7
votes                                                          485820
revenue_millions                                               126.46
metascore                                                          65
Name: Prometheus, dtype: object

On the other hand, with iloc we give it the numerical index of Prometheus:

prom = movies_df.iloc[1]

loc and iloc can be thought of as similar to Python list slicing. To show this even further, let's select multiple rows.

How would you do it with a list? In Python, just slice with brackets like example_list[1:4]. It works the same way in pandas:

movie_subset = movies_df.loc['Prometheus':'Sing']

movie_subset = movies_df.iloc[1:4]

movie_subset

Out:

            rank                     genre                                        description              director                                             actors  year  runtime  rating   votes  revenue_millions  metascore
Title
Prometheus     2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...          Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012      124     7.0  485820            126.46       65.0
Split          3           Horror,Thriller  Three girls are kidnapped by a man with a diag...    M. Night Shyamalan  James McAvoy, Anya Taylor-Joy, Haley Lu Richar...  2016      117     7.3  157606            138.12       62.0
Sing           4   Animation,Comedy,Family  In a city of humanoid animals, a hustling thea...  Christophe Lourdelet  Matthew McConaughey,Reese Witherspoon, Seth Ma...  2016      108     7.2   60545            270.32       59.0

One important distinction between using .loc and .iloc to select multiple rows is that .loc includes the movie Sing in the result, whereas .iloc gets rows 1:4 and the movie at index 4 (Suicide Squad) is not included.

Slicing with .iloc follows the same rules as slicing with lists: the object at the ending index is not included.

Conditional selections

We’ve gone over how to select columns and rows, but what if we want to make a conditional selection?

For example, what if we want to filter our movies DataFrame to show only films directed by Ridley Scott or films with a rating greater than or equal to 8.0?

To do that, we take a column from the DataFrame and apply a Boolean condition to it. Here's an example of a Boolean condition:

condition = (movies_df['director'] == "Ridley Scott")

condition.head()

Out:

Title
Guardians of the Galaxy    False
Prometheus                  True
Split                      False
Sing                       False
Suicide Squad              False
Name: director, dtype: bool

Similar to isnull(), this returns a Series of True and False values: True for films directed by Ridley Scott and False for ones not directed by him.

We want to filter out all movies not directed by Ridley Scott, in other words, we don’t want the False films. To return the rows where that condition is True we have to pass this operation into the DataFrame:

movies_df[movies_df['director'] == "Ridley Scott"]

Out:

                        rank                     genre                                        description      director                                             actors  year  runtime  rating   votes  revenue_millions  metascore rating_category
Title
Prometheus                 2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...  Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012      124     7.0  485820            126.46       65.0             bad
The Martian              103    Adventure,Drama,Sci-Fi  An astronaut becomes stranded on Mars after hi...  Ridley Scott  Matt Damon, Jessica Chastain, Kristen Wiig, Ka...  2015      144     8.0  556097            228.43       80.0            good
Robin Hood               388    Action,Adventure,Drama  In 12th century England, Robin and his band of...  Ridley Scott  Russell Crowe, Cate Blanchett, Matthew Macfady...  2010      140     6.7  221117            105.22       53.0             bad
American Gangster        471     Biography,Crime,Drama  In 1970s America, a detective works to bring d...  Ridley Scott  Denzel Washington, Russell Crowe, Chiwetel Eji...  2007      157     7.8  337835            130.13       76.0             bad
Exodus: Gods and Kings   517    Action,Adventure,Drama  The defiant leader Moses rises up against the ...  Ridley Scott  Christian Bale, Joel Edgerton, Ben Kingsley, S...  2014      150     6.0  137299             65.01       52.0             bad

You can get used to looking at these conditionals by reading them like:

Select movies_df where movies_df director equals Ridley Scott.

Let's look at conditional selections using numerical values by filtering the DataFrame by ratings:

movies_df[movies_df['rating'] >= 8.6].head(3)

Out:

                 rank                    genre                                        description           director                                             actors  year  runtime  rating    votes  revenue_millions  metascore
Title
Interstellar       37   Adventure,Drama,Sci-Fi  A team of explorers travel through a wormhole ...  Christopher Nolan  Matthew McConaughey, Anne Hathaway, Jessica Ch...  2014      169     8.6  1047747            187.99       74.0
The Dark Knight    55       Action,Crime,Drama  When the menace known as the Joker wreaks havo...  Christopher Nolan  Christian Bale, Heath Ledger, Aaron Eckhart,Mi...  2008      152     9.0  1791916            533.32       82.0
Inception          81  Action,Adventure,Sci-Fi  A thief, who steals corporate secrets through ...  Christopher Nolan  Leonardo DiCaprio, Joseph Gordon-Levitt, Ellen...  2010      148     8.8  1583625            292.57       74.0

We can make some richer conditionals by using logical operators | for "or" and & for "and".

Let's filter the DataFrame to show only movies by Christopher Nolan OR Ridley Scott:

movies_df[(movies_df['director'] == 'Christopher Nolan') | (movies_df['director'] == 'Ridley Scott')].head()

Out:

                 rank                     genre                                        description           director                                             actors  year  runtime  rating    votes  revenue_millions  metascore
Title
Prometheus          2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...       Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012      124     7.0   485820            126.46       65.0
Interstellar       37    Adventure,Drama,Sci-Fi  A team of explorers travel through a wormhole ...  Christopher Nolan  Matthew McConaughey, Anne Hathaway, Jessica Ch...  2014      169     8.6  1047747            187.99       74.0
The Dark Knight    55        Action,Crime,Drama  When the menace known as the Joker wreaks havo...  Christopher Nolan  Christian Bale, Heath Ledger, Aaron Eckhart,Mi...  2008      152     9.0  1791916            533.32       82.0
The Prestige       65      Drama,Mystery,Sci-Fi  Two stage magicians engage in competitive one-...  Christopher Nolan  Christian Bale, Hugh Jackman, Scarlett Johanss...  2006      130     8.5   913152             53.08       66.0
Inception          81   Action,Adventure,Sci-Fi  A thief, who steals corporate secrets through ...  Christopher Nolan  Leonardo DiCaprio, Joseph Gordon-Levitt, Ellen...  2010      148     8.8  1583625            292.57       74.0

We need to make sure to group evaluations with parentheses so Python knows how to evaluate the conditional.

Using the isin() method we could make this more concise though:

movies_df[movies_df['director'].isin(['Christopher Nolan', 'Ridley Scott'])].head()

Out:

                 rank                     genre                                        description           director                                             actors  year  runtime  rating    votes  revenue_millions  metascore
Title
Prometheus          2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...       Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012      124     7.0   485820            126.46       65.0
Interstellar       37    Adventure,Drama,Sci-Fi  A team of explorers travel through a wormhole ...  Christopher Nolan  Matthew McConaughey, Anne Hathaway, Jessica Ch...  2014      169     8.6  1047747            187.99       74.0
The Dark Knight    55        Action,Crime,Drama  When the menace known as the Joker wreaks havo...  Christopher Nolan  Christian Bale, Heath Ledger, Aaron Eckhart,Mi...  2008      152     9.0  1791916            533.32       82.0
The Prestige       65      Drama,Mystery,Sci-Fi  Two stage magicians engage in competitive one-...  Christopher Nolan  Christian Bale, Hugh Jackman, Scarlett Johanss...  2006      130     8.5   913152             53.08       66.0
Inception          81   Action,Adventure,Sci-Fi  A thief, who steals corporate secrets through ...  Christopher Nolan  Leonardo DiCaprio, Joseph Gordon-Levitt, Ellen...  2010      148     8.8  1583625            292.57       74.0

Let's say we want all movies that were released between 2005 and 2010, have a rating above 8.0, but made below the 25th percentile in revenue.

Here's how we could do all of that:

movies_df[
    ((movies_df['year'] >= 2005) & (movies_df['year'] <= 2010))
    & (movies_df['rating'] > 8.0)
    & (movies_df['revenue_millions'] < movies_df['revenue_millions'].quantile(0.25))
]

Out:

                     rank               genre                                        description                          director                                             actors  year  runtime  rating   votes  revenue_millions  metascore
Title
3 Idiots              431        Comedy,Drama  Two friends are searching for their long lost ...                   Rajkumar Hirani    Aamir Khan, Madhavan, Mona Singh, Sharman Joshi  2009      170     8.4  238789              6.52       67.0
The Lives of Others   477      Drama,Thriller  In 1984 East Berlin, an agent of the secret po...  Florian Henckel von Donnersmarck  Ulrich Mühe, Martina Gedeck,Sebastian Koch, Ul...  2006      137     8.5  278103             11.28       89.0
Incendies             714   Drama,Mystery,War  Twins journey to the Middle East to discover t...                  Denis Villeneuve  Lubna Azabal, Mélissa Désormeaux-Poulin, Maxim...  2010      131     8.2   92863              6.86       80.0
Taare Zameen Par      992  Drama,Family,Music  An eight-year-old boy is thought to be a lazy ...                        Aamir Khan  Darsheel Safary, Aamir Khan, Tanay Chheda, Sac...  2007      165     8.5  102697              1.20       42.0

If you recall from when we used .describe(), the 25th percentile for revenue was about 17.4, and we can access this value directly by using the quantile() method with a float of 0.25.
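
That lookup on its own, as a quick sketch:

movies_df['revenue_millions'].quantile(0.25)  # about 17.44 for this dataset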

So here we have only four movies that match that criteria.

Applying functions

It is possible to iterate over a DataFrame or Series as you would with a list, but doing so — especially on large datasets — is very slow.

An efficient alternative is to apply() a function to the dataset. For example, we could use a function to convert movies with an 8.0 or greater rating to a string value of "good" and the rest to "bad", and use these transformed values to create a new column.

First we would create a function that, when given a rating, determines if it's good or bad:

def rating_function(x):
    if x >= 8.0:
        return "good"
    else:
        return "bad"

Now we want to send the entire rating column through this function, which is what apply() does:

movies_df["rating_category"] = movies_df["rating"].apply(rating_function)movies_df.head(2)
Learn Data Science with

Out:

                         rank                     genre                                        description      director                                             actors  year  runtime  rating   votes  revenue_millions  metascore rating_category
Title
Guardians of the Galaxy     1   Action,Adventure,Sci-Fi  A group of intergalactic criminals are forced ...    James Gunn  Chris Pratt, Vin Diesel, Bradley Cooper, Zoe S...  2014      121     8.1  757074            333.13       76.0            good
Prometheus                  2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...  Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012      124     7.0  485820            126.46       65.0             bad

The .apply() method passes every value in the rating column through the rating_function and then returns a new Series. This Series is then assigned to a new column called rating_category.

You can also use anonymous functions. This lambda function achieves the same result as rating_function:

movies_df["rating_category"] = movies_df["rating"].apply(lambda x: 'good' if x >= 8.0 else 'bad')movies_df.head(2)
Learn Data Science with

Out:

                         rank                     genre                                        description      director                                             actors  year  runtime  rating   votes  revenue_millions  metascore rating_category
Title
Guardians of the Galaxy     1   Action,Adventure,Sci-Fi  A group of intergalactic criminals are forced ...    James Gunn  Chris Pratt, Vin Diesel, Bradley Cooper, Zoe S...  2014      121     8.1  757074            333.13       76.0            good
Prometheus                  2  Adventure,Mystery,Sci-Fi  Following clues to the origin of mankind, a te...  Ridley Scott  Noomi Rapace, Logan Marshall-Green, Michael Fa...  2012      124     7.0  485820            126.46       65.0             bad

Overall, using apply() will be much faster than iterating manually over rows. For simple element-wise logic like this, fully vectorized operations, where the computation is applied to the whole column at once, are faster still, as sketched below the quote.

Vectorization: a style of computer programming where operations are applied to whole arrays instead of individual elements —Wikipedia
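
For a simple two-way split like good/bad, a fully vectorized version of the rating_category column can be written with NumPy's where(); a quick sketch:

import numpy as np

# Same result as the apply() versions above, computed column-at-a-time
movies_df["rating_category"] = np.where(movies_df["rating"] >= 8.0, "good", "bad")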

A good example of high usage of apply() is during natural language processing (NLP) work. You'll need to apply all sorts of text cleaning functions to strings to prepare for machine learning.
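
As a small sketch of that kind of text preparation (the cleaning rules here are just illustrative, not a prescribed NLP pipeline):

import string

def clean_text(text):
    # lowercase and strip punctuation characters
    return text.lower().translate(str.maketrans('', '', string.punctuation))

movies_df['description'] = movies_df['description'].apply(clean_text)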

Brief Plotting

Another great thing about pandas is that it integrates with Matplotlib, so you get the ability to plot directly off DataFrames and Series. To get started we need to import Matplotlib (pip install matplotlib):

import matplotlib.pyplot as plt

plt.rcParams.update({'font.size': 20, 'figure.figsize': (10, 8)})  # set font and plot size to be larger

Now we can begin. There won't be a lot of coverage on plotting, but it should be enough to explore your data easily.

Plotting Tip

For categorical variables, utilize Bar Charts and Boxplots.

For continuous variables, utilize Histograms, Scatterplots, Line graphs, and Boxplots.

Let's plot the relationship between ratings and revenue. All we need to do is call .plot() on movies_df with some info about how to construct the plot:

movies_df.plot(kind='scatter', x='rating', y='revenue_millions', title='Revenue (millions) vs Rating');

RESULT:

[Scatter plot: Revenue (millions) vs Rating]

What's with the semicolon? It's not a syntax error, just a way to hide the <matplotlib.axes._subplots.AxesSubplot at 0x26613b5cc18> output when plotting in Jupyter notebooks.

If we want to plot a simple Histogram based on a single column, we can call plot on a column:

movies_df['rating'].plot(kind='hist', title='Rating');

RESULT:

[Histogram of the rating column]

Do you remember the .describe() example at the beginning of this tutorial? Well, there's a graphical representation of the interquartile range, called the Boxplot. Let's recall what describe() gives us on the ratings column:

movies_df['rating'].describe()

Out:

count    1000.000000
mean        6.723200
std         0.945429
min         1.900000
25%         6.200000
50%         6.800000
75%         7.400000
max         9.000000
Name: rating, dtype: float64

Using a Boxplot we can visualize this data:

movies_df['rating'].plot(kind="box");

RESULT:

[Boxplot of the rating column]

By combining categorical and continuous data, we can create a Boxplot of revenue that is grouped by the Rating Category we created above:

movies_df.boxplot(column='revenue_millions', by='rating_category');

RESULT:

[Boxplot of revenue_millions grouped by rating_category]

That's the general idea of plotting with pandas. There are too many plots to mention, so definitely take a look at the plot() docs for more information on what it can do.

Wrapping up

Exploring, cleaning, transforming, and visualizing data with pandas in Python is an essential skill in data science. Just cleaning and wrangling data is 80% of your job as a Data Scientist. After a few projects and some practice, you should be very comfortable with most of the basics.

To keep improving, view the extensive tutorials offered by the official pandas docs, follow along with a few Kaggle kernels, and keep working on your own projects!

Resources

Applied Data Science with Python — Coursera

Covers an intro to Python, Visualization, Machine Learning, Text Mining, and Social Network Analysis in Python. Also provides many challenging quizzes and assignments to further enhance your learning.

Complete SQL Bootcamp — Udemy

An excellent course for learning SQL. The instructor explains everything from beginner to advanced SQL queries and techniques, and provides many exercises to help you learn.
