Introduction

Hi and welcome to what will be my first frog blog! I’m Lewis Prince, a new addition to the Purple Frog team who has come on board as a Machine Learning Developer. My skill set lies mainly in Data Science and Statistics, and in using Python and R to apply them. My blogs will therefore mainly offer hints and tips on performing Data Science and Statistics through the medium of Python (and possibly R).

I thought I would start my blog with what I feel is a very important library for anyone who wants to perform data manipulation and/or analysis on large datasets in Python: Pandas! In this blog post I will first go over what Pandas is and why you should be using it, and then introduce and show you how to use some of the most useful commands and features of the library.

For reference, I will show you how to create a pandas data frame, add data to it (append), index it, get unique values, drop duplicate rows and values, filter the data frame, ascertain its shape and size, and finally how to change the data types of and within your data frame.

What is Pandas and why should you use it?

Pandas is an open source library for Python used for data analysis, cleaning, exploration and manipulation, especially of numerical tables and time series. Its main power comes from the fact that it can perform all the ‘major steps’ needed to process and analyse data:

  • Load
  • Manipulate
  • Prepare
  • Model
  • Analyse

It does this by supporting the import of various file types, such as CSV, XLSX, XML, JSON and ZIP to name a few, as well as allowing you to perform data manipulations such as group by, join, merge and melt, and data cleaning operations such as replacements or imputation. This combination is where the power of pandas lies: the ease with which you can perform these operations on a Pandas data frame compared to performing them in ‘base’ Python. I hope to show this in the later sections of this blog post, where I go through the functionality and commands of Pandas which I believe are worth knowing.
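As a quick taste of that, the sketch below (using a small made-up dataset, so the names are purely illustrative) shows how a group-by and aggregation reads in pandas:

import pandas as pd

# A small made-up dataset purely for illustration
sales = pd.DataFrame({
    'Region': ['North', 'North', 'South', 'South'],
    'Sales': [100, 150, 200, 50]})

# Group the rows by region and sum the sales within each group
print(sales.groupby('Region')['Sales'].sum())

Achieving the same in ‘base’ Python would mean looping over the rows and maintaining a running total per region yourself.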

The cherry on top of the cake that is Pandas is the streamlined data representation that Pandas allows with its tabular output, which enables users to examine and interpret their data in a more effective manner. An example of this can be seen below through the use of a csv file containing a random dataset on children’s behaviour and whether they got on Santa’s good list:

[Image: the data printed row by row, without pandas]
[Image: the same data displayed as a pandas data frame]

The code below, which produced the output above, shows how much easier it is to import and read files programmatically in pandas (non-pandas code first, then the pandas code):

import csv

# Standard library approach: open the file and print it row by row
with open('Naughty list.csv', 'r') as file:
     reader = csv.reader(file)
     for row in reader:
          print(row)

import pandas as pd

# Pandas approach: read the whole file into a data frame in one line
df = pd.read_csv(r'Naughty list.csv')
print(df)

To summarise this section: you should be using pandas because it allows you to do a lot more with less code, and it produces output which is easy to read and familiar to people from many backgrounds.

Important commands and functionality

Now that I’ve explained why you should be using Pandas, I’ll go into what I think are the fundamentals of using it. It should be noted that all code examples presume you have already imported pandas as pd, like below, in whatever Python environment you are using:

import pandas as pd

How to set up a Pandas data frame (and then append)

There are three ways to do this, depending on whether you are importing the data from an external source, using a data source already in your environment, or adding to the pandas data frame via a process (a for loop, for example). The first of these methods can be completed in one line (changing the function and name for the file and file type you are using):

df = pd.read_csv(r'whatever_your_file_is.csv')

The resulting data frame is then usable and readable using functions such as print() or the .head() method.
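The other file types mentioned earlier follow the same pattern; a couple of sketches (the file names here are hypothetical, and read_excel needs an engine such as openpyxl installed):

# Same one-line pattern for other file types
df_excel = pd.read_excel('whatever_your_file_is.xlsx')
df_json = pd.read_json('whatever_your_file_is.json')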

If you already have pre-existing data that you wish to convert to a pandas data frame then there is slightly more manipulation needed, as well as the need to create a Python dictionary (if you’re not familiar with these then check out this quick guide https://www.w3schools.com/python/python_dictionaries.asp):

Data = {'Column1' : SourceData[:,0], 'Column2' : SourceData[:,1], 'Column3' : SourceData[:,2]}

PandaData = pd.DataFrame(Data)

In the above we have set up a dictionary in which we state the column names that we will be using in the data frame, and allocate the relevant data from your pre-existing source to each of these column headers. This dictionary is then passed to the pandas DataFrame command to create the data frame.
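As a self-contained sketch of this method (here I have made up a small numpy array to stand in for your pre-existing source data):

import numpy as np
import pandas as pd

# A made-up 2D array standing in for your pre-existing data
SourceData = np.array([[1, 2, 3],
                       [4, 5, 6]])

# Map each column of the array to a named column in the dictionary
Data = {'Column1': SourceData[:, 0],
        'Column2': SourceData[:, 1],
        'Column3': SourceData[:, 2]}

PandaData = pd.DataFrame(Data)
print(PandaData)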

Finally, if you want to set up a blank pandas data frame you can do that using the example below:

PandaData = pd.DataFrame(columns = ['Column1', 'Column2', 'Column3', 'Column4'])

This command will set up an empty data frame with the column headings you specify, to which you can then append data row by row or in bulk. I will explain how to do this in the next section.

How to append to a Pandas data frame

So you’ve got your Pandas data frame set up, but you want to add data to it. The process is the same whether you want to add just one row or many (as long as every column you are adding has the same number of values, and you supply a value for each column). Once again we will use a dictionary to collate the data and then append it to the data frame like below:

New_Row = {'Column1': Data, 'Column2': Data, 'Column3': Data}

PandaData = PandaData.append(New_Row, ignore_index = True)

You should note the ‘ignore_index’ parameter being set to ‘True’ in the above code. This ensures that you have a continuous index in your data frame; if it were turned off, appended rows would carry over the index they had in their source location, which could leave you with a non-continuous index. There may be situations where you want that, though, for example to keep a record of where in the source data each row came from.
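One thing worth flagging: the append() method was deprecated in pandas 1.4 and removed entirely in pandas 2.0, so if you are on a newer version you would achieve the same thing with pd.concat(), along these lines:

# Wrap the new row in a one-row data frame, then concatenate;
# ignore_index=True renumbers the index just as it did with append()
PandaData = pd.concat([PandaData, pd.DataFrame([New_Row])], ignore_index = True)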

How to index a Pandas data frame

There are three separate ways to index a Pandas data frame. The first two are very similar and use the column names you have set up within your data frame, while the third is how you would index individual cells and rows using the more recognisable bracket indexing. Here are the first two:

PandaData.Column1

PandaData['Column1']

The above would bring back all the data within the specified column. The third option is the pandas version of bracket indexing, which has to be performed using the .iloc command (integer-based location). It can be used as follows:

PandaData.iloc[0,1]

The above will return you the value located in the first row of the second column of the data frame, which highlights two important points about how the function works. The first is that the first number in the square brackets denotes the row location and the second number the column location; the second is that index numbering starts from 0, so rows 1, 2 and 3 are denoted 0, 1 and 2. Further uses of the command can bring you all the data in a column or row:

PandaData.iloc[:,1] will bring back everything in the second column

PandaData.iloc[0,:] will bring back everything in the first row

You can also combine all three methods to bring out certain elements of a column:

PandaData.Column1.iloc[1]

PandaData['Column1'].iloc[1]

Both of the above will bring you the 2nd element of ‘Column1’.
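As an aside, .iloc has a label-based counterpart, .loc, which uses the index labels and column names rather than integer positions:

PandaData.loc[0, 'Column1']   # the value at index label 0 in 'Column1'
PandaData.loc[:, 'Column1']   # the whole 'Column1' column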

Getting unique values and dropping duplicates

.unique() and .drop_duplicates() are two fairly useful functions for data cleaning and exploration. They do similar but distinctly different things, returning you an array of unique values or a data frame of unique rows respectively. The unique() function takes a one-dimensional set of values, be it an index, a set of categorical variables or a series, and outputs a numpy array of the same type in order of appearance (it will not sort them). For example:

pd.unique(list('asdasda'))

Would return:

array(['a', 's', 'd'], dtype=object)

This function is the one to use for getting unique values from the columns/rows within your pandas data frames, but it can only be used on one-dimensional subsections of the data.
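In practice you would usually call it on a single column. Using the PandaData frame we build just below, for example:

# Unique values from one column, in order of appearance
PandaData['Column1'].unique()

This would return array(['England', 'Scotland', 'Ireland'], dtype=object).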

If you want to get all unique rows in a data frame then you can use .drop_duplicates(). This function takes an input of a data frame and outputs another data frame with all duplicate rows removed. For this example we will need to add some data into our data frame first:

PandaData = pd.DataFrame({
    'Column1': ['England','England','England', 'Scotland', 'Ireland'],
    'Column2': ['cup','cup', 'ball', 'square', 'bowl'],
    'Column3': [4, 4, 3.5, 15, 5]})

This will give the data frame as follows:

   Column1  Column2 Column3
0  England  cup     4.0
1  England  cup     4.0
2  England  ball    3.5
3  Scotland square  15.0
4  Ireland  bowl    5.0

You can then use the following code to remove duplicate rows:

PandaData.drop_duplicates()
   Column1  Column2 Column3
0  England  cup     4.0
2  England  ball    3.5
3  Scotland square  15.0
4  Ireland  bowl    5.0

This will give you the unique whole rows, so you may still have duplicates within individual columns.

You can apply this to certain columns within a data frame as well. The output of this will still be a data frame, but rows will be removed to leave only the first instance of each value within the designated column.

Duplicates within a specific column can then be dropped using the following code (note that here I have carried on from the de-duplicated data frame above, with its index reset):

PandaData.drop_duplicates(subset=['Column1'])

Which gives the following table:

   Column1  Column2 Column3
0  England  cup     4.0
2  Scotland square  15.0
3  Ireland  bowl    5.0
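It is also worth knowing that drop_duplicates() takes a keep parameter, which controls which of the duplicated rows survives:

# Keep the last occurrence of each value instead of the first
PandaData.drop_duplicates(subset=['Column1'], keep='last')

# Drop every row that has a duplicate, keeping none of them
PandaData.drop_duplicates(subset=['Column1'], keep=False)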

Filtering a data frame

Filtering a dataset to get what you want is somewhat of an essential skill in data exploration and analysis, and something which is simplified in the Pandas data frame. I will firstly show the simplest form of filtering, which uses your basic comparison operators (<, >, == and != for example). Let’s take the table we used above (without the duplicate row) and filter it to just bring back rows which involve Scotland:

PandaData[PandaData.Column1 == 'Scotland']

Note the use of ‘==’ as opposed to ‘=’. Those familiar with Python will already know the reasoning for this: ‘==’ denotes a check of equality, whereas ‘=’ is used for assigning values to variables.

   Column1  Column2 Column3
2  Scotland square  15.0

You can add multiple conditions to this, applying to different columns or the same one, but each condition needs to be in round brackets, with an ‘&’ symbol between each condition if using more than one:

PandaData[(PandaData.Column1 == 'England') & (PandaData.Column3 < 3.9)]

This will bring back the below:

   Column1  Column2 Column3
1  England  ball    3.5
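If you want an ‘or’ rather than an ‘and’, the same pattern works with the ‘|’ symbol between the bracketed conditions:

# Rows where Column1 is Scotland OR Column3 is below 3.9
PandaData[(PandaData.Column1 == 'Scotland') | (PandaData.Column3 < 3.9)]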

If you are filtering on several values in the same column, you can also consolidate your conditions into one by using the isin() function and supplying a list of values, which can be seen below:

PandaData[PandaData.Column1.isin(['Scotland','Ireland'])]

This gives the below output:

   Column1  Column2 Column3
2  Scotland square  15.0
3  Ireland  bowl    5.0
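You can also invert an isin() filter with the ‘~’ symbol, to bring back everything not in the list:

# All rows whose Column1 is neither Scotland nor Ireland
PandaData[~PandaData.Column1.isin(['Scotland', 'Ireland'])]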

Getting the size of a Pandas data frame

Sometimes it is useful to know the size of a data frame you have, whether for validation purposes or for analytic work where your data frame changes size dynamically through your scripts and its size is needed elsewhere. Pandas has two attributes, ‘size’ and ‘shape’, which give you exactly this. Both are used in the same way (note there are no brackets, as these are attributes rather than methods) and will be demonstrated on the same PandaData data frame I have been using above:

PandaData.size

OR

PandaData.shape

The outputs of the above give us ‘12’ for size and ‘(4, 3)’ for shape. This shows the distinction between what the two do. ‘Size’ will give you the total number of values within the data frame (rows * columns), while ‘shape’ will give the dimensions of the data frame as (number of rows, number of columns). A further extension of shape, that I personally find useful (for example in declaring the limits of a range for a ‘for’ loop), is indexing its output:

PandaData.shape[0]

This would output the number of rows in your data frame, while using a 1 instead of a 0 will give the number of columns; anything more than that will cause an IndexError, as the shape tuple only holds two values.
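As a small sketch of the ‘for’ loop use case I mentioned:

# Loop over every row index in the data frame
for i in range(PandaData.shape[0]):
    print(PandaData.iloc[i, 0])   # print the first column of each row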

Changing data types

It is sometimes useful to change the data types of either all the data in your data frame or just certain columns. An example of a situation requiring this occurred to me recently, when I needed to make a unique data key out of two columns containing year and week number. These two columns were integers, and so could not be concatenated as required. In these instances you can make use of the .astype() method:

PandaData = PandaData.astype({'Year': 'str', 'WeekNumber': 'str'})

Once again the input takes the form of a dictionary, this time mapping each column to the data type you want. Note that astype() returns a new data frame rather than changing the original in place, so you need to save the output to a variable; most commonly you would save it back to itself, like I have above.
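To finish off my year/week-number example (the column names are just those from the snippet above), once both columns are strings they can be concatenated directly:

# Build a unique key by joining the two (now string) columns
PandaData['DataKey'] = PandaData['Year'] + PandaData['WeekNumber']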

For reference, this website shows the different datatypes available quite well: https://bit.ly/3GoELAo

As I stated at the start of this section, you can also apply the function to the whole data frame, like so:

PandaData = PandaData.astype('str')

Summary

I hope that I have shown you why Pandas is a very useful library and one you should be using. Hopefully you find the content of this blog post useful, whether that be in starting out with Pandas, or fixing a problem as a regular user. If you have any questions for me, then please put them in the comments section. My next blog post is set to come out March 17th 2022, so if you have any ideas of what you’d like to see in regard to Python and Data Science knowledge then please put them in the comments as well.
