
Creating a quadratic solver in Python

In this blog post, I will be showing you how to create a python function that will solve any quadratic equation.

Firstly, what is a quadratic equation?

A quadratic is an algebraic equation that can be arranged as ax² + bx + c = 0, where a, b and c are known and a ≠ 0. If a = 0 then the equation becomes linear, as there is no x² term.

Secondly, how do we solve this equation? There are different ways to solve quadratic equations, including:

  • By inspection
  • Completing the square
  • Quadratic formula

We will be using the quadratic formula, which is:

x = (−b ± √(b² − 4ac)) / 2a

where a, b and c are the known coefficients when the equation is arranged into ax² + bx + c = 0.

We use the quadratic formula because it works in every situation, unlike the other methods, which only apply in certain cases.

The first part of the code is to define a function that will accept 3 parameters: a, b & c

To use the quadratic formula, we will use the square root function from the imported math module.

Before plugging the parameters into the formula, we can work out how many real roots we will get by using the discriminant. The discriminant is b² − 4ac.

If b² − 4ac > 0 then there are 2 real roots.

If b² − 4ac = 0 then there is 1 real root.

If b² − 4ac < 0 then there are no real roots. (There are complex roots using imaginary numbers, but we won’t go into that here, maybe in a part 2.)

Let’s add calculating the discriminant, and an if statement depending on its result, to the function.
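The original post showed this step as a screenshot; a minimal sketch of what the function might look like at this point (the name solve_quadratic is my own choice):

import math   # needed for the square root in the next step

def solve_quadratic(a, b, c):
    discriminant = b**2 - 4*a*c
    if discriminant > 0:
        print('There are 2 real roots')
    elif discriminant == 0:
        print('There is 1 real root')
    else:
        print('There are no real roots')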

Next, let’s plug a, b & c into the quadratic formula.

When the discriminant = 0 we only need to find the value of root1, as it’s equal to root2. Knowing the discriminant = 0, we could simplify the formula to x = −b / 2a.

In the code I have chosen to leave the full formula in.
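The code was again a screenshot in the original; putting the steps together, a sketch of the full function might look like this:

import math

def solve_quadratic(a, b, c):
    discriminant = b**2 - 4*a*c
    if discriminant > 0:
        root1 = (-b + math.sqrt(discriminant)) / (2*a)
        root2 = (-b - math.sqrt(discriminant)) / (2*a)
        print('2 real roots:', root1, 'and', root2)
    elif discriminant == 0:
        root1 = (-b + math.sqrt(discriminant)) / (2*a)   # full formula left in; equal to root2
        print('1 real root:', root1)
    else:
        print('No real roots')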

That’s our code complete. Finally, let’s test the function with the 3 different discriminant scenarios.
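For example (my own test coefficients, one for each scenario):

solve_quadratic(1, -3, 2)   # discriminant > 0: roots 2.0 and 1.0
solve_quadratic(1, -2, 1)   # discriminant = 0: root 1.0
solve_quadratic(1, 0, 1)    # discriminant < 0: no real roots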

The function works as intended; the code is attached as a txt file.

Pandas: Why Use It And How To Do So!

Introduction

Hi and welcome to what will be my first frog blog! I’m Lewis Prince, a new addition to the Purple Frog team who has come on board as a Machine Learning Developer. My skill set resides mainly in Data Science and Statistics, and using Python and R to apply these. Therefore my blogs will be primarily on hints and tips on performing Data Science and Statistics through the medium of Python (and possibly R).

I thought I would start my blog with what I feel is a very important library for anyone who wants to perform data manipulation and/or analysis with large datasets in Python: Pandas! In this blog post I will firstly go over what Pandas is and why you should be using it, as well as introducing and showing you how to use some of the most useful commands and features of the library.

For reference, I will show you how to create a pandas data frame, add data to it (append), index it, get unique values, drop duplicate rows and values, filter the data frame, ascertain its shape and size, and finally how to change data types of and within your data frame.

What is Pandas and why should you use it?

By definition, Pandas is an open source library for Python which can be used for data analysis, cleaning, exploring and manipulation, especially when manipulating numerical tables and time series. Its main power comes from the fact that it can perform all the ‘major steps’ needed to process and analyse data:

  • Load
  • Manipulate
  • Prepare
  • Model
  • Analyse

It does this by supporting the import of various file types such as CSV, XLSX, XML, JSON and ZIP, to name a few, as well as allowing you to perform data manipulations such as group by, join, merge and melt, and data cleaning operations such as replacements or imputation. This combination is where the power of Pandas lies: the ease with which you can perform these operations on a Pandas data frame, compared to performing them in ‘base’ Python. I hope to show this in the later sections of this blog post, where I go through the functionality and commands of Pandas which I believe are worth knowing.

The cherry on top of the cake that is Pandas is the streamlined data representation its tabular output allows, which enables users to examine and interpret their data more effectively. An example can be seen below, using a csv file containing a random dataset on children’s behaviour and whether they got on Santa’s good list:

[Screenshots: the raw output of reading the csv without Pandas, and the same data shown as a Pandas data frame.]

The code below, which outputted the tables above, shows how much easier it is to import and read files programmatically in Pandas (non-Pandas code first, then Pandas code second):

import csv

with open('Naughty list.csv', 'r') as file:
     reader = csv.reader(file)
     for row in reader:
          print(row)          # prints one list per row

import pandas as pd

df = pd.read_csv(r'Naughty list.csv')
print(df)                     # prints a formatted table

To summarize this section, you should be using pandas as it allows you to do a lot more with less code, and does so with an output which is easy to read and would be familiar to people from many backgrounds.

Important commands and functionality

Now that I’ve explained why you should be using Pandas, I’ll go into what I think are the fundamentals of using it. It should be noted that in all code examples I presume you have already imported Pandas as pd, like below, in whatever Python environment you are using:

import pandas as pd

How to set up a Pandas data frame (and then append)

There are three ways to do this, depending on whether you are importing the data from an external source, using a data source already in your environment, or adding to the data frame through a process (a for loop, for example). The first of these methods can be completed in one line (changing the function and file name for the file and file type you are using):

df = pd.read_csv(r'whatever_your_file_is.csv')

This is then usable and readable using such functions as print() and .head().

If you already have pre-existing data that you wish to convert to a Pandas data frame then there is slightly more manipulation needed, as well as the need to create a Python dictionary (if you’re not familiar with these then check out this quick guide https://www.w3schools.com/python/python_dictionaries.asp):

Data = {'Column1' : SourceData[:,0], 'Column2' : SourceData[:,1], 'Column3' : SourceData[:,2]}

PandaData = pd.DataFrame(Data)

In the above we set up a dictionary where we state the column names that we will be using in the data frame, and then allocate the relevant data from your pre-existing source to these column headers. This is then passed to the Pandas DataFrame command to create the data frame.

Finally, if you want to set up a blank pandas data frame you can do that using the example below:

PandaData = pd.DataFrame(columns = ['Column1', 'Column2', 'Column3', 'Column4'])

This command will set up an empty data frame with the column headings you specify, to which you can then append data row by row or in bulk. I will explain how to do this in the next section.

How to append to a Pandas data frame

So you’ve got your Pandas data frame set up, but you want to add data to it. The process is the same whether you want to add just one row or many (as long as all the columns you are adding have the same number of rows, and you have a value for each column). Once again we will use a dictionary to collate the data and then append it to the data frame like below:

New_Row = {'Column1':Data,'Column2':Data,'Column3':Data}

PandaData = PandaData.append(New_Row, ignore_index = True)

You should note the ‘ignore_index’ parameter being set to ‘True’ in the above code. This ensures that you have a continuous index in your data frame; if it were turned off, appended rows would carry over the index they had in their source location, leaving a non-continuous index. There may be some situations where you want that, though, for example if you wanted to keep a record of where in the source data your rows came from.
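One caveat worth flagging: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on newer versions the equivalent of the code above (a sketch, using the same hypothetical New_Row) is pd.concat:

New_Row = pd.DataFrame([{'Column1':Data,'Column2':Data,'Column3':Data}])

PandaData = pd.concat([PandaData, New_Row], ignore_index = True)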

How to index a Pandas data frame

You can index a Pandas data frame in three separate ways: the first two are very similar and bring back whole columns, while the third indexes individual cells and rows using the more recognizable bracket indexing. The first two methods use the column names you have set up within your Pandas data frame:

PandaData.Column1

PandaData['Column1']

The above would bring back all data within the specified column. The third option is the Pandas version of bracket indexing, which has to be performed using the .iloc command (integer-based location). It can be used as follows:

PandaData.iloc[0,1]

The above will return the value located in the first row of the second column of the data frame, which highlights two important points about how the function works. The first is that the first number in the square brackets denotes the row location and the second number the column location; the second is that index numbering starts from 0, so rows 1, 2 and 3 are denoted 0, 1 and 2. Further uses of the command can bring you all the data in a column or row:

PandaData.iloc[:,1] will bring everything in the second column

PandaData.iloc[0,:] will bring everything in the first row

You can also combine these methods to bring out certain elements of a column:

PandaData.Column1.iloc[1]

PandaData['Column1'].iloc[1]

Both of the above will bring you the 2nd element of ‘Column1’.

Getting unique variables and dropping duplicates

.unique() and .drop_duplicates() are two fairly useful functions for data cleaning and exploration. They do similar but distinctly different things, returning an array of unique values or a data frame of unique rows respectively. The .unique() function takes a one-dimensional set of values, be it a list, an index, categorical variables or a series, and outputs a numpy array of the same type (it will not sort the values though). For example:

pd.unique(list('asdasda'))

Would return:

array(['a', 's', 'd'])

This function should be used for getting unique values from the columns or rows of your Pandas data frames, but it can only be used on one-dimensional subsections of the data, not on a whole data frame.
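For instance, run against a single column of the PandaData frame defined in the next example, a call like the below would return the unique country names:

PandaData['Column1'].unique()

array(['England', 'Scotland', 'Ireland'], dtype=object)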

If you want to get all unique rows in a data frame then you can use .drop_duplicates(). This function takes an input of a data frame and outputs another data frame with all duplicate rows removed. For this example we will need to add some data into our data frame first:

PandaData = pd.DataFrame({
    'Column1': ['England','England','England', 'Scotland', 'Ireland'],
    'Column2': ['cup','cup', 'ball', 'square', 'bowl'],
    'Column3': [4, 4, 3.5, 15, 5]})

This will give the data frame as follows:

   Column1  Column2 Column3
0  England  cup     4.0
1  England  cup     4.0
2  England  ball    3.5
3  Scotland square  15.0
4  Ireland  bowl    5.0

You can then use the following code to remove duplicate rows:

PandaData.drop_duplicates()
   Column1  Column2 Column3
0  England  cup     4.0
2  England  ball    3.5
3  Scotland square  15.0
4  Ireland  bowl    5.0

This will give you a list of unique whole rows, so you may still have duplicates within columns.

You can apply this to certain columns within a data frame as well, using the subset parameter. The output will still be a data frame, but rows will be removed to leave only the first instance of each value within the designated column.

Duplicates in ‘Column1’ can then be dropped using the following code:

PandaData.drop_duplicates(subset=['Column1'])

Which gives the following table:

   Column1  Column2 Column3
0  England  cup     4.0
3  Scotland square  15.0
4  Ireland  bowl    5.0

Filtering a data frame

Filtering a dataset to get what you want is an essential skill in data exploration and analysis, and something which is simplified by the Pandas data frame. I will firstly show the simplest form of filtering, which uses your basic comparison operators (<, >, ==, != and so on). Let’s take the table we used above (without the duplicate row) and filter it to bring back just the rows which involve Scotland:

PandaData[PandaData.Column1 == 'Scotland']

Note the use of ‘==’ as opposed to ‘=’. For those familiar with Python, you will already know the reasoning for this: ‘==’ denotes a check of equality, whereas ‘=’ is used for assigning values to variables.

   Column1  Column2 Column3
3  Scotland square  15.0

You can add multiple conditions, applying to different columns or the same column, but each condition needs to be in round brackets with an ‘&’ symbol between each condition if using more than one:

PandaData[(PandaData.Column1 == 'England') & (PandaData.Column3 < 3.9)]

This will bring back the below:

   Column1  Column2 Column3
2  England  ball    3.5

If you are filtering on the same column, you can also consolidate your conditions into one condition via the isin() function, supplying a list of values to match, as seen below:

PandaData[PandaData.Column1.isin(['Scotland','Ireland'])]

This gives the below output:

   Column1  Column2 Column3
3  Scotland square  15.0
4  Ireland  bowl    5.0

Getting the size of a Pandas data frame

Sometimes it is useful to know the size of a data frame, whether for validation purposes or for analytic work where your data frame changes size dynamically through your scripts and its size needs to be known elsewhere. Pandas has two attributes, size and shape, which give you the ‘size’ and ‘shape’ of your data frame. Both are used in the same way (note they are attributes rather than methods, so there are no brackets), and will be used on the same PandaData data frame I have been using above:

PandaData.size

OR

PandaData.shape

The outputs of the above will give us ‘12’ for size and ‘(4, 3)’ for shape. This shows the distinction between what the two do: ‘size’ gives the total number of values within the data frame (rows * columns), while ‘shape’ gives the dimensions of the data frame (number of rows, number of columns). A further use of shape, that I personally find useful (for example in declaring the limits of a range for a ‘for’ loop), is indexing its output:

PandaData.shape[0]

This would output the number of rows in your data frame, while using a 1 instead of a 0 will give the number of columns; anything more than that will cause an out-of-range IndexError.

Changing data types

It is sometimes useful to change the data types of either all the data in your data frame or just certain columns. An example of a situation requiring this occurred to me recently, when I needed to make a unique data key out of two columns containing year and week number. These two columns were integers, and so could not be concatenated as required. In these instances you can make use of the .astype() command:

PandaData = PandaData.astype({'Year':'string', 'WeekNumber':'string'})

Once again the input takes the form of a dictionary, where you specify the data type you want for each column, and you also need to save the output to a variable; most commonly you would save it over itself like I have above.

For reference, this website shows the different datatypes available quite well: https://bit.ly/3GoELAo

As I stated in the first paragraph you can also apply the function to the whole data frame like so:

PandaData = PandaData.astype('string')

Summary

I hope that I have shown you why Pandas is a very useful library and one you should be using. Hopefully you find the content of this blog post useful, whether that be in starting out with Pandas, or fixing a problem as a regular user. If you have any questions for me, then please put them in the comments section. My next blog post is set to come out March 17th 2022, so if you have any ideas of what you’d like to see in regard to Python and Data Science knowledge then please put them in the comments as well.

How to create the Fibonacci Sequence in Python

This blog post will go through how to create the famous Fibonacci sequence in Python.

Each number in the Fibonacci sequence is the sum of the previous two terms.

The sequence starts: 0, 1, 1, 2, 3, 5, 8, 13, … and carries on infinitely.

The Python code I used is:
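(The original post showed the code as a screenshot; this is a sketch reconstructed from the description below.)

fib = [0, 1]                 # the first two terms
terms = 50                   # number of terms wanted in the sequence

for i in range(2, terms):
    fib.append(fib[i-1] + fib[i-2])    # each new term is the sum of the previous two

print(fib)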

The first two terms are entered into our list, then we specify the number of terms wanted in our sequence.

Next, we loop from 2 to the number of terms we want, appending the sum of the previous two terms each time.

Printing our list gives the first fifty terms of the Fibonacci sequence.

Join 2 Python lists together using nested loops

In this blog post I will show you how to join two 2D Python lists together.

The code is in the screenshot below.
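(The screenshot isn’t reproduced here; the following is a sketch consistent with the line-by-line description below, using my own placeholder keys, capitals and countries, with the referenced line numbers marked as comments.)

capitals = [[1, 'London'], [2, 'Paris'], [3, 'Berlin']]          # line 1
countries = [[1, 'England'], [2, 'France'], [3, 'Germany']]      # line 2
joined = []                                                      # line 3
for i in capitals:                                               # line 4
    for j in countries:                                          # line 5
        if i[0] == j[0]:                                         # line 6
            joined.append([i[0], i[1], j[1]])                    # lines 7-8

for row in joined:                                               # line 9
    print(row)                                                   # line 10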

Lines 1 – 2 are two lists that are going to be joined, line 3 is an empty list where the output will be appended to.

Lines 4 – 5 are two loops (one nested inside the other) which cycle through the records in both lists. Line 6 checks whether the first items (index 0) of the two records currently held in i and j match; if yes, the key, capital and country are appended to our new list.

Lines 9 – 10 print the output record by record, showing the join has worked successfully.

This code can be expanded to 3 lists; the code would need a 3rd for loop and an extra check in the if statement to find the correct record in the 3rd list to join.

Joining lists together in Python is useful when there is data in different lists and it would be beneficial if it were combined.

How to delay a Python loop

In this blog post, I will run through 3 different ways to execute a delayed Python loop. In these examples, we will aim to run the loop once every minute.
To show the delay, we will print out the current datetime using the datetime module.

1 – Sleep
The sleep function from Python’s time module pauses the Python execution by the number of seconds inputted. The example below pauses the script for 60 seconds.
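(The examples in this post were screenshots; a minimal sketch of this one:)

import time
from datetime import datetime

for i in range(10):
    print(datetime.now())    # show when this iteration ran
    time.sleep(60)           # pause for 60 seconds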

The above script has a 60 second delay between the end of one loop and the start of the next. However, what if we wanted to execute the next loop 60 seconds after the start of the previous loop?
In other words, how do we start the loop at the same time every minute?

2 – Calculate Sleep
To do this we calculate how long the loop should sleep for.
We will print the datetime every time the minute changes. At the start of the loop we look up the number of seconds that have passed so far this minute.

The number of seconds that have passed this minute is calculated from date_time[-2:] (the last two characters of the datetime string). Subtracting this from 60 gives the length of time in seconds for which the loop should sleep, so that it resumes when the next minute starts.

Once the loop has slept for the required number of seconds, we look up the datetime again and print it out.
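(Again a sketch of what this might look like, formatting the datetime as a string so that its last two characters are the seconds:)

import time
from datetime import datetime

for i in range(10):
    date_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    seconds_passed = int(date_time[-2:])     # seconds elapsed so far this minute
    time.sleep(60 - seconds_passed)          # sleep until the next minute starts
    print(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))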

3 – Task Scheduler
The previous two options are good for executing a loop a few times, ten in our case. If we wanted to execute a Python script continuously without expiring, we could use the above examples with an infinite loop.
However, if one iteration errors, the script will stop. Therefore, we want to execute the entire Python script once a minute using an external trigger. This is where we can use Task Scheduler.

Task Scheduler can execute a python script from source but it is often easier to use a batch file. The batch file includes the location of the python application (python.exe) and the location of the python script (.py). For more detail on using Task Scheduler and batch files to run Python scripts, please see the following datatofish post – https://datatofish.com/python-script-windows-scheduler

Our batch file is:
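(The batch file itself was shown as a screenshot; it would look something like this, with both paths being placeholders for wherever python.exe and the script live on your machine:)

@echo off
"C:\Python310\python.exe" "C:\Scripts\create_csv.py"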

To demonstrate Task Scheduler, I’m going to run the following Python code every minute.
This code uses Pandas to produce a blank CSV file, where the name of the CSV file is the datetime the script was run.
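(A sketch of that script; note the time uses hyphens rather than colons, because Windows file names cannot contain colons:)

import pandas as pd
from datetime import datetime

file_name = datetime.now().strftime('%Y-%m-%d %H-%M-%S') + '.csv'
df = pd.DataFrame()                  # an empty data frame
df.to_csv(file_name, index=False)    # write a blank CSV named after the run time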

The following screenshots show the triggers and actions used.

This produced the following CSV files; we can see that each file takes 1 – 4 seconds to create.

In summary, we have seen three different ways to delay a Python loop: two using loops inside Python and one using Task Scheduler. All can be used depending on what kind of delay is best.

Creating a Python function to calculate Pi

Pi is 3.14159 to 5 decimal places.
To work out Pi, we will be using Leibniz’s formula:
X = 4 – 4/3 + 4/5 – 4/7 + 4/9 – …
This series converges to Pi: the more terms that are added to the series, the closer the value is to Pi.
For the proof of why this series converges to Pi, see https://proofwiki.org/wiki/Leibniz’s_Formula_for_Pi
There are several points to note about the series:

  • It’s infinite; we need to find a way to continue adding term after term.
  • The denominator of the fraction increases by 2 every term.
  • The terms alternate between positive and negative.

Firstly, let’s create a function called pi.

To continue adding terms, let’s use a for loop. Every time the loop executes another term is added.

range(1, 11) will produce the numbers 1, 2, … 10, giving us ten loops.
However, before the loop starts, our variables’ initial values need to be set.

The pi series will start from 0.
n represents the numerator of our fractions which is the constant 4.
d represents the denominator of our fractions which starts as 1.

d needs to increase by 2 every loop; let’s use += (sum equals) to do this.
Pi will also use +=. a denotes our positive/negative multiplier, which we will get on to:

The only problem left is how to get a to alternate between 1 and -1.
This is where we introduce modulo.
Modulo outputs the remainder of a division and is denoted by %.
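A quick illustration (the original showed this as a screenshot):

for i in range(1, 5):
    print(i % 2)     # prints 1, 0, 1, 0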

This example shows that for the numbers 1 to 4, modulo 2 alternates between 1 and 0.
If we multiply by 2 and subtract 1, it will alternate between 1 and -1, which is what we require.

Putting everything together gives:
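(The finished function was a screenshot in the original; a sketch that follows the steps above, with the function name and parameter being my own choices:)

def pi(terms):
    pi_value = 0                 # the series starts from 0
    n = 4                        # constant numerator
    d = 1                        # denominator, increases by 2 each loop
    for i in range(1, terms + 1):
        a = (i % 2) * 2 - 1      # alternates 1, -1, 1, -1, ...
        pi_value += a * n / d
        d += 2
    return pi_value

print(pi(10))    # roughly 3.0418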

This isn’t close to 3.14159 at all.


However, we are only executing the loop 10 times, hence only 10 terms are being used to calculate Pi.

Increasing the number of loops to a million will change this:
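For example:

print(pi(1000000))    # roughly 3.1415916, matching Pi to five decimal places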

There it is! A function written from scratch to calculate Pi.
To get a value even closer to Pi just increase the number of loops.

Finally, if you do wish to use Pi in Python, the easiest way is to use the numpy library, which has a pi constant stored.
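For example:

import numpy as np

print(np.pi)     # 3.141592653589793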
