Hello and welcome to another data analysis with Python and Pandas tutorial. In this tutorial, we're going to change up the dataset and play with minimum wage data now.
You can find this dataset here: Kaggle Minimum Wage by State. This dataset runs from 1968 to 2017, giving the minimum wage (the lowest hourly amount that employers can legally pay workers), by state.
Description of the data:
Year: Year of data
State: State/Territory of data
Table_Data: The scraped, unclean data from the US Department of Labor.
Footnote: The footnote associated with Table_Data, provided by the US Department of Labor.
High.Value: Some entries in Table_Data contained two values (usually associated with footnotes); this is the higher of the two. It can be useful for viewing the proposed minimum wage, because in most cases the higher value is what all persons protected under minimum wage laws eventually had their minimum wage set to.
Low.Value: The same as High.Value, but the lower of the two values. This can be useful for viewing the effective minimum wage in the year it was set, since people protected under such laws earned that value during that year (although, in most cases, they had a higher minimum wage in later years).
CPI.Average: This is the average Consumer Price Index associated with that year. It was used to calculate 2018-equivalent values.
High.2018: This is the 2018-equivalent dollars for High.Value.
Low.2018: This is the 2018-equivalent dollars for Low.Value.
Once you have downloaded the data, let's begin working with it.
import pandas as pd

# Reading with the default utf-8 codec fails with:
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 0: invalid start byte
df = pd.read_csv("datasets/Minimum Wage Data.csv", encoding="latin")
Right away, we've got some encoding issues. It looks like whoever saved this file used a non-standard encoding. Because the data was grabbed from the internet, it would have made more sense to leave it in UTF-8, but, for whatever reason, that wasn't the case, and I initially hit an encoding error on loading it. I tried Latin-1 encoding next, and boom, there we go. Now let's go ahead and save our own version, with UTF-8 encoding!
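If you'd rather not guess encodings interactively, here's a minimal sketch (my own aside, not from the original tutorial) that tries UTF-8 first and falls back to Latin-1:

# A sketch: attempt the sane default first, fall back to Latin-1 on failure
try:
    df = pd.read_csv("datasets/Minimum Wage Data.csv", encoding="utf-8")
except UnicodeDecodeError:
    df = pd.read_csv("datasets/Minimum Wage Data.csv", encoding="latin")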
df.to_csv("datasets/minwage.csv", encoding="utf-8")
df = pd.read_csv("datasets/minwage.csv")
df.head()
Let's check out some new functionality with Pandas: groupby. We can automatically create groups by unique column values. Sound familiar? It's exactly what we did before, just with Pandas instead of our own Python logic. That's one thing I really enjoy about Pandas: it's very easy to mix your own logic with Pandas' built-in logic.
gb = df.groupby("State")
gb.get_group("Alabama").set_index("Year").head()
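As a quick aside (not in the original), the groupby object can also tell you how many groups it found and what the group keys are:

print(gb.ngroups)           # number of groups (unique states/territories)
print(list(gb.groups)[:5])  # peek at the first five group keys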
Aside from getting groups, we can also just iterate over the groups:
act_min_wage = pd.DataFrame()

# Build a wide table: one column per state, indexed by Year
for name, group in df.groupby("State"):
    if act_min_wage.empty:
        act_min_wage = group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name})
    else:
        act_min_wage = act_min_wage.join(group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name}))

act_min_wage.head()
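As an aside, the same wide table can be built in one line with pivot, assuming each Year/State pair appears exactly once (as it does in this dataset); act_min_wage_alt is just my own name for it:

# One-line alternative (a sketch, assuming one row per Year/State pair)
act_min_wage_alt = df.pivot(index="Year", columns="State", values="Low.2018")
act_min_wage_alt.head()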
Sometimes it is interesting to just see various stats on your data. One thing you can do very quickly is run .describe() on your data to get a summary of various statistics right away:
act_min_wage.describe()
Another thing we can do is run .corr() or .cov() to get the correlation or covariance, respectively:
act_min_wage.corr().head()
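Covariance works the same way, if you want it (my own addition, not shown in the original flow):

act_min_wage.cov().head()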
For some reason, we can see that Alabama and Tennessee, at least, are returning NaNs. Looking back at the .describe() output above, or if we just printed the head, we'd see that Alabama, for example, reports all 0s. What's up there?

We could just move on, or we could inspect what's going on here. Let's briefly inspect, shall we? To begin, we'll start with our "base" dataset, which is currently under the variable name df.
df.head()
issue_df = df[df['Low.2018'] == 0]  # rows where the 2018-adjusted minimum wage is zero
issue_df.head()
Okay, how do we get them all? Well, we could just grab the unique values from the State column:
issue_df['State'].unique()
Let's confirm that these are all actually problematic for us. First, let's remove the ones that we know are problematic from our correlation table:
import numpy as np

# axis=1 operates on columns; axis=0 (the default) operates on rows
act_min_wage.replace(0, np.nan).dropna(axis=1).corr().head()
Looks good, let's save as a var:
min_wage_corr = act_min_wage.replace(0, np.nan).dropna(axis=1).corr()
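As a quick sanity check (my own addition), we can confirm that no NaNs survived the drop:

# Expect False: nothing left in the correlation table should be NaN
print(min_wage_corr.isnull().values.any())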
Now let's see if any of the identified problems exist after we've dropped:
for problem in issue_df['State'].unique():
    if problem in min_wage_corr.columns:
        print("Missing something here....")
No output from the loop (and an empty set from the aside), so there's our answer: these states are all genuinely problematic, and all of them got dropped. Can we recover from this? Let's see!
grouped_issues = issue_df.groupby("State")
grouped_issues.get_group("Alabama").head(3)
Right away, we can see we're missing the Footnote, High.Value, Low.Value, High.2018, and Low.2018 values entirely. Recall that Table_Data was the "raw" data that was scraped. Here, we're getting ellipses for whatever reason. Probably the scraper that grabbed this data needed to interact better with the web page. Unfortunately, this is the data we have. A final check I might do is to see if literally all of the values are zero. There are a billion ways we could do this, but let's just check the sum for Low.2018:
grouped_issues.get_group("Alabama")['Low.2018'].sum()
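If you want to check all of the value columns at once rather than just Low.2018, here's a quick sketch (column names taken from the dataset description above):

# Sum every value column for Alabama; all-zero sums mean no usable data
value_cols = ['High.Value', 'Low.Value', 'High.2018', 'Low.2018']
print(grouped_issues.get_group("Alabama")[value_cols].sum())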
Looks like we just never get any value for Alabama. Let's see if this is true for all of the states in our issues group.
for state, data in grouped_issues:
    if data['Low.2018'].sum() != 0.0:
        print("Some data found for", state)
Looks like we won't be recovering from this without bringing in another dataset, or maybe scraping better. Hey, I thought it might be simple enough to fill in this missing data by scraping it ourselves, and that it might be useful for the tutorial. Let's see. This dataset was scraped from the Department of Labor...but, upon checking, nope. Those ... are just plain there in the source. I don't see how we're going to overcome that! The show will have to go on without those states! At least we were able to find out why, by using Pandas.
In the next tutorial, we'll get into some visualization and dig deeper into Pandas.