Hello and welcome to part 5 of the data analysis with Python and Pandas series. In this tutorial, we're going to build off the last tutorial and experiment with combining datasets to see if we can find more relationships.
We've been working with the Kaggle Minimum Wage by State dataset, and now I am curious to bring in the Unemployment by County dataset. It's sometimes argued that increasing the minimum wage makes employing people more costly and, as a result, employment may drop.
import pandas as pd
unemp_county = pd.read_csv("datasets/unemployment-by-county-us/output.csv")
unemp_county.head()
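For reference, this dataset has one row per county per month; the columns we'll lean on below are Year, Month, State, County, and Rate (the unemployment rate).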
Now, we'd want to map the minimum wage by state to this. In many states, there are varying minimum wages by city. We'll just have to accept that this isn't going to be perfect. Let's now load in our minimum wage data:
df = pd.read_csv("datasets/minwage.csv")
act_min_wage = pd.DataFrame()
for name, group in df.groupby("State"):
    if act_min_wage.empty:
        # First state: start the frame, with Year as the index and one column per state
        act_min_wage = group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name})
    else:
        # Every other state: join on the shared Year index
        act_min_wage = act_min_wage.join(group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name}))
act_min_wage.head()
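As an aside, the whole loop above can usually be collapsed into a single pivot call. A hedged sketch (act_min_wage_alt is just an illustrative name), assuming each (Year, State) pair occurs exactly once in the raw data:
# One-line reshape from long to wide: Year index, one column per state.
# Assumes no duplicate (Year, State) rows, or pivot will raise a ValueError.
act_min_wage_alt = df.pivot(index="Year", columns="State", values="Low.2018")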
Back to our act_min_wage.head() output: oh right, missing data. Let's throw it out of our analysis:
import numpy as np
act_min_wage = act_min_wage.replace(0, np.nan).dropna(axis=1)  # turn 0s into NaN, then drop any column containing NaN
act_min_wage.head()
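To make that one-liner concrete: replace(0, np.nan) turns the zeros into NaN, and dropna(axis=1) then drops any column containing a NaN (axis=0 would drop rows instead). A tiny toy sketch with made-up data:
import numpy as np
import pandas as pd

# The 0 in column A becomes NaN, so dropna(axis=1) throws out all of A.
toy = pd.DataFrame({"A": [5.0, 0.0], "B": [7.0, 8.0]})
print(toy.replace(0, np.nan).dropna(axis=1))  # only column B survives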
So now...what do we do? This seems like it might be a bit more complex than the other things we've done.
Well, our very end goal is to see if there's any relationship between unemployment and the minimum wage. To do this, we'd like to be able to just call .corr() or .cov() on some dataframe. Seems to me like it'd be most convenient to do this on our unemp_county dataframe. We'd really just like to add a new column, called min_wage, then populate that column based on that ROW's state!
Logically, we're going to need to go row by row, check the state, and set the minimum wage column to that state's value in the other dataframe. It's often going to be the case that you have some custom task like this. Here we're mapping values from one dataframe to another, but another time the source might not be a dataframe at all: it could be a sensor reading, some custom user input, or something that requires further calculation. Who knows, but Pandas is extremely flexible, and we can map functions to columns based on row values. Let's see how!
First, let's just create a function that would handle this:
def get_min_wage(year, state):
    try:
        # Look up that state's 2018-dollar minimum wage for the given year
        return act_min_wage.loc[year][state]
    except KeyError:
        # Year or state missing from our minimum wage table
        return np.nan
Then, for example, we could do:
get_min_wage(2012, "Colorado")
Now, we map!
%%time
# The %%time cell magic reports how long this cell takes to run.
unemp_county['min_wage'] = list(map(get_min_wage, unemp_county['Year'], unemp_county['State']))
We can use this method to map just about any function, with as many parameters as we want, to a column. This method will basically always work, but it won't necessarily be the most efficient. Often, we can use .map() or .apply() on a column instead, or some other built-in methods, but the above is always an option.
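For example, here's a hedged row-wise sketch with .apply (the min_wage_alt column name is just for illustration); it expresses the same lookup, though in my experience it's rarely faster than the map() call above:
# Same lookup as above, written as a per-row lambda with axis=1.
unemp_county['min_wage_alt'] = unemp_county.apply(
    lambda row: get_min_wage(row['Year'], row['State']), axis=1)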
unemp_county.head()
unemp_county.tail()
unemp_county[['Rate','min_wage']].corr()
unemp_county[['Rate','min_wage']].cov()
Interesting. It looks like there's a slightly positive correlation between the unemployment rate and the minimum wage, but also a pretty strong covariance, signaling that these two things do tend to vary together. So while they definitely vary together, the actual impact of one on the other isn't very substantial. Plus, we'd have to ask which comes first: the increased unemployment, or the minimum wage increases?
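If the distinction between the two numbers feels hazy, recall that correlation is just covariance normalized by the two standard deviations: corr(x, y) = cov(x, y) / (std(x) * std(y)). A quick sanity check, dropping NaNs first so the pairwise calculations line up:
# Manually recompute the correlation from the covariance and the two stds.
sub = unemp_county[['Rate', 'min_wage']].dropna()
cov_xy = sub.cov().loc['Rate', 'min_wage']
print(cov_xy / (sub['Rate'].std() * sub['min_wage'].std()))  # should match .corr() above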
Finally, I'd like to look at election data by county and see if there's a relationship between voting, minimum wage, and unemployment. To do this, I will pull from 2016 US presidential vote by county.
pres16 = pd.read_csv("datasets/pres16results.csv")
pres16.head(15)
This data starts with the entire US aggregate data, but then breaks down by state and county, as well as candidate. Let's include the top 10 candidates. To grab their names:
top_candidates = pres16.head(10)['cand'].values
print(top_candidates)
# The commented-out version raises a ValueError: Python's `and` can't evaluate a whole Series.
# Use the element-wise & operator, with parentheses, since & binds tighter than ==.
#county_2015 = unemp_county[ (unemp_county['Year']==2015 and unemp_county["Month"]=="February") ]
county_2015 = unemp_county[(unemp_county['Year'] == 2015) & (unemp_county["Month"] == "February")].copy()  # .copy() so later column assignments don't hit a SettingWithCopyWarning
county_2015.head()
Now, for county_2015, we'd like to convert the State to the all-caps abbreviation that our pres16 dataframe is using. We can do that using the abbreviations we used before:
state_abbv = pd.read_csv("datasets/state_abbv.csv", index_col=0)
state_abbv.head()
state_abbv_dict = state_abbv.to_dict()['Postal Code']  # maps full state name to postal code, e.g. "Alabama" -> "AL"
county_2015['State'] = county_2015['State'].map(state_abbv_dict)
county_2015.tail()
Well that was magical, wasn't it?!
In the case of single-parameter functions, we can just use .map(). Or, as you just saw here, if you want to map a key to a value using a dict, you can do the same thing and just map the dictionary. Cool, huh?
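To make the single-parameter case concrete, here's a minimal toy sketch of both forms (the two-element Series is made up). One caveat worth knowing: with a dict, any value that isn't a key silently becomes NaN, so a quick null count is a cheap sanity check after a mapping like ours:
s = pd.Series(["Colorado", "Texas"])
print(s.map(str.upper))        # .map with a one-argument function
print(s.map({"Texas": "TX"}))  # .map with a dict; "Colorado" isn't a key, so it becomes NaN

# The same caveat applies to our mapping above:
print(county_2015['State'].isnull().sum())  # how many states failed to match?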
Now let's map the county's candidate percentages to this. To do this, we have quite a few columns, and really, everything just needs to match up by county and state.
print(len(county_2015))
print(len(pres16))
Since pres16 is longer, we'll map it to county_2015 where there are matches. Instead of a map, however, we'll combine the two with a join. To do this, let's index both dataframes. They are identified by state AND county, so we'll name those columns the same in both, and then set them as the index.
pres16.rename(columns={"county": "County", "st": "State"}, inplace=True)
pres16.head()
for df in [county_2015, pres16]:
    df.set_index(["County", "State"], inplace=True)  # inplace, so mutating the loop variable sticks
pres16 = pres16[pres16['cand'] == "Donald Trump"]  # keep just one candidate's rows
pres16 = pres16[['pct']]  # and just the vote-percentage column
pres16.dropna(inplace=True)
pres16.head(2)
county_2015.head(2)
all_together = county_2015.merge(pres16, on=["County", "State"])
all_together.dropna(inplace=True)
all_together.drop("Year", axis=1, inplace=True)
all_together.head()
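As an aside, since both frames now share the same (County, State) MultiIndex, an index-based join would have combined them just as well. A hedged sketch (all_together_alt is an illustrative name), using an inner join to mirror what the merge produced:
# join aligns on the shared MultiIndex directly; inner keeps only matches.
all_together_alt = county_2015.join(pres16, how="inner")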
all_together.corr()
all_together.cov()
Curiously, min_wage appears to have a negative correlation with the pct vote for Trump; as the minimum wage rises, people are less likely to have voted for Trump, for example. That makes sense, since higher minimum wages are typically a Democratic agenda, though it is fairly curious that, while correlated, the covariance suggests they don't actually vary together much. Interesting.
Alright, I think that, at this point, we've covered quite a bit about Pandas. There will always be more to show, but, for that, I would suggest you just read through the docs and try things yourself.
The next thing I'd like to show is using Pandas to pre-process data for machine learning, which is what we'll be doing in the next tutorial!