This week we are going to do a short exercise on working with different kinds of data. The datasets we've used in our in-class exercises have mostly been pretty clean; we're going to use multiple representations of the same underlying data to see the different ways data can be represented while still conforming to a common file format.
The data this week is drawn from a well-known paper on the effect of shark attacks on the 1912 presidential election. The result---that Woodrow Wilson may have lost his home state of New Jersey because he was blamed for shark attacks that happened while he was governor of the state---has been controversial.
import csv
import json
from pprint import pprint as pretty_print
(1.1) We're going to be working with files a lot today. For this, we'll want a convenient way to peek at the contents of a file. Write a head()
function that takes a filename and prints the first $n$ lines of the file.
def head(fname, n=5):
    """Print the first n lines of a file."""
    with open(fname, "r") as fin:
        for line in fin.readlines()[:n]:
            print(line.strip())
A quick digression on context managers. In previous weeks, we opened a file, did something with the contents of the file, and then had to remember to close the file. We can simplify this by using a context manager and the with
keyword.
with open(filename, "r") as my_file:
    do_stuff_with_my_file()
Here we open a file (called filename
) and assign it to a variable called my_file
. This is equivalent to writing:
my_file = open(filename, "r")
The with
keyword defines a new indented block: the context in which we are using the file. When we exit this block, either by returning to the previous level of indentation or by encountering an error, the context manager calls my_file.close()
for us.
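To see what the context manager saves us from writing, here is a minimal sketch of the equivalent manual open/close pattern. (It uses a throwaway temp file so it runs anywhere; the filename and contents are made up for the example, and the real protocol uses __enter__/__exit__ under the hood.)

```python
import tempfile

# Write a throwaway file so this sketch is self-contained (made-up contents)
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("hello\n")
tmp.close()

# Roughly what `with open(...) as my_file:` does for us:
my_file = open(tmp.name, "r")
try:
    contents = my_file.read()
finally:
    my_file.close()  # runs whether or not the body raised an error

print(my_file.closed)  # True
```

The try/finally guarantees the close happens even on error, which is exactly the guarantee the with block gives us with less typing.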
(1.2) Let's begin by peeking at shark.csv
. This is a pretty generic CSV.
head("bad_sharks/shark.csv")
Parsing nightmares. You might notice that a naive .strip().split()
approach isn't always going to work here. Occasionally, there will be characters in the file that present a problem for parsing. For example, what if you have a CSV, but one of the values contains a comma? That would be troublesome. The common workaround is to wrap that value in quotation marks. (If that field also tends to use quotation marks, then use single quotes. If it uses both, then, well, things get messy.) Thankfully, parsing CSV files is something people have been doing with computers since the very beginning, and there is a package for that. You don't need to reinvent the wheel.
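To see the quoting workaround in action, here is a tiny in-memory example (the town and count are made-up values, not from the shark data):

```python
import csv
import io

# A tiny in-memory CSV where one value contains a comma.
# The double quotes tell the parser to keep "Asbury Park, NJ" as one field.
raw = 'town,attacks\n"Asbury Park, NJ",1\n'
rows = list(csv.DictReader(io.StringIO(raw)))
print(rows[0]["town"])  # Asbury Park, NJ
```

A naive split on ',' would have cut that value in two; the csv module respects the quotes.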
(1.3) Let's open this file into a list of dictionaries using csv.DictReader
.
# Read in the .csv file (default)
with open("bad_sharks/shark.csv", "r") as fin:
    read = csv.DictReader(fin)
    df = [line for line in read]

pretty_print(df[0])
Why did this work? This file conforms to the default csv format. More explicitly:
# Read in the .csv file with the default parameters made explicit
with open("bad_sharks/shark.csv", "r") as fin:
    read = csv.DictReader(fin, quotechar='"', fieldnames=None, delimiter=',')
    df = [line for line in read]

pretty_print(df[0])
(2.0) Now we're going to load non-default files!
(2.1) A common variant on comma-separated value files is the tab-separated value file (TSV). While spaces are ' '
, tabs are '\t'
.
head("bad_sharks/shark.tsv")
# Read in the file using the default parameters. Describe what goes wrong in a comment.
# Read in the file correctly by specifying the delimiter correctly.
with open("bad_sharks/shark.tsv", "r") as fin:
    read = csv.DictReader(fin, quotechar='"', fieldnames=None, delimiter='\t')
    df = [line for line in read]

pretty_print(df[0])
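To make the delimiter's role concrete, here is an in-memory sketch (the county and vote-share values are made up) showing what the default comma delimiter does to tab-separated data:

```python
import csv
import io

# Two tab-separated columns with made-up values
tsv_data = "county\twilson1912\nATLANTIC\t0.36\n"

# Default delimiter (','): there are no commas, so each whole line
# survives as a single field -- header and value alike.
wrong = list(csv.DictReader(io.StringIO(tsv_data)))
print(wrong[0])  # {'county\twilson1912': 'ATLANTIC\t0.36'}

# delimiter='\t' splits on tabs and recovers the two columns.
right = list(csv.DictReader(io.StringIO(tsv_data), delimiter='\t'))
print(right[0])  # {'county': 'ATLANTIC', 'wilson1912': '0.36'}
```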
(2.2) Going back to a .csv, what if the field was wrapped in single quotes instead?
head("bad_sharks/shark_nj_single_quote.csv")
# Read in the file using the default parameters. Describe what goes wrong in a comment.
with open("bad_sharks/shark_nj_single_quote.csv", "r") as fin:
    read = csv.DictReader(fin, quotechar='"', fieldnames=None, delimiter=',')
    df = [line for line in read]

# With the default quotechar='"', the single quotes are treated as ordinary
# characters, so the comma inside the single-quoted field splits it in two.
pretty_print(df[0])
# Read in the file correctly by specifying the quotechar correctly.
with open("bad_sharks/shark_nj_single_quote.csv", "r") as fin:
    read = csv.DictReader(fin, quotechar="'", fieldnames=None, delimiter=',')
    df = [line for line in read]

pretty_print(df[0])
(2.3) What if the commas are there, but the field isn't quoted at all?
head("bad_sharks/shark_nj_unquoted.csv")
# Read in the file using the default parameters. Describe what goes wrong in a comment.
NOTE: Please don't spend time trying to implement a workaround to read it correctly. If you run into problems like this, get better data or go complain to your local data provider. (I've provided a solution at the bottom of the file.)
(2.4) It's not uncommon for a CSV to lack a header row.
head("bad_sharks/shark_no_header.csv")
# Read in the file using the default parameters. Describe what goes wrong in a comment.
# Read in the file correctly by specifying the fieldnames parameter.
sharks_header = ['county', 'wilson1912', 'wilson1916', 'beach', 'machine', 'mayhew', 'attack', 'coastal']
with open("bad_sharks/shark_no_header.csv", "r") as fin:
    read = csv.DictReader(fin, quotechar='"', fieldnames=sharks_header, delimiter=',')
    df = [line for line in read]

pretty_print(df[0])
(3.0) Another common format is JSON, as we saw a few weeks ago. When dealing with multiple "rows" of data in JSON, the data is often structured in one of two ways. The first is a single JSON array containing every row:
[{row 1}, {row 2}, {row 3}...]
Note the list-like brackets. There may be newlines separating the individual dictionaries, but the key here is the enclosing [
and ]
. The second puts one dictionary on each line (often called "JSON Lines") and would look something like:
{row 1}
{row 2}
{row 3}
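Both shapes can be produced and parsed with the built-in json module. A quick round-trip sketch, using two made-up rows standing in for the shark data:

```python
import json

# Two made-up rows standing in for the shark data
rows = [{"county": "ATLANTIC", "attack": 0}, {"county": "MONMOUTH", "attack": 1}]

# Shape 1: one JSON array holding every row; json.loads parses it in one call
as_array = json.dumps(rows)
print(json.loads(as_array) == rows)  # True

# Shape 2: one JSON object per line ("JSON Lines"); parse each line separately
as_lines = "\n".join(json.dumps(r) for r in rows)
parsed = [json.loads(line) for line in as_lines.splitlines()]
print(parsed == rows)  # True
```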
(3.1) The first of these cases is what the json
library is designed for.
head("bad_sharks/shark.json", n=11)
You can use the following syntax inside of a with
statement to load the file:
file_content = file_object.read()
df = json.loads(file_content)
# Read in the fully formatted .json file
with open("bad_sharks/shark.json", "r") as fin:
    df = json.load(fin)

pretty_print(df[0])
(3.2) The second case is more common, but a bit more complicated.
head("bad_sharks/many_sharks.json")
You will need to manually loop through the file and call json.loads()
on each line.
# Read in the line-by-line formatted .json file
with open("bad_sharks/many_sharks.json", "r") as fin:
    df = []
    for line in fin.readlines():
        row = json.loads(line)
        df.append(row)

pretty_print(df[0])
(4.0) Got extra time? Feel free to work on loading the data for your own project!
Hopefully you will not have to deal with data where fields should be quoted but are not. But if you do, you can perform a sort of surgery on the file, line by line, to get things properly quoted.
with open("bad_sharks/shark_nj_unquoted.csv", "r") as fin:
    ll = []
    # keep the header in its current form
    ll.append(fin.readline().strip())
    # for each line...
    for line in fin.readlines():
        # split on the delimiter
        line = line.strip().split(',')
        # merge the values that are being inappropriately split and quote-escape them
        line[0] = '"{},{}"'.format(line[0], line[1])
        # then remove the now-redundant value
        del line[1]
        # then put it all back together again
        line = ','.join(line)
        ll.append(line)

# now we have a list of strings that can be fed into DictReader just like the original file
read = csv.DictReader(ll, quotechar='"', fieldnames=None, delimiter=',')
df = [line for line in read]
pretty_print(df[0])
Don't worry about this for now. Soon we'll learn about Pandas, and you might be interested in converting all this into a Pandas workflow.
import pandas as pd
All the code you've used so far will work fine with Pandas, because you can turn a list of dictionaries into a Pandas DataFrame really easily.
with open("bad_sharks/shark.csv", "r") as fin:
    read = csv.DictReader(fin)
    list_of_dicts = [line for line in read]

df = pd.DataFrame(list_of_dicts)
df
But if you want to use Pandas's read_csv
or read_json
functions directly, the equivalent function calls are as follows:
df = pd.read_csv("bad_sharks/shark.csv")
df = pd.read_csv("bad_sharks/shark.tsv", sep='\t')
df = pd.read_csv("bad_sharks/shark_nj_single_quote.csv", quotechar="'")
df = pd.read_csv("bad_sharks/shark_no_header.csv", header=None, names=['county', 'wilson1912', 'wilson1916', 'beach', 'machine', 'mayhew', 'attack', 'coastal'])
df = pd.read_json("bad_sharks/shark.json")
df = pd.read_json("bad_sharks/many_sharks.json", lines=True)
Note that pd.read_csv()
is going to have the same problem with improperly unquoted fields as csv.DictReader()
.
df = pd.read_csv("bad_sharks/shark_nj_unquoted.csv")
df