DendroNN, or the Resurrection of the Tree Classification Project

TreeID Dataset

Collecting and processing the tree data was a big learning experience. That’s the encouraging, optimistic way of saying that I made a ton of mistakes that I will never, ever repeat because I cringe at how obvious they feel in retrospect. This is still under active development and has attracted some outside interest, so you can follow along at the GitHub repo. So, what are we looking at here?

How was the data collected?

The beginning of this project was a collection of nearly 5,000 photos taken around my own neighborhood in Cleveland at the end of 2020. Not being an arborist, or even someone who leaves the house much, I hadn't anticipated how difficult it would be to learn to identify trees from bark alone. Because the project was only of personal interest at the time, it was put on ice.

At the end of 2022, I learned of a couple of helpful projects. The first, and the primary source of my data: the City of Pittsburgh paid for a tree inventory a few years ago. The species of each tree, its GPS-tagged location, and other relevant data can be accessed as part of the city's Burghs Eye View program. It's worth noting that Burghs Eye View isn't just about trees; it's an admirable civic data resource in general. The second, which I unfortunately didn't have time to use, is Falling Fruit, which has a somewhat different aim.

Using the Burghs Eye View map, I took a trip to Pittsburgh and systematically collected several thousand photos of tree bark using an ordinary smartphone camera. The procedure was simple:

  1. Start taking regularly spaced photos from the roots, less than a meter away from the tree bark, and track upward.
  2. When the phone is at arm's reach, angle it upward to get one more shot of the canopy.
  3. Step to another angle of the tree and repeat, usually capturing about six angles of an adult tree.

Mistake 1 – Images too Large

Problem

The photos taken in Pittsburgh had a resolution of 3000×4000. An extremely common preprocessing technique in deep learning is scaling images down to, e.g., 224×224 or 384×384. Jeremy Howard even plays with this in the Fast.ai course with a technique called progressive resizing: images are scaled to 224×224, a model is trained on them, and then the same model is trained on images scaled to something more like 384×384.

I spent a lot of time trying to make this work, to the point of using cloud compute services to handle much larger images, to no avail. Ready to give up, I scoured my notebooks and noticed that some of the trees the model confused most looked very similar once scaled down to that size. It occurred to me that a lot of important fine detail in the bark was probably getting lost in that kind of downscaling. Okay, but so what?

Solution

Cutting the images up. I suspect this solution is somewhat specific to problems like this one. It doesn't seem like it would be useful for classification purposes to cut photos of fruit or houses into many much smaller patches. But tree bark is basically just a repeating texture. Even before realizing the next mistake, I noticed improvements by cutting down to 500×500 patches, then a more drastic improvement by going down to 250×250.

In some ways this gave me a lot more data. If you follow the math, a 3000×4000 image becomes at most 192 usable 250×250 patches (12 across by 16 down). At first I thought that was a little suspicious, but I looked around and doing it this way isn't without precedent. There is a Kaggle competition for histopathological cancer detection where this technique comes up, for instance.
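For the curious, the tiling itself is simple. Here is a minimal sketch with PIL of how a photo can be cut into non-overlapping patches; it's an illustration rather than the exact script in the repo.

from PIL import Image

def tile_image(path, patch_size=250):
    # Cut a photo into non-overlapping patch_size x patch_size patches,
    # discarding any leftover border that doesn't fill a full patch.
    image = Image.open(path)
    width, height = image.size
    patches = []
    for top in range(0, height - patch_size + 1, patch_size):
        for left in range(0, width - patch_size + 1, patch_size):
            patches.append(image.crop((left, top, left + patch_size, top + patch_size)))
    return patches

# A 3000x4000 photo yields 12 x 16 = 192 patches at 250x250.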

Mistake 2 – Too Much Extraneous Data

Problem

A non-trivial amount of this data got thrown out. At the time, I was working under the impression that the AI would need to take in as much data as it could get. What I hadn't considered was that some kinds of tree data, even some kinds of bark data, would be substantially more useful than others for classifying the trees. To borrow a data science idea, I hadn't done a principal component analysis, and I wasted a lot of time. Many early training sessions were spent trying to get the model to classify trees based on images that included:

  • Roots and irregular branches
  • Soil and stray leaves
  • Tree canopy that wasn’t specific
  • Excessively blurry images
  • Tree bark that was covered in lichen or moss, damaged, diseased
  • Bark that was potentially useful, but shot at an angle or distance that just got in the way

The end result was that it might have required an inhumanly large dataset for the model to learn anything meaningful. Early models could somewhat distinguish photos of oaks from pines this way, but the results were too poor to be worth reporting.

Solution

Well, I ended up with two solutions.

The first was training a binary classifier to detect usable bark. The first time this came up, I was still working with the 500×500 patches, which worked out to just shy of 200,000 images. Not exactly a number you can sort by hand, but I've never seen an ocean that I didn't think I could boil. I spent an afternoon sorting 30,000 images, realized I had only sorted 30,000 images, and then realized I had accidentally made a halfway decent training set for a binary classifier.

That classifier sorted the remainder of those images in under an hour.

The second solution was, unfortunately, quite a bit less dramatic. It involved simply going through the original photos, filtering out the ones that weren't basically platonic bark taken at about torso level, and then repeating the process of dividing the rest into patches and feeding those into the usable bark sorter.
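To make that concrete, here is a rough sketch of what a sorting pass over the patches could look like. The architecture, checkpoint, and directory names are placeholders, not the repo's actual code; the real sorter just needs to be any trained two-class model that returns a confidence alongside its accept/reject decision.

import glob
import pandas as pd
import timm
import torch
from PIL import Image
from torchvision import transforms

# Placeholder two-class model standing in for the trained usable bark sorter;
# in practice its trained weights would be loaded from a checkpoint here.
bark_sorter = timm.create_model("deit3_base_patch16_224", pretrained=False, num_classes=2)
bark_sorter.eval()

patch_paths = glob.glob("dataset0/pittsburgh_torso-250x250/**/*.jpg", recursive=True)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

accepted, rejected = [], []
with torch.no_grad():
    for path in patch_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        probs = torch.softmax(bark_sorter(x), dim=1).squeeze(0)
        conf, pred = torch.max(probs, dim=0)
        row = {"path": path, "confidence": conf.item()}
        (accepted if pred.item() == 1 else rejected).append(row)

pd.DataFrame(accepted).to_csv("torso_accept.csv")
pd.DataFrame(rejected).to_csv("torso_reject.csv")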

So, what does it actually look like?

Alright, we’re getting into a little of the code. First, we need to import some standard libraries. I’ll do some explaining for the uninitiated.

pandas handles data manipulation, here mostly CSV files. If you’ve done any work with neural networks before, you might have seen images being loaded into folders, one per class. I personally prefer CSV files because they make it easier to include other relevant things besides just the specific class of the image. I’ll show you an example shortly.

matplotlib is a library for creating, annotating, and displaying charts.

numpy is a library for more easily handling matrix operations.

PIL is a library for processing images. It's used here mostly for loading them, which is why we import only the Image class from it.

In [1]:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

import os

Something that was really helpful was the use of confidence values. For every patch that the usable bark sorter classified, it also returned its confidence in the classification. Looking at the results, it had a strong inclination to reject perfectly usable bark, but it also wasn't especially confident about those decisions. We can use that to our advantage by only taking the bark that it was very confident about.

In [2]:
def confidence_threshold(df, thresh=0.9, below=True):
    # Keep only rows whose classification confidence is below (or above) the threshold.
    if below:
        return df[df['confidence'] < thresh]
    return df[df['confidence'] > thresh]

Loading the Data

What these files will ultimately look like is still in flux. Today, I’ll be walking you through some older ones that still have basically what we need right now. First, we load the partitions that were accepted or rejected by the usable bark sorter. We also do a little cleanup of the DataFrame.

In [49]:
reject_file = "../torso_reject.csv"
accept_file = "../torso_accept.csv"

reject_df = pd.read_csv(reject_file, index_col=0)
accept_df = pd.read_csv(accept_file, index_col=0)

accept_df
Out[49]:
path confidence
0 dataset0/pittsburgh_torso-250x250/0001/2023010… 0.776984
1 dataset0/pittsburgh_torso-250x250/0001/2023010… 0.999994
2 dataset0/pittsburgh_torso-250x250/0001/2023010… 0.890233
3 dataset0/pittsburgh_torso-250x250/0001/2023010… 0.503523
4 dataset0/pittsburgh_torso-250x250/0001/2023010… 0.999959
108057 dataset0/pittsburgh_torso-250x250/0079/2023010… 0.998933
108058 dataset0/pittsburgh_torso-250x250/0079/2023010… 0.999988
108059 dataset0/pittsburgh_torso-250x250/0079/2023010… 0.998984
108060 dataset0/pittsburgh_torso-250x250/0079/2023010… 1.000000
108061 dataset0/pittsburgh_torso-250x250/0079/2023010… 0.999999

108062 rows × 2 columns

I goofed when generating these files by accidentally leaving a redundant ".jpg" in a script. The files have since been renamed, but because we're using older CSVs, we need a quick fix here. Below is a quick helper function that pandas can map over each path in the DataFrame.

In [50]:
def fix_path(path):
    path_parts = path.split('/')

    # Remove the redundant '.jpg' that the generation script left in the filename
    fn = path_parts[-1]
    fn = fn.split('_')
    fn[1] = fn[1].split('.')[0]
    fn = '_'.join(fn)
    path_parts[-1] = fn

    # Point at the renamed dataset directory
    path_parts[0] = "../dataset"
    return '/'.join(path_parts)
In [51]:
accept_df['path'] = accept_df['path'].map(lambda x: fix_path(x))
reject_df['path'] = reject_df['path'].map(lambda x: fix_path(x))

And just to make sure it actually worked okay:

In [52]:
accept_df['path'].iloc[0]
Out[52]:
'../dataset/pittsburgh_torso-250x250/0001/20230106_113732(0)_7_1.jpg'

Alright, so what’s this?

In [53]:
accept_df
Out[53]:
path confidence
0 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.776984
1 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.999994
2 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.890233
3 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.503523
4 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.999959
108057 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.998933
108058 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.999988
108059 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.998984
108060 ../dataset/pittsburgh_torso-250x250/0079/20230… 1.000000
108061 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.999999

108062 rows × 2 columns

In [54]:
reject_df
Out[54]:
path confidence
0 ../dataset/pittsburgh_torso-250x250/0001/20230… 1.000000
1 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.999809
2 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.999998
3 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.997332
4 ../dataset/pittsburgh_torso-250x250/0001/20230… 0.999979
93149 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.651545
93150 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.999999
93151 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.982236
93152 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.991354
93153 ../dataset/pittsburgh_torso-250x250/0079/20230… 0.897549

93154 rows × 2 columns

Around 108K patches were accepted, and around 93K were rejected. Looking at the confidence column, we can see that the model is somewhat uncertain about some of these. Let's take a look at that.

First, we need a couple more helper functions. This one opens an image, converts it to RGB, resizes it to something presentable, and converts it to a numpy array.

In [22]:
def image_reshape(path):
    image = Image.open(path).convert("RGB")
    image = image.resize((224, 224))
    image = np.asarray(image)
    return image

Next, it will be helpful to be able to see a few of these patches at once. This next function samples patches in batches of 16; a later cell arranges them in a 4×4 grid with matplotlib.

In [23]:
def get_sample(path_list):
    print("Generating new sample")
    new_sample = np.random.choice(path_list, 16, replace=False)
    
    samples = []
    paths = []
    for image in new_sample:
        samples.append(image_reshape(image))
        paths.append(image)
    return samples, paths

get_sample() takes a list of paths, so let’s extract those from the DataFrame.

In [24]:
reject_paths = reject_df['path'].values.tolist()
accept_paths = accept_df['path'].values.tolist()
In [26]:
samples, paths = get_sample(reject_paths)
plt.imshow(samples[1])
Generating new sample
Out[26]:
<matplotlib.image.AxesImage at 0x7fb632dda470>

Okay, let’s see a whole grid of rejects!

In [27]:
def show_grid(sample):
    rows = 4
    cols = 4
    img_count = 0
    fig, axes = plt.subplots(nrows=rows, ncols=cols, figsize=(15, 15))

    for i in range(rows):
        for j in range(cols):
            if img_count < len(sample):
                axes[i, j].imshow(sample[img_count])
                img_count += 1
In [28]:
def new_random_grid(path_list):
    sample, paths = get_sample(path_list)
    show_grid(sample)
    return sample, paths
In [30]:
sample, paths = new_random_grid(reject_paths)
Generating new sample

These are samples from the whole of the rejects list. I encourage you to run this cell a bunch of times if you're following along at home. You should see a goodly number of things that aren't remotely bark: bits of streets, signs, grass, dirt, and other stuff like that. You'll also see a number of patches that are bark, but aren't very good for training. Some of it is blurry or out of focus, or comes from a part of the tree that I later learned wasn't actually useful.

A fun thing to try is to see the stuff that the model rejected, but wasn’t very confident about.

In [35]:
reject_df_below_85 = confidence_threshold(reject_df, thresh=0.85, below=True)
reject_df_below_85_paths = reject_df_below_85['path'].values.tolist()
sample, paths = new_random_grid(reject_df_below_85_paths)
Generating new sample

Running this just once or twice usually reveals images that are a bit more… on the edge of acceptability. It's not always clear why something got rejected. I haven't done this yet, but something in the works is examining these low-confidence rejects during either training or, more likely, testing of the model. Mercifully, there is enough unambiguously accepted data that doing so hasn't been a pressing need.

Speaking of, we should take a look at what got accepted, too.

In [39]:
sample, paths = new_random_grid(accept_paths)
Generating new sample

As above, these are all of the accepted bark patches, not just those that have a high confidence. Let’s see what happens when we look at the low-confidence accepted patches.

In [45]:
accept_df_below_65 = confidence_threshold(accept_df, thresh=0.65, below=True)
accept_df_below_65_paths = accept_df_below_65['path'].values.tolist()
sample, paths = new_random_grid(accept_df_below_65_paths)
Generating new sample

Here we see the bias I was talking about before. Note that when we were looking at the rejected bark, we used a threshold of 0.85 and were already seeing a lot of patches that could easily have been accepted. Here, we are looking at a confidence threshold of 0.65 and are still not seeing many that would definitely be rejected.

The cause of the bias is unknown. I made a special point of training the usable bark sorter on a roughly even split of acceptable and unacceptable bark. Because this was just a secondary tool for the real project, I haven't had time to deeply investigate why this might be. I suspect there is some deep information-theoretic reason for it, perhaps one that will be painfully obvious to any high schooler once the field is more mature. The important thing now is that it's a quirk of the model I caught early enough to use.

And what do these patches represent?

Having seen the images we are working with, it might be a good idea to look at what species we're actually working with.

In [55]:
specimen_df = pd.read_csv("../specimen_list.csv", index_col=0)
specimen_df
Out[55]:
common_name latin_name family
id
1 norway_maple Acer_platanoides maple
2 norway_maple Acer_platanoides maple
3 norway_maple Acer_platanoides maple
4 red_maple Acer_rubrum maple
5 freeman_maple Acer_x_freemanii maple
75 northern_red_oak Quercus_rubra beech
76 northern_red_oak Quercus_rubra beech
77 white_oak Quercus_alba beech
78 bur_oak Quercus_macrocarpa beech
79 swamp_white_oak Quercus_bicolor beech

79 rows × 3 columns

Okay, I took photos of 79 different trees. It was actually 81, but the GPS signal on the last two was too spotty to match them on the map, and they had to be excluded. How can we break this down?

In [76]:
family_names = specimen_df.family.value_counts()
family_names = family_names.to_dict()
f_names = list(family_names.keys())
f_values = list(family_names.values())

header_font = {'family': 'serif', 'color': 'black', 'size': 20}
axis_font = {'family': 'serif', 'color': 'black', 'size': 15}
plt.rcParams['figure.figsize'] = [10, 5]

plt.bar(range(len(family_names)), f_values, tick_label=f_names)
plt.title("Breakdown of Specimens Collected in Pittsburgh, by Family",
         fontdict=header_font)
plt.xlabel("Family", fontdict=axis_font)
plt.ylabel("Number of Specimens", fontdict=axis_font)
plt.show()

When I started training, it made sense to focus on the family level. A family will inherently have at least as many images to work with as any one of its species, and usually many more, and I assumed that variation would be smaller within a family. Interestingly enough, at least within this dataset, the difference in model quality at the family and species levels has so far been negligible.

In [80]:
common_names = specimen_df.common_name.value_counts()
common_names = common_names.to_dict()
c_names = list(common_names.keys())
c_values = list(common_names.values())
In [83]:
plt.bar(range(len(common_names)), c_values, tick_label=c_names)
plt.title("Breakdown of Specimens Collected in Pittsburgh, by Species",
         fontdict=header_font)
plt.xlabel("Species (common names)", fontdict=axis_font)
plt.ylabel("Number of Specimens", fontdict=axis_font)
plt.xticks(rotation=45, ha='right')
plt.show()

The Model and Dataset Code

Full code can be found on the GitHub repo, but here are some important parts of the training code. First, because we are using a custom dataset, we need to make a class that will tell the dataloaders what to do. Some of this might need a little bit of explaining.

We have to import some things to make this part of the notebook work. BarkDataset inherits from PyTorch's Dataset class. To initialize it, we only need to pass in a DataFrame, e.g. accept_df. I've shown before that CSVs let us carry a lot of other data that supports the interpretation of the dataset, but BarkDataset itself only needs two things: the column holding the paths of the images themselves, and the column that defines their labels.

You might be a little confused about the line self.labels = df["factor"].values. The full code converts either the species-level or family-level specimens into a numerical class. For example:

"eastern_white_pine": 0

The label is the 0. When making predictions, we will convert back from this label for clarity to the user, but that isn’t how the model sees it.
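That factor column isn't built in this notebook, but here's a rough sketch of how it could be created with pandas, assuming the numbered folder in each patch path matches the specimen id in specimen_list.csv (the repo may order or construct the classes differently):

# Hypothetical sketch: assign each species an integer class and attach it to the
# patch DataFrame as a 'factor' column, which is what BarkDataset below expects.
label_map = {name: i for i, name in enumerate(sorted(specimen_df["common_name"].unique()))}

# The folder name in each patch path (e.g. '0001') is assumed to be the specimen id,
# so we can look up each patch's species and then map it to an integer label.
specimen_ids = accept_df["path"].str.split("/").str[-2].astype(int)
accept_df["factor"] = specimen_ids.map(specimen_df["common_name"]).map(label_map)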

After loading the DataFrame, we also define a set of transforms in self.train_transforms. At minimum, this is where we scale images down to 224×224 and normalize them. If you're wondering, the normalization values are the standard ImageNet statistics used throughout the field.

In addition to these standard steps, the transforms module also offers a wide variety of transforms for data augmentation. We can use data augmentation to squeeze more information out of a base dataset; we just need to be careful to introduce only the kinds of variability that would actually occur if we collected more data.

You’ll notice two other methods in this class: __len__ and __getitem__. The former just returns the number of items in the DataFrame. The latter is where a single image is actually loaded using Image from the PIL library, and the label is matched from that image’s location in the DataFrame. The transforms are then applied to the image, and we get both the loaded image and its label returned in a dictionary.

In [86]:
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from tqdm import tqdm

class BarkDataset(Dataset):
    def __init__(self, df):
        self.df = df
        self.fns = df["path"].values
        self.labels = df["factor"].values

        # Resize, augment, and normalize with standard ImageNet statistics
        self.train_transforms = transforms.Compose([
            transforms.ToTensor(),
            transforms.Resize((224, 224)),
            transforms.RandomHorizontalFlip(),
            transforms.RandomRotation(60),
            transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.5),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
            ])

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(row.path).convert("RGB")
        label = self.labels[idx]

        image = self.train_transforms(image)

        return {
            "image": image,
            "label": torch.tensor(label, dtype=torch.long)
        }
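To give a sense of how this gets used, here's a hypothetical usage snippet; it assumes the DataFrame already carries the 'factor' column described above.

# Hypothetical usage: wrap the labeled DataFrame in the dataset and a DataLoader.
train_dataset = BarkDataset(accept_df)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4)

batch = next(iter(train_loader))
print(batch["image"].shape)  # torch.Size([32, 3, 224, 224])
print(batch["label"].shape)  # torch.Size([32])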

Next, we’ll look at the model. The model can be expanded in a lot of ways with a class of its own, but at this stage of the project’s development, we are just starting with the pretrained weights and unchanged architecture as provided by timm.

In [ ]:
import timm
import torch.nn as nn
import torch.optim as optim

model = timm.create_model("deit3_base_patch16_224", pretrained=True, num_classes=N_CLASSES)
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=100, T_mult=1,
                                                           eta_min=0.00001, last_epoch=-1)
model, history = training(model, optimizer, criterion, scheduler, device=device, num_epochs=100)
score = testing(test_df, model)
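The training() and testing() functions live in the repo rather than in this notebook. As a rough idea of what a single epoch inside training() boils down to, here's a simplified sketch using the objects defined above; it isn't the repo's exact code.

# Simplified sketch of one training epoch.
model.train()
running_loss = 0.0
for batch in tqdm(train_loader):
    images = batch["image"].to(device)
    labels = batch["label"].to(device)

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

    running_loss += loss.item() * images.size(0)

scheduler.step()  # the cosine annealing schedule is stepped once per epoch here
print(f"epoch loss: {running_loss / len(train_loader.dataset):.4f}")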

Now, I’ve gone through a lot of iterations of the various components here. Just off the top of my head:

  • I first started using an EfficientNet architecture, but got curious to see how one based on vision transformers would compare, and it wasn’t really a contest.
  • I originally used an Adam optimizer. Training with vanilla SGD proved slower, but also gave more consistent results.
  • I've settled on cosine annealing as the learning rate scheduler, but have also had intermittent success with CyclicLR and MultiCyclicLR.
  • Most recently, I've been wondering whether cross-entropy is the right loss function. I quickly replaced accuracy with ROC as the evaluation metric because the full dataset has multiple, imbalanced classes. I suspect this also has implications for training, but I have so far not found a loss function that works better.

Changing the parameters here is still an active area of my research.

Results

This is a confusion matrix of the results so far. It plots the predicted label against the actual label, and makes it a little easier to see where things are getting mixed up. Ideally, there would only be nonzero values along the diagonal.

In [152]:
#!pip install seaborn
import confusion_stuff
import seaborn as sn

matrix = confusion_stuff.matrix

ind = confusion_stuff.individuals
ind = {k: v for k, v in sorted(ind.items(), key=lambda item: item[1])}
ind_keys = ind.keys()
ind_vals = ind.values()

df_cm = pd.DataFrame(confusion_stuff.matrix)
sn.heatmap(df_cm, cmap="crest")
Out[152]:
<AxesSubplot:>

The average ROC for this model across all 21 species is about 0.80. For reference, 1.0 would be a perfect score and 0.5 would be random guessing. The graph is somewhat muddled because there are so many classes, so the scores for the individual classes are listed below.
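If you're wondering where those per-class numbers come from, each one is a one-vs-rest ROC AUC for that species. Here's a minimal sketch of that calculation with scikit-learn, using random placeholder arrays in place of the real test outputs; the repo's evaluation code differs in the details.

import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder arrays standing in for the real test outputs:
# y_true holds integer class labels, y_prob holds per-class softmax probabilities.
rng = np.random.default_rng(0)
n_samples, n_classes = 500, 21
y_true = rng.integers(0, n_classes, size=n_samples)
y_prob = rng.dirichlet(np.ones(n_classes), size=n_samples)

# One-vs-rest ROC AUC for each class, then the unweighted average across classes.
per_class = {c: roc_auc_score((y_true == c).astype(int), y_prob[:, c]) for c in range(n_classes)}
mean_auc = sum(per_class.values()) / n_classes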

In [153]:
for i, j in zip(ind_keys, ind_vals):
    print(f"{i}: {j}")
common_pear: 0.508
pin_oak: 0.542
red_maple: 0.611
colorado_spruce: 0.69
kentucky_coffeetree: 0.736
chestnut_oak: 0.741
japanese_zelkova: 0.784
northern_red_oak: 0.823
swamp_white_oak: 0.838
scotch_pine: 0.847
sugar_maple: 0.848
callery_pear: 0.857
norway_maple: 0.864
austrian_pine: 0.873
thornless_honeylocust: 0.875
bur_oak: 0.888
eastern_white_pine: 0.888
ginkgo: 0.893
white_oak: 0.912
amur_corktree: 0.921
freeman_maple: 0.922
In [154]:
# ind was already sorted by ascending ROC above, so reverse both lists together
# to get descending order while keeping each species paired with its score
ind_keys = list(ind_keys)[::-1]
ind_vals = list(ind_vals)[::-1]
In [155]:
plt.bar(range(len(ind_keys)), ind_vals, tick_label=ind_keys)
plt.title("ROC of Individual Species in Best Model",
         fontdict=header_font)
plt.xlabel("Species", fontdict=axis_font)
plt.ylabel("ROC", fontdict=axis_font)
plt.xticks(rotation=45, ha='right')
plt.show()

Having plotted these, it’s not hard to see where the model is weak.

Next Steps

I’m still trying new things and learning a lot from this project. Some things on the horizon:

  • Trying one of the larger DeiT models.
  • Ensemble method: training a number of classifiers with a smaller number of classes and having them vote using their confidence scores.
  • Ensemble method: break one large test image into patches, have models vote on each of the patches, and use the majority, the highest confidence, or some other metric as the prediction.
  • Gathering more data, especially expanding to include more species.
  • Weird idea: information theoretical analysis of the tree bark.

Early days!

CartograCLE

Haha, get it? ‘Cause it’s making maps about Cleveland, and…

Anyway, in my post about the maternal mortality case study, I alluded to another project in R that involved USGS data and interactive maps. This was actually inspired by the Great Lakes Data Watershed. I took some time out to learn how to reproduce most of the functionality of one of their dashboards, and I learned a lot in the process. You can follow along with the GitHub repo here, or check out the demo here.

What are we looking at here?

This program calls the USGS REST API to get near real-time data from water monitoring stations around Cuyahoga County, Ohio. It formats that data to get the flow and stage of rivers around the county, and displays that information at the relevant points on the map. Unfortunately, it's only hosted on GitHub Pages, so it isn't making regular calls to the USGS; this could be remedied with real hosting that I am not going to pay for personally. For the curious, I used OpenStreetMap for the map data and Leaflet for the interactive map elements.
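The project itself is written in R with Leaflet handling the front end, but to give a sense of what the data retrieval amounts to, here's a rough Python sketch of the kind of request involved. The endpoint, parameter codes (00060 for discharge/flow, 00065 for gage height/stage), county FIPS code, and JSON field names reflect my understanding of the USGS Instantaneous Values service, so treat the details as approximate rather than as the project's actual code.

import requests

# Approximate sketch of the dashboard's data call (the real project uses R).
resp = requests.get(
    "https://waterservices.usgs.gov/nwis/iv/",
    params={
        "format": "json",
        "countyCd": "39035",           # Cuyahoga County, OH
        "parameterCd": "00060,00065",  # discharge (flow) and gage height (stage)
        "siteStatus": "active",
    },
    timeout=30,
)
resp.raise_for_status()

# Print the most recent reading for each site and variable.
for series in resp.json()["value"]["timeSeries"]:
    site = series["sourceInfo"]["siteName"]
    variable = series["variable"]["variableName"]
    latest = series["values"][0]["value"][-1]["value"]
    print(f"{site}: {variable} = {latest}")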

What’s next?

I have been trying to get into GIS applications for years, and I never before had the skill with R to explore this environment with that kind of tooling. Because this is the first time I’ve used a REST API to get data of this type, I’m thinking about other scientific datasets that have frequent updates. Something that I have been paying particular attention to is Landsat data and its possible applications to precision agriculture. Early Days!

Maternal Mortality Case Study

I recently completed the Google Data Analytics Professional Certificate program available through Coursera. It was a big adventure covering a large swath of material I had never taken the time to look at, chief among them probably getting the hang of R. It’s a programming language I have poked at before, but never really examined in a directed way. Coming largely from a background of languages like Python and JavaScript, R is really, genuinely different.

Anyway, the course calls for the completion of a case study. Instead of using the bike dataset, I opted to look at something that has been hanging over my head for quite some time now: maternal mortality rates in the US. My initial analysis is currently being hosted on GitHub Pages and can be found here. The GitHub repo itself can be found here. As of this writing, the findings are preliminary; this project will be subject to a lot of revision, but I’m happy with this initial draft.

Why, though?

The state of women's health in the US has never been spectacular, but the issue seems to be coming to a head in recent months. The recent overturning of Roe v. Wade, declining fertility rates for sociological and economic reasons, and concerns about plastics having pronounced physiological effects are on many people's minds. Additionally, I know people for whom this is a personal concern.

What did you find?

It's still a preliminary analysis subject to expansion and review, but it looks like one big culprit in the drastic increase in maternal mortality rates in the US is increasing maternal age. I don't mean women in their 40s and 50s having unsafe geriatric pregnancies; the average age of women dying from maternal mortality conditions is still under 36. In fact, of particular interest was that maternal mortality rates have cleaved around age 40 in recent years, with cases declining among women in the older cohorts.

Everything after that was beyond the scope of my analysis. There could be any number of reasons why maternal ages are increasing. I posited that a general lack of trust in the economy following the Great Recession of 2008 could be one reason, but climate change, labor conditions, and the general cultural disposition toward families could also be factors.

Also, while discussing the project with a friend, we got the idea of comparing maternal mortality to mining. After using R to run some statistics from MSHA data, we ended up with this:

Graph comparing the fatalities per 100,000 of mothers and miners, indicating that a career in mining has been safer from this perspective since 2011

Yeah. Fatalities for people working in the coal mining industry were lower per 100,000 than those for mothers. In my mind, there’s something somber and concerning about looking at the cultural touchstone of dirty and dangerous work, comparing it to something that is quintessentially human, and being surprised at which one is more dangerous. Very generously, this could be characterized as a victory for improvements in safety standards and the continued application of engineering to the well-understood problems of mining, but given the sharp increase in maternal mortality in the US in particular, I’m personally having a hard time conceptualizing it that way.

What’s next?

Broadly, taking this course and doing this case study has given me a new lens on learning programming languages. Early in the process of learning R, my brain constantly thought of how it would solve problems with Python, tools that I am much more familiar with. R felt kludgy, cumbersome, and unnatural, right up until it didn’t. At some point, R became the natural tool for the job of data analysis, and I started solving problems in a way that I know would be difficult if I were to do the same in Python. From now on, I aspire to learn tools such that they are obviously the perfect fit for their respective jobs.

Less abstractly, I will be doing more projects with R. It allows for a certain fluidity in exploring data that I’m gaining a deep appreciation for, and I’m very curious to see how involved the ecosystem is. For instance, while working on the maternal mortality case study, I took the liberty of learning just a bit about geographic information systems. Within a day, I got the hang of calling the USGS API and applying its data to an interactive map, a project which I will be posting about very shortly. Early Days!