Penny is a simple tool to help us understand what wealth and poverty look like to an artificial intelligence built with neural networks and machine learning. The tool lets you play with the landscape of a city by adding and removing urban features like buildings, parks, and freeways in high-resolution satellite imagery.
How does Penny work?
Penny is an AI built by Stamen Design and researchers at Carnegie Mellon University on top of GBDX, DigitalGlobe’s analytics platform. DigitalGlobe operates the world’s most advanced commercial imaging satellites, and GBDX allows developers to build products on top of DigitalGlobe’s high resolution imagery. Click here to find out more about how you can use this platform to develop products of your own.
Explore New York or St. Louis through the "eyes" of Penny, read more about the project below, and let us know your thoughts. With this interface, you can explore how different kinds of features make a place look wealthy or poor to an AI, and in the process poke at the black box of machine learning: see what makes it tick, where its judgments make sense to you, and where they don't. We're hoping Penny provokes a lively conversation about how machines are increasingly being used to make sense of the world, sometimes just as we do and sometimes quite differently, in ways that are by turns useful and curious.
1. First, we started with income data from the U.S. Census Bureau
We first brought in data on median household income for census tracts around the city. We then divided the census tracts into smaller areas that match DigitalGlobe's satellite image tiles. Coloring these areas by income level produces a map of household income across the city. In the map at right, green represents areas in the highest income quartile (median annual household income of $71,876 and above), red represents the lowest quartile ($34,176 and below), and orange and yellow represent the two middle quartiles ($34,176–$49,904 and $49,904–$71,876, respectively).
Read more about the census data used.
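The quartile coloring described above amounts to a simple threshold lookup. Here is an illustrative Python sketch using the dollar thresholds quoted in the text; the function name, tile identifiers, and example incomes are hypothetical, not from the actual project:

```python
# Income quartile breakpoints (annual median household income, USD),
# taken from the thresholds quoted in the text above.
BREAKS = [34176, 49904, 71876]

def income_color(income):
    """Map a tract's median household income to a map color (hypothetical helper)."""
    if income <= BREAKS[0]:
        return "red"      # lowest quartile
    elif income <= BREAKS[1]:
        return "orange"   # lower-middle quartile
    elif income <= BREAKS[2]:
        return "yellow"   # upper-middle quartile
    else:
        return "green"    # highest quartile

# Example: color a few hypothetical tile incomes.
tiles = {"tile_a": 28000, "tile_b": 45000, "tile_c": 90000}
colors = {tile_id: income_color(v) for tile_id, v in tiles.items()}
```

Exact behavior at the boundary values is an assumption here; the text gives the quartile ranges but not how ties are assigned.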
2. Next, we added satellite imagery from DigitalGlobe

We matched each of these smaller areas to a high-resolution DigitalGlobe image tile, giving us two aligned layers of data: income levels from the census and imagery from the satellite.

3. Then we gave this information to a neural network
We trained an artificial intelligence to predict the median household income of any area in the city using both layers of data: the census incomes and the satellite imagery. The AI looked for patterns in the imagery that correlate with the census data. Over time, the neural network learned which patterns best predict high and low income levels. We can then ask the model what it thinks the income level of a place is, based on nothing but a satellite image. We call this AI "Penny".
What we’ve learned by playing with Penny ourselves
After letting Penny loose to make predictions about household income from satellite images alone, we began taking a close look at areas in New York City that Penny predicted had high or low incomes. It is clear that Penny learned patterns in the imagery that correlate with the census data: different types of objects and shapes are strongly associated with different income levels. For example, lower-income areas tend to have baseball diamonds, parking lots, and large, similarly shaped buildings (such as housing projects). In middle-income areas we see more single-family homes and apartment buildings. Higher-income areas tend to have green spaces, tall shiny buildings, and single-family homes with lush backyards.
You can read more about how this neural network functions here and here.
We repeatedly ran the census data and DigitalGlobe imagery through the neural network until it began accurately predicting income levels from satellite imagery alone. We call this trained model Penny. The images on the left show the census data and then Penny's predictions for Lower Manhattan.
You can change the imagery and explore how Penny thinks
We’ve built an interface that lets you play with what the neural network knows about place and income levels. It lets you place the Empire State Building wherever you want in New York, or put the Gateway Arch anywhere in St. Louis, and see what impact each of these changes has on what the model thinks.
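Conceptually, the interface pastes a feature into a tile's imagery and re-runs the model on the edited tile. The sketch below is a hypothetical illustration of that loop: the `predict_income_score` stand-in just measures average greenness, whereas Penny consults its trained neural network, and all function names here are invented for the example:

```python
import numpy as np

def predict_income_score(tile):
    # Stand-in for the model: higher mean green channel -> higher score.
    # Penny's real predictor is a trained neural network, not this heuristic.
    return float(tile[..., 1].mean())

def add_feature(tile, patch, row, col):
    """Return a copy of `tile` with the feature `patch` pasted at (row, col)."""
    edited = tile.copy()
    h, w = patch.shape[:2]
    edited[row:row + h, col:col + w] = patch
    return edited

tile = np.full((64, 64, 3), 0.4)                    # flat gray RGB tile
trees = np.zeros((16, 16, 3)); trees[..., 1] = 0.9  # a green "trees" patch

before = predict_income_score(tile)
after = predict_income_score(add_feature(tile, trees, 10, 10))
```

For a real model the same patch can shift the prediction differently depending on where it lands in the tile; this toy score reacts only to the patch contents.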
Every feature that you add affects Penny’s predictions differently. The same feature will have a different effect depending on where it’s placed. Here are some examples to get you started:
How to gentrify East Harlem
East Harlem is a neighborhood in New York where, according to the census, median household incomes fall in the medium-low range. Penny likewise classifies the area as low income based on satellite imagery. You can also see that there don't seem to be many trees, and that the buildings are large housing blocks. Dropping a large number of trees into this neighborhood can have a significant impact on what Penny thinks.
Visit this area on the map.
How to make Gramercy Park a low-income neighborhood
Gramercy Park is one of New York’s oldest and greenest neighborhoods, built around a private park only available to residents. It’s also one of the wealthiest neighborhoods in town, and has been since the park was built in the 1800s. Adding urban features like parking lots, freeways, and apartment buildings can change what Penny thinks about places like Gramercy Park.
Visit this area on the map.
We’re hoping to spark a conversation about artificial intelligence, machine learning, cities, infrastructure, satellite imagery, and big data. How machines understand these things has increasingly important implications for how we understand patterns of urbanization, wealth, and the human condition.
If you make any interesting discoveries with the project, please let us know! And if you’d like to suggest a new piece of infrastructure that we haven’t already included in these interfaces, please reach out to us at firstname.lastname@example.org. Enjoy!