California Mincemeat Recipe

Obligatory Rambling About this Recipe

Last year, I wanted to try making mincemeat pies. I’m not entirely sure why. The only time I’d eaten these before was when my grandparents’ neighbor brought some to their Christmas parties when I was younger. Those were made with jarred filling. I don’t remember what they tasted like or even if I liked them. But whatever. Let’s blame the Great British Baking Show.

I looked at a bunch of recipes and realized I didn’t have some of the ingredients they called for, so like every home cook researching on the internet, I started substituting things I had on hand, which was fruit that grows well in a northern California backyard and a random collection of dried fruit that desperately needed to get used. This could have gone very wrong, but it was exactly the opposite. The pies were amazing. I made the recipe again from my notes and it came out just as good this year, so I thought it was time to share. I did make a mistake this year and put in much more pomegranate juice than I did last year, but it came out fine. So if you have 2 cups, great. If you’ve got more because you misread the recipe and peeled way too many pomegranates, go ahead and add that in. It’s pretty forgiving. You’re gonna reduce it anyway.

What does it taste like? The filling is rich (thanks, butter!), sweet, and tangy. It’s a bit citrusy (is that a word?) and bright. It doesn’t taste like any of the individual ingredients. I hate raisins but you don’t taste them. If you really don’t like a particular dried fruit, just leave it out and add more of something else.

This recipe makes enough for many dozen muffin tin pies. Probably 6 dozen? About 6 cups of filling. Seriously, cut the recipe in half unless you have a lot of people who want to eat tiny pies. But they keep well, so maybe just make the whole thing.

Ingredients:

2-3 cups pomegranate juice

1 cup brown sugar

1.5 quarts (6 cups) of fruit:

  • 4 Fuyu persimmons, diced, skin on
  • Dried fruit, diced (raisins, craisins, apricots, prunes)

1 inch of fresh ginger finely diced or grated

1 tsp. cinnamon

1/4 tsp. nutmeg

1/4 tsp. ground cloves

Zest of 1 tangerine

Zest + juice of one lemon

1/2 stick butter

1/3 cup rum

Procedure:

  1. In a medium sauce pan, simmer the pomegranate juice with the sugar to reduce it a little while you prepare the fruit.
  2. Chop the big fruit to make even sized pieces.
  3. Add the rest of the ingredients (except the rum) to the pan and simmer until thickened, stirring as needed to keep it from burning.
  4. Cool the mixture.
  5. Stir in the rum and refrigerate until needed. It’s better if it ages a bit in the fridge. I left mine a week or so.

Making Pies?

  1. Make your favorite pie crust dough (or buy some… whatever… it really doesn’t matter). I like the double crust recipe in the New Better Homes and Gardens Cookbook (i.e. the Betty Crocker, even though it’s not by or from Betty Crocker… it’s a long story) made with butter. You’ll need more than one batch to use all the filling.
  2. Roll it out pretty thin.
  3. Cut out circles big enough to fit in your muffin tins and place them in the cups. You’ll need to fiddle with them to get them to sit down inside the cups.
  4. Add the filling. DO NOT OVERFILL. If you put too much, it will leak over the edges when it starts to bubble and the pies will glue themselves to the bottom of the tin. Don’t say I didn’t warn you.
  5. Top the pie with another pie crust cutout that is about the same size as the tops of your pies. I used a star that conveniently had the same diameter as my muffin tin openings, but any shape (or none at all) will do.
  6. Bake them at 375° (?… whatever you normally bake your pies at) until they’re visibly brown and the filling is bubbly. When in doubt, go a little longer. You don’t want a soggy bottom.

Reflections on FOSS4G 2022 Firenze, Italia

I suspect this is going to be fairly random, but I’ll try to stay as organized as I can.

Conference

The main conference venue.

The Talks

This conference was the first I’ve been to in-person since FOSS4G Dar es Salaam, Tanzania, in 2018 and FOSS4G North America San Diego in 2019. It’s been four years since I’ve traveled internationally and three since I’ve had to physically navigate a conference. I’m so out of practice, but it was worth it.

I’ve missed being with my people. After the conference, I’m reminded that I don’t often fit in anywhere I’ve worked because I’m always the only person who does what I do. But you all do what I do too. We’re spread out all over the globe and magic happens when we come together. There’s an alchemy in openly sharing ideas that I don’t have anywhere else in my work life. I’m an academic and academia has struggled with sharing ideas out of fear of being “scooped”. But I love that in our community, if someone wants to do what I do, we just do it together. There’s enough work for everyone to have a piece… or many pieces… just try not to burn out because there’s really more work to do than any of us can take on.

The online viewing room for Andrew Middleton’s talk.

Both Steven Mather and Andrew Middleton mentioned me in their talks for my work with kite aerial photography (and being slightly crazy, shall we say?). I had no idea I was “user #1” for Open Drone Map. Yup. The first use case for Open Drone Map was kite aerial photography. It’s been even longer since I’ve had a kite in the air than since I’ve been to a conference. I need to remedy that. And also, I’m thinking that this might be a good opportunity to contribute more to Open Drone Map. Kite aerial photography is less organized than the photos you get from a drone flight. You can’t plan exactly where the kite goes, nor are all the shots completely nadir. The wind you need to lift the kite makes the camera swing and bounce a little. All of this means stitching photos is more difficult, and that’s why I sought Steven out that first time we met at FOSS4G Denver (Was it Denver? It’s been so long, I don’t really remember). Nothing else I’d tried to stitch photos worked very well. Anyway, my kites are fine, but I think I need to look into new sensors, maybe something with GPS included. Can you buy drone sensors separate from the drone? How expensive is it? Can someone fund this work?

Earlier in the conference (my brain wants to follow a research timeline, not the chronological timeline of the conference, and I want to get these thoughts down on “paper” so I’m going with it), I saw Paolo Dabove’s talk about classifying drone imagery to identify flowers for bee research. So much of what his team is doing is applicable to coastal dune vegetation (my area of interest). The scale is especially applicable – satellite imagery is just too coarse so you need to collect low-altitude imagery. I’m glad this was an academic talk because I’m looking forward to combing through all the details of their image classification in their paper.

I missed Jens Ingensand’s talk about 3D georeferencing historical photos and I really wanted to see it. I couldn’t figure out why, but now I’ve had time to sort it out – because I was in the GeoPandas talk by Joris van den Bossche followed by the MovingPandas talk by Anita Graser. You just can’t see it all, but again, I’m grateful that Jens’ talk was an academic one so I can follow up on the paper. The Pandas talks renewed my interest in getting better at Python. I mainly work in R because it’s what I’m familiar with and it’s got really good libraries for research-related geospatial workflows, but I know that Python is also very powerful and it has advantages for certain kinds of questions. I think I need to know both. And why not.

The Field Trips

I went on both the observatory and cartography museum field trips, and I’m glad I did.

The supposed birthplace of photogrammetry.

At the cartography museum, I realized that while we think we’re all modern and doing important new things, our field is rooted in ancient science. Humans have been navigating and communicating locations probably for our entire existence. What we do is fundamental to us as humans. Maybe that’s why I feel so deeply about my career.

Inside the observatory.

At the observatory, I kind of just felt like a kid. Looking at Saturn through the telescopes and seeing the rings was mind-blowing. You see artistic renderings of the planets and I kind of thought they must be fudging things a bit to make the point, but darn it, that planet has a big set of rings around it and you can see it for yourself! It just looked like a little white stamp on a black field, but what an impact on my mind. And our hosts answered all manner of questions, from questions about space and the composition of the planets to mine about how that automatic telescope knows where it is and how to orient the lens.

The Board Meeting

Most people wouldn’t think an 8 hour board meeting sounds like much fun, but I was privileged to get to spend this much time with some really amazing people, hashing out how to help our community. Adam Steer wrote a lovely summary of what we discussed and steps we’ve taken to address many of the comments made at the AGM Meeting, so I’ll send you there for details.

Suggestions

This brings me to a suggestion for future FOSS4G conferences: keep the academic tracks, but don’t physically and metaphorically separate them from the rest of the conference. The language used to describe the academic track and the physical separation kept the developer crowd away, and that’s just not helpful. The academic track had some of the most complex and sophisticated use cases for FOSS4G software that I’ve ever seen. Academics are developers too – we have to make the tools we need but can’t find off the shelf. So why separate it? The only difference I see is that the academics write a paper. I see no reason to separate us for the talks. I went back and forth between the “regular” and “academic” sessions – racking up several miles in the process I might add.

The crowd at the opening session.

Fair warning, I’m about to get fairly critical. I intend these statements to not place blame, but rather inspire change and offer commentary rooted in my own experience.

I’m surprised that our community was not more mindful of protecting each other in our continuing global pandemic. If you spend any time with me in person, there’s a good chance you’ll learn that I have asthma and allergies. Both are managed fairly well, but I’ve taken medications all my life and have a team of medical specialists who have helped me get to this stable place. That sounds dire, but it’s pretty normal for people in my situation. I know I am not the only one at high risk of complications from viruses in our community. Why do I mention this? Because I thought long and very hard about the risk I was taking to my health to come to this conference. 1,000 people, most of them not wearing masks, is a serious threat to my life right now. But, if I didn’t come, I would miss out on the exchange of ideas and the career-advancing learning that takes place at these conferences. As a community, do we really want to only include the healthy or the reckless? Simple mask wearing would prevent a lot of transmission of easily spread viruses like COVID-19, influenza, and colds, all of which are a risk for people with chronic illnesses and immune system issues. And the red wrist bands? I know it was well-intentioned, but a hug or a handshake isn’t really the threat (also, we should always get permission before touching another person, regardless; it’s just common courtesy). The threat is a room filled with virus particles floating in the air. That’s the current science. The focus for protecting people should have been on masking to reduce airborne virus particles and ventilation to reduce the concentrations. I wore a KN95 or N95 mask indoors for the entire conference. I ate outside where the air was moving. But, there are times you have to remove a mask and masks are not 100% effective. I love this community, and the masking situation made me feel less welcome. That makes me sad. Who couldn’t come at all? Why should we choose between our careers and our lives? Perhaps you think I’m being dramatic, but I know what it’s like to struggle to breathe and I’d very much like to prevent other people from experiencing this. Masking and other science-based virus mitigation measures allow a community to be more open; they allow access for more people; this is the heart of who we are as a community, and in this particular respect, we failed. We can do better.

What did we get right in this respect? Having an online option was great! It wasn’t the same experience for sure for those attending entirely online, but I think the conference team found a skilled video provider who did an absolutely amazing job with the digital aspects of the conference. Also, having opportunities to be outside for social events made a big difference. If we had to stay inside for these, I probably wouldn’t have been able to attend.

The Sights & The City

My first impression of the heart of Florence was overwhelming. It’s crowded, both in terms of people and architecture. I feel like I got a better impression of it in the morning and late at night when there were fewer people around. The modern city’s messaging seems to focus on the art history aspect of the city, and especially the art of times long gone by. The science history of the Renaissance is much harder to find here, which is a shame because art and science were inseparable then and should be now.

Lily and lotus water feature at the Boboli Gardens

I found some science opportunities that I really enjoyed. Before the conference, I went to the Botanical Garden and the Museum of Geology and Paleontology. Both could use more visitors. They were a welcome respite from the crowds in the rest of the city. I missed out on the Museum of Anthropology and Ethnology. I had the 3 museum combo ticket, but I just couldn’t fit it all in with the days I had. After the conference, I hit up the Boboli Gardens, the Galileo Museum, the Academy, Basilica di San Marco, and the Santa Croce Chapel, and walked outside the Orsanmichele Chapel, the Duomo, and the Uffizi. These were a good mix of art and science. You really can’t visit everything.

Tobias, hanging out with some Archangels and carrying a fish.

At the Academy Museum, I was pretty tickled to find a painting by Domenico di Michelino that included both the archangel Michael (Michele in Italian) and Tobias, who was carrying a fish. The painting prompted me to finally dig into the biblical story of Tobias and now I understand where the family confusion over our name being associated with a fish comes from. Also, now I know that my name is based on two people who vanquished demons and the mother of Jesus, so… you know… no pressure to do big things or anything!

Tobias’ fish.

Challah Conchas

I’ll spare everyone the traditional recipe blog post story that rambles along and often involves major TMI, and just say that a friend posted on social media that a bakery in LA makes conchas with Challah as the dough for the bun. That sounded AMAZING! But I’m not in LA. I immediately tried to find recipes, but struck out. So I made my own. It’s my family’s go-to Challah with a conchas topping that I adapted from another recipe. Is it worth it? It’s so very worth the time and effort.

Photos of the Process

Divide the dough.
Divide each piece of dough into two and make snakes.
Twist the snakes together.
Curl your twist up like a snail shell and tuck the end under. This looks so much more complicated than it is.
So shiny.
Shape the topping into discs.
Score the top and bake.
The final product! I didn’t cut all the way through the topping and the cracks are a little chaotic. But if they aren’t perfect, it’s easier to eat them!

Recipe

Challah Ingredients

  • 1 1/4 tsp. instant or rapid rise yeast
  • 1/2 cup warm water
  • 1/4 cup sugar
  • 1 1/4 tsp. salt (table salt)
  • 1/3 cup vegetable oil
  • 1 egg
  • About 2 cups of flour

Challah Instructions

  1. Mix the yeast, water, and sugar in a large bowl. Let it rest 5 minutes or so, until the yeast is active and bubbling.
  2. Mix in the salt, oil, and egg.
  3. Add a cup of flour and mix vigorously to make a batter.
  4. Continue mixing in flour until a workable dough forms.
  5. Turn the dough out onto a floured counter and knead until you have a smooth dough. Add more flour if it gets sticky.
  6. Put the dough into an oiled bowl to rise until doubled in bulk, about an hour and a half.
  7. Once it’s risen, turn it out onto the counter. It’s an oily dough so I usually don’t need flour to keep it from sticking at this point. Divide the dough into 16 pieces.
  8. Shape each piece. Balls work fine. I cut mine in two, formed snakes, twisted the snakes together and then coiled up the twists, tucking in the ends. Or braid them. It really doesn’t matter. Just do the same thing to all of them so they cook at the same time.
  9. Place the shaped rolls onto a pan. I use a silpat liner, but parchment would also work to make sure they don’t stick.
  10. Let the rolls rise until they’re puffy and visibly larger.
  11. While you wait, make the topping and preheat the oven to 350°.

Topping Ingredients

  • 1/4 cup butter or shortening
  • 1/4 cup powdered sugar
  • 1/4 cup granulated sugar
  • 1 1/2 tsp. vanilla extract
  • 4 drops of red liquid food coloring (optional)
  • 1/2 cup flour

Topping Instructions

  1. Mix the butter and sugars together well.
  2. Add the vanilla and food coloring and mix again.
  3. Mix in the flour.
  4. Divide the dough into 16 even portions. I rolled it into a fat snake and cut it.
  5. Shape each portion into a disc about as big as the top of the challah rolls.

Assemble & Bake

  1. Top each roll with a pink topping disc.
  2. Score the topping with a sharp knife, cutting through the disc all the way. You don’t need to cut the bread roll.
  3. Bake the conchas in a 350° oven for 20 minutes, until the bottoms just start to turn golden. Check them often. Rolls burn fast.
  4. Cool on a wire rack.

Vegetarian Dolmas


I moved into my house just about a year ago and since that time I’ve been looking forward to making dolmas.  You see, my backyard came with this aggressive grape vine that last year didn’t really make any decent grapes (maybe it would have if the squirrels didn’t get them first, or if the previous owners had watered ANYTHING in the yard).  At any rate, the grape leaves are young and tender and it’s time to let this vine prove it’s got some value.  I posted on social media about my dolmas and got so many questions, I decided to post my recipe… thank goodness I took notes while I was making this!  Note that this recipe is not traditional in any way nor is it trying to be.  It’s just good.

Vegetarian Dolmas

This recipe is made in sections.

Rice

2 cups brown rice

3 cups water

Put the rice and water into a rice cooker and let it do its thing.  You could also cook it on the stove top, but you’ll need to find those instructions elsewhere.

TVP

TVP is “texturized vegetable protein”.  In my house, we have it on hand to add protein to vegetarian dishes.  We use it instead of ground meat.  You can find it in the grocery store.  We like Bob’s Red Mill’s TVP.

1 cup TVP

1 cup vegetable broth

1 Tbs tomato paste

2 Tbs soy sauce (sounds odd, makes the TVP taste amazing)

2 tsp garlic powder

1 tsp onion powder

Place the dry TVP into a medium-sized bowl (it’s going to expand).  Mix the rest of the ingredients in a bowl or other pourable vessel (I like my 2-cup Pyrex measuring cup).  Pour the broth mixture over the TVP and let it sit until the liquid is absorbed.  Give it a stir to fluff it up.

Finish the Filling

1 Tbs tomato paste

2 tsp garlic powder

1 tsp onion powder

1 tsp chopped fresh oregano

1/2 tsp chopped fresh thyme

1 Tbs chopped fresh basil

4 oz oil-packed sun dried tomatoes with their oil

1 can of artichokes, chopped

1/2 cup of kalamata olives – measured and then chopped

3/4 cup (2 handfuls) of sweetened dried cranberries (aka “Craisins”… I use Mariani brand) – you could also use raisins or dried currants

8 oz feta

black pepper to taste, freshly ground

Mix all of the above ingredients, the rice, and TVP in the biggest bowl you’ve got.  Adjust the amounts to your taste.  At this point, it should smell good and taste good.  If not, make adjustments.  Set this aside and prepare your grape leaves.

Grape Leaves


I used fresh grape leaves from my vine.  Only use leaves that you know have not been sprayed with pesticides.  Pick them when they’re young-ish.  Too big and they’ll be tough.

30 grape leaves

2 Tbs salt

Lots of water

Remove the stem and the thickest veins at the base of the leaves.  Be careful not to make a giant hole in the middle of the leaf.


Bring a pasta pot full of water and the salt to a boil (I made it like I was making pasta).  Working in batches of 5, boil the leaves for about 2 minutes each.  They will float, so use tongs to flip the pile of leaves over a few times to make sure they all cook. Drain them in a colander to remove the excess water.

Assembly & Baking

Olive oil

2-3 cups of broth

Prepare a 9×13 inch glass pan by generously sprinkling the bottom of the pan with olive oil. (Confession: mine didn’t all fit in the 9×13 so I also had a 9×9 for the overflow… no big deal.)

Place one grape leaf flat on the cutting board with the base of the leaf (where the stem was) closest to your body.

Place about 2 tablespoons of filling in the middle of the leaf.


Fold the sides of the leaf over the filling.

Fold the bottom of the leaf over the filling.

Roll the pile of filling and leaf sides upwards to the top tip of the leaf.  Roll a firm but not tight packet.  It will expand a little while baking, so don’t worry if it isn’t super tight.

Place the rolled dolma in the pan seam-side down.

Repeat 29 more times, arranging the dolmas in the pan so they fit tightly together.

Once all the dolmas are in the pan, sprinkle more olive oil on top of the dolmas and fill the pan with broth up to about half way up the dolmas.  Traditional recipes cover them completely, but our rice is already cooked.

Bake at 350 degrees for about 40 minutes until the dolmas have absorbed most of the broth.


Loading Open Addresses Data into PostGIS

I was working on a geocoding project using my TIGER Geocoder.  The results were terrible for addresses related to university campuses, which is not unreasonable given that buildings can be in the middle of big open spaces with no access to actual roads and many universities (like my alma mater & employer, UC Davis) have made-up addresses that don’t correspond with any real-life location.  What can you do?  If you can’t send your data out to an online service for privacy reasons or you need to process a lot of data on a limited budget, you can work with Open Addresses data.  This post will cover the first step in that process, which is loading data from Open Addresses into a PostGIS table.

Open Addresses is an open dataset that provides point locations for a large number of addresses around the world.  The data is downloadable in a folder structure that makes finding data as a human very easy… not so much for my computer though.  If you just need one table, you don’t need to worry.  Just import your table like you would any other.  If you need more than a couple, you’re going to want to automate the import.  This is one way to do that.  I can imagine there are more efficient ways, but this is how I did it.

If you look at the Open Address data, you’ll notice that some tables have city and state names and others do not.  This means simply appending all the tables together isn’t enough.  If in the future they fill in the tables completely, none of this code will be necessary.  Let’s hope!

Download Data

First, you’ll need to download the data.  I downloaded the five US files (don’t forget that one in the Share-Alike column!) from the Open Addresses data download page.  I don’t recommend doing all of the US, speaking from experience.  It’s a huge dataset so unless you need it all, pick just the states or regions you need.  Whatever you download, put all the zips into one folder and unzip them there.

My folder structure looks like this:

  • C:
    • GIS_Data
      • OpenAddresses
        • openaddr-collected-us_midwest
          • summary
          • us
        • openaddr-collected-us_northeast
          • summary
          • us
        • openaddr-collected-us_south
          • summary
          • us
        • openaddr-collected-us_west
          • summary
          • us
        • openaddr-collected-us_west-sa
          • summary
          • us

Inside the Summary and US folders are more subfolders with states and then .csv files inside those.  We don’t need to move anything lower than the region folders.  The code will find all the files we need as long as the region folders are in the same folder.

Importing the Data into PostGIS

I’m assuming you’ve already set up a PostgreSQL/PostGIS database.

I wrote my code for the table import process in R (connecting to my database with the RPostgreSQL package) because it’s a language I am very familiar with.  You could write this as a .bat file or in Python too.  Maybe my code comments would be a decent road map for doing that if you’re happier programming in those languages.

I wrote this code specifically for the US data.  If you’re working with other countries, you may need to adjust some of the code, particularly where it parses out the city and state names from the file paths.

# Load libraries
library(RPostgreSQL)

# Function to capitalize words
# --- We'll need this when we parse the city names from the file names
.simpleCap <- function(x) {
    s <- strsplit(x, " ")[[1]]
    paste(toupper(substring(s, 1, 1)), substring(s, 2),
    sep = "", collapse = " ")
}

# Connect to the database
# --- creates a connection to the postgres database
# --- note that "con" will be used later in each connection to the database
# --- adjust the parameters like dbname, port, user, and password for your specific database
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname = "DATABASE NAME",
host = "localhost", port = 5432,
user = "USER NAME", password = 'YOUR PASSWORD')

# Directory where you stored your Open Address downloads
dr<-"C:\\GIS_Data\\OpenAddresses"

# Look through your directory for the .csv files and put them into a list
csvs<-list.files(dr, recursive=TRUE, pattern = "csv", full.names = TRUE)

# Remove files from the list that have "summary" in the file name
csvs<-csvs[-grep("summary", csvs)]

#Delete any existing tables with the same name from your database
# --- this is in case you end up running the code a couple of times
# --- while you make changes
dbGetQuery(con, 'DROP TABLE IF EXISTS oa;')

#Create the blank table to put the data from the .csv files into
# --- The columns need to match what is in your .csv files 
dbGetQuery(con, 'create table oa (
    lon numeric,
    lat numeric,
    house_number text,
    street text,
    unit text,
    city text,
    district text,
    region text,
    postcode text,
    id text,
    hash text);')

# Copy data into database 
# --- this walks through the list of tables, importing each
# --- and parsing out the city & state name from the file name
for (i in 1:length(csvs)){
    querytext<-paste("COPY oa FROM '", csvs[i], "' DELIMITER ',' CSV HEADER;", sep = "")
    dbGetQuery(con, querytext)

    #what state is it?
    state<-toupper(strsplit(csvs[i], "/")[[1]][4])
    stateupdate<-paste("UPDATE oa SET region = '", state, "' WHERE region is NULL;", sep="")
    dbGetQuery(con, stateupdate)

    #what city is it?
    city<-gsub(".csv", "", strsplit(csvs[i], "/")[[1]][5])
    city<-gsub("city_of_", "", city)
    city<-gsub("_", " ", city)
    city<-.simpleCap(city)
    cityupdate<-paste("UPDATE oa SET city = '", city, "' WHERE city is NULL;", sep="")
    dbGetQuery(con, cityupdate)
}

# Remove "Statewide" placeholder values from the city column
dbGetQuery(con, "UPDATE oa SET city = NULL WHERE city like 'Statewide';")

The results should be one table in your database called oa that contains all of the Open Address points in the files you downloaded. Now you can use that table to match your list of addresses. I ended up comparing the addresses I needed to geocode with the Open Addresses data using the QGram metric, which tells you how similar two strings are. Hopefully soon I’ll have a post with the code I used to find the most similar Open Addresses match to the list of addresses I needed to geocode.
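
Until that post exists, here is a rough sketch of the idea in R using the stringdist package. The city filter, target address, and q value below are placeholders for illustration, not the exact code I used.

# A sketch of matching one address against the oa table with the QGram metric
# --- the city, target address, and q value are made up for illustration
library(RPostgreSQL)
library(stringdist)

drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname = "DATABASE NAME",
host = "localhost", port = 5432,
user = "USER NAME", password = 'YOUR PASSWORD')

# Pull the Open Addresses points for one city to keep the comparison small
oa <- dbGetQuery(con, "SELECT house_number, street, city, lon, lat FROM oa WHERE city = 'Davis';")
oa$full_address <- paste(oa$house_number, oa$street)

# One address we want to geocode
target <- "123 Main St"

# QGram distance between the target and every candidate; the smallest distance wins
d <- stringdist(tolower(target), tolower(oa$full_address), method = "qgram", q = 2)
oa[which.min(d), c("full_address", "lon", "lat")]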


Framed Carpet Cat Scratchers

I keep seeing pins on Pinterest of carpet in a frame on the wall for cats to use as a scratcher, yet none of the linked sites have instructions.  I loved the idea – it’s simple, takes up almost no space, and provides more vertical scratching opportunities – but how do you do it?  I showed the posts to my partner to get his thoughts on how we could make it happen.  He found a Martha Stewart video involving double-sided tape to hold the carpet inside a frame, which set us off on our journey.  It’s not as complicated as I thought it was going to be, and you only need a few items and no tools at all.  Here’s how we did it.

You will need:

  1. carpet remnants, squares, or samples
  2. a frame
  3. Command picture frame adhesive
  4. A cat to help, and judge.


How you do it:

  1. Pick your carpet.  We got 8×8″ carpet samples from Home Depot.  You can buy them online for $1 each (at the time of writing).  You could use any carpet you think your cat will like, especially if you had leftovers from some other project, but the super thick, long pile carpets are hard to work with.  Go for short or medium unless your kitty is really into deep pile.
  2. Pick your frames: If you get carpet samples, get a frame that fits your sample.  The 8×8″ frames fit the Home Depot samples perfectly.  Or pick a frame you like and use the mat or the frame back as a template to cut your carpet.
  3. Open up your frame.  Take out the glass and any mat that comes with it; save them for another project.  You’ll probably also want to remove the foot/prop/kickstand that many frames have so it will sit flat on the wall.
  4. Cut your carpet to fit into the frame if needed.
  5. Put the carpet into the frame, fuzzy side sticking out.
  6. Put the back into place and close up the pins that keep the back on.  If you’re using a big frame and carpet, you might want to glue or tape the carpet to the frame back and tape around the opening on the back.  For the smaller squares, it’s not necessary.
  7. Finally, we need to put it on the wall.  You don’t want to use the hanger the frame came with.  If you hang it on a nail, the cat will probably knock the frame off the wall or it will swing back and forth scraping the wall.  We used Command’s velcro-style picture hangers to attach it flat to the wall.  Just follow the instructions on the package.  An added bonus is that these hangers can take some weight so hopefully my 12 pound big boy won’t pull them off the wall.


Now that they are done, do the kitties love them?  No.  Of course not.  They are cats and therefore contrary.  I think the scratchers might be a little too high on the wall, so we’ll try moving them lower to see if that helps.  Catnip gets some interest.


Batch Geocoding with PostGIS

So you’ve got a table of addresses that need to be matched up with their latitude/longitude coordinates… i.e. you need to geocode some addresses.  Most of the time, an online tool is a better choice than running your own geocoder if all you need is a handful of addresses geolocated.  In the case that you can’t share your addresses or you have too many to send through an online tool, batch geocoding on your own computer is a good option.  In this post I’ll share how I’ve been doing this.

First, you’ll need a geocoder.  If you’ve got a PostGIS Geocoder up and running, congratulations!  That’s no easy feat.  If you don’t, take a detour over to my post about how to make a PostGIS geocoder in less than 5 days.

Once you’ve made your geocoder, you’re probably fairly familiar with how to run queries in pgAdmin 4, but just to be on the safe side, I’m gonna give you all the details I think you’ll need.

Start pgAdmin 4.  (If you get an error, you may need to start the postgresql service – see link above for details).  Navigate in the browser panel on the left side of the window to your geocoder database and click on it to highlight it.  Open the Query Tool: go to the Tools menu at the top, then select Query Tool.

This is the code I’ve been running (with copious code comments to help you… and me later… understand what I did).  You’ll need to make adjustments for your specific table of addresses, but after that, you can paste the whole thing into the Query Tool and hit the run button (why is it a lightning bolt icon?).

--- remove any existing attempts
DROP TABLE IF EXISTS address_table;

--- Create an empty table
--- Make the columns match those of the CSV that has the data 
--- you want to geocode

CREATE TABLE address_table(
per_id_fk varchar(255), 
acity varchar(255), 
astate varchar(255),
azip varchar(255),
acounty varchar(255),
adate varchar(255),
street varchar(255),
input_address varchar(255));

--- Look at your new empty table:   
--- (I like to have visual confirmation that things worked the first time 
--- I run them... you don't need to do this when you run the whole script 
--- to batch process the address)
--- Uncomment the SELECT query below if you're doing this one piece at a time:
--- SELECT * FROM address_table;

--- import data from a CSV - the columns in the CSV should be in the list above
COPY address_table from 'C:\gisdata\TableOfAddresses.csv' WITH DELIMITER ',' NULL 'NA' CSV HEADER;

--- Look at the data you added:
--- SELECT * FROM address_table;

--- Add the columns we'll need for geocoding
ALTER TABLE address_table 
ADD lon numeric,  
ADD lat numeric,  
ADD geomout geometry, -- a point geometry in NAD 83 long lat. 
ADD new_address varchar(255),  
ADD rating integer;

--- Look at your new empty columns:
--- SELECT * FROM address_table;

--- This function loops through the table, one row at a time:
CREATE OR REPLACE FUNCTION mygeocoder()
RETURNS void
AS $$
BEGIN 
   FOR i IN 1..(SELECT(count(per_id_fk)) from address_table) LOOP 
      UPDATE address_table 
         SET  (rating, new_address, lon, lat, geomout) = (COALESCE(g.rating,-1),pprint_addy(g.addy), ST_X(g.geomout)::numeric(8,5), ST_Y(g.geomout)::numeric(8,5), (g.geomout) ) 
         FROM (SELECT per_id_fk, input_address FROM address_table WHERE rating IS NULL ORDER BY per_id_fk LIMIT 1) 
         As a 
      LEFT JOIN LATERAL geocode(a.input_address,1) As g ON true 
      WHERE a.per_id_fk = address_table.per_id_fk; 
   END LOOP; 
   RETURN;
END;
$$ LANGUAGE plpgsql;

--- Run your geocoder to geocode your address table:
SELECT mygeocoder();

--- Look at your table with geocoded addresses:
SELECT * FROM address_table;

As you can see in the code comments, this function geocodes the table one row at a time.  According to posts I’ve read, you don’t want to feed your whole table into the geocoder at once.  It makes the computer hold a lot of data in memory and will slow things down.  You could do a few rows at a time, but the loop code is easier when I don’t have to figure out how many times to loop (… how many rows do I have… divide by 3… how do I know when to stop…).

This is going to take a long time if you have many addresses.  For testing purposes, I highly recommend making a subset of your data – maybe 100 or so addresses – to work on while you adapt your code.  You don’t want to wait hours just to find out something was a little off.
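
If your address list starts out as a CSV, here’s one quick way to make that test subset. This is just a sketch in R; the input file name matches the COPY statement above and the output name is made up, so adjust both, and any tool that can grab the first 100 rows will do the same job.

# Make a small test file from the full address list
addresses <- read.csv("C:/gisdata/TableOfAddresses.csv", stringsAsFactors = FALSE)
write.csv(head(addresses, 100), "C:/gisdata/TableOfAddresses_test.csv", row.names = FALSE, na = "NA")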

In formulating this post and code, I found these resources to be very helpful and have adapted most of my code from them.


Charley Harper Quilt Finished

For the last year and a half, I’ve been working on a needle-turn applique quilt with blocks based on the work of Charley Harper.  It was completely hand-pieced and hand-quilted.  I also made my own patterns for each block.  Note that Charley Harper Studio sells quilt and needlepoint patterns on their site, but I did not use these.  I started planning and designing in June 2016 and finished it in the middle of November 2017.  I thought it would take a year to finish, but I’d never done anything like this before so I’m not surprised that my estimate was off.  What matters though is that I loved making this and that I finished.


My finished quilt

Since this was a major undertaking, I feel compelled to discuss the process I went through and offer some tips if you decide to try this too.

Designing the Blocks

I discussed my process for creating the patterns in previous blog posts (over-all design and individual blocks), so check those out for details.  Mostly I would encourage you to keep your designs simple.  Small pieces of fabric are very difficult to work with and at some point, it makes sense to complete some details with embroidery rather than applique.

Choosing fabric was mostly straight-forward since the colors tend to be solid in the reference work.  Sometimes though, a piece had a textured or patterned background, like the calico cat on a leaf background.  This was not easy to replicate, so I chose a fabric that was a mottled green and quilted leaf shapes onto the background to have a similar feel to the original.


One page of one of my patterns. Writing the color, number of times to cut out a shape, and the approximate location or item is helpful (nose, eye, leg, etc.).


Embroidery is best for small fields or lines. I ditched the hoop after this block and just held the fabric in my hands. Do what makes you happy.

Cutting & Piecing

A lot of my learning from high school geometry came in handy here.  Finding the center of a block is easily achieved by folding the fabric in half.  A square (a ruler with a right angle) is useful for making sure corners are square and measurements are made at 90° angles.

Do yourself a huge favor and cut your background fabric larger than you need it.  There will be shrinkage as you applique and you’ll be much happier to have extra than not enough.


Referring to the original image helps with placement and over-all feel.

In general, be patient.  Take the time to plan the order you need to add each piece.  Sometimes I needed to applique smaller pieces to larger ones before they got sewn down to the background fabric.  Did I have to rip things out occasionally?  Oh yes!  I even mangled up a few pieces of fabric so bad I had to cut new ones.  Not a big deal.  Be patient with yourself.


Laying out pieces can help you know what to sew down next. It’s ok if you need to undo something though.


Don’t be afraid to cut apart your pattern to help with alignment of pieces or embroidery lines.

Piecing the Blocks Together

After all that applique, your blocks will rarely still be square.  Yikes!  What matters is that you get a straight seam when you put them together.  Don’t worry if you’re having to get creative to make it work.  In the end, no one’s gonna know.

Quilting

By the time I got to quilting, I realized that I should follow the warnings online to make sure your batting and backing are bigger than the top to account for shrinkage as you quilt.  Such a good idea!

Pinning also is helpful when you first start to keep everything together.  However!  Before you start to quilt a section, take out the pins and make sure everything is lying flat before you re-pin.  Also, start your quilting in the center and work out to prevent uneven backing fabric.

I’ll tell you a secret: my backing isn’t quite flat.  I had some moments of serious disappointment over this, but I realized that NO ONE CARES what the back of the quilt looks like.  It’s also hand-made so there will be some hiccups.  It doesn’t ruin anything.  Obviously you don’t want huge puckers, but a couple of wrinkles or loose spots is not the end of the world.  Mistakes on the back are less important than issues on the front.


Designing the background quilting was a big challenge.  I had images of the original art to inform my block decisions, but the needs of the quilting sometimes didn’t fit.  For example, I needed to have fairly evenly spaced quilting and not leave large areas un-quilted.  But quilting added new lines that weren’t in the originals.  In the burrowing owl block, I decided to add in parallel lines that corresponded with the lines on the owls’ heads.  I’ve seen similar striping in other pieces from Charley Harper.  For the dolphin block, I added more swooshes to fill in un-quilted areas.  My goal was to provide stability to the block in a way that was visually similar to methods employed by the original artist.

Binding

I broke some more rules with the binding.  After I quilted around the major figures in my blocks and the framing pieces and nothing was shifting around any more, I did my binding, then I finished off the background quilting.  The batting and fabric on the edges were starting to stretch out and get in the way, and it just made sense to handle the batting at that time.

Overall

Just have fun.  Try not to sweat the small stuff.  If there’s a mistake and you’re prone to worry about these things, ask yourself, “Will my friends and family notice this?  If they did would they care?” or “In 5 years am I ever going to notice this when I use my quilt?”  If not, let it go.  The things that matter are issues that will affect the over-all life of the project like skimpy seams, frayed edges, or poor quality stitching.  The rest is fine.

Break the rules.  Ignore the videos and blog posts that tell you you’re doing it wrong.  If the end result is what you want, it doesn’t matter how you got there.  I never was able to turn my applique edges under with the needle.  When I tried, I got a frayed mess.  I used my fingers with much better success.  Can you tell the difference?  Nope.  Use the videos and blog posts to get an idea of how to do things, but don’t worry if your version is a little different.

Share your progress.  I planned to post about each block here on my blog, but ended up sharing mostly on Facebook in a public album.  My friends and family enjoyed following the progress.  Yours probably will too.

Above all, just try and do what makes YOU happy.


Me and my quality control agent, Cadmius, who inspected every stitch that went into this project. (There is a significant amount of cat hair inside this quilt.) His favorite blocks are the dolphin, the owls, and the possums. I can tell because he’ll only sit on those three.


Change SP Object Polygon Rendering Order in R

The Problem

I have a geoJSON file that was made by combining many (as in ~250) geoJSON files each containing a single polygon in R.  The polygons are of varying size and can overlap in space.  When I open the file in a GIS, often the smaller polygons are hidden under the larger ones, making displaying the data challenging, if not impossible.

Map showing few polygons when there should be many

I know there are more polygons than are visible in this area, so they must be hiding under the larger polygons.

Map showing larger polygons obscuring smaller polygons

With transparency set to 60% for each polygon (see the Draw Effects dialog in QGIS for the fill symbol layer), you can see that smaller polygons are hiding under larger ones.

The Goal

I would prefer that the polygons stack up so that the largest is on the bottom and the smallest is on the top.  This requires that I change the rendering order based on the area of each polygon.

The Quick Fix

QGIS has an option to control the rendering order.  Open the Layer Properties; go to the Style tab; check the box for “Control feature rendering order”; click the button on the right with the A Z and an arrow to enter the expression you want (I would order by area for example).

Why isn’t this the right solution for me?  I’m sharing a user-contributed dataset.  One of the goals is that anyone can use it.  When polygons are obscured, it makes the dataset just a little harder to use and understand, which means people won’t like using it.  Another goal is that anyone with a reasonable understanding of GIS can contribute.  If I have to write a bunch of instructions on how to visualize the data before they can add to the dataset, people are less likely to contribute.

Map showing all the polygons expected.

Now I can see all of the polygons because the larger ones are on the bottom and the smaller ones are on top.

My Solution

Hat tip to Alex Mandel and Ani Ghosh for spending a couple of hours with me hammering out a solution.  We used R because I already had a script that takes all of the polygons made by contributors and combines them into one file.  It made sense in this case to add a few lines to this post-processing code to re-order the polygons before sending the results to the GitHub repository.

What you need to know about rendering order & SP Objects

The order in which items in an SP object are rendered is controlled by the object’s ID value.  The ID value is hidden in the ID slot nested inside the polygons slot.  If you change these values, you change the order items are rendered.  ID = 1 goes first, ID = 2 goes on top of 1, 3 goes on top of 2, and so on.  So for my case, assigning the IDs based on the area will get me the solution.
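
If it helps to see that structure on its own, here’s a tiny sketch with two hand-made squares (not the AVA dataset) showing where the IDs live and how they can be reassigned. It uses the same slot-assignment pattern as the loop in the full script below.

#a minimal, stand-alone example: two hand-made squares
library(sp)

big <- Polygons(list(Polygon(cbind(c(0,4,4,0), c(0,0,4,4)))), ID = "big")
small <- Polygons(list(Polygon(cbind(c(1,2,2,1), c(1,1,2,2)))), ID = "small")
polys <- SpatialPolygons(list(big, small))

#the ID values are tucked away inside the polygons slot
sapply(slot(polys, "polygons"), function(i){slot(i, "ID")})

#reassigning them is the same pattern the loop in the full script uses
polys@polygons[[1]]@ID <- "2"
polys@polygons[[2]]@ID <- "1"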

How

This Stack Exchange Post on re-ordering spatial data was a big help in the process.  Note that every SP object should have the slots and general structure I used here.  There’s nothing special about this dataset.  If you’d like the dataset and another copy of the R code, however, it is in the UC Davis Library’s AVA repository.

#load the libraries you'll need
library(raster)
library(geojsonio)
library(rgdal)

### FIRST: Some context about how I made my dataset in the first place

# search in my working directory for the files inside the folders 
# called "avas" and "tbd"
avas <- list.files(path="./avas", pattern = "json$", full.names = TRUE)
tbd <- list.files(path="./tbd", pattern = "json$", full.names = TRUE)

#put the two lists into one list
gj <- c(avas, tbd)

#read all the geojson files & create an SP object
vects <- lapply(gj, geojson_read, what="sp")

#combine all the vectors together. bind() is from the raster package.
#probably could just rbind geojson lists too, but that's harder to plot
all <- do.call(bind, vects)

#Change any "N/A" data to nulls
all@data[all@data=="N/A"]<- NA


### SECOND: How I did the sorting

#Calculate area of polygons - needed for sorting purposes
# the function returns the value in the area slot of each row
all@data$area<-sapply(slot(all, "polygons"), function(i){slot(i, "area")})

#add the row names in a column - needed for sorting purposes
all$rows<-row.names(all)

#Order by area - row names & area are needed here
# decreasing = TRUE means we list the large polygons first
all<-all[match(all[order(all$area, decreasing = TRUE),]$rows, row.names(all@data)),]

#add new ID column - essentially you are numbering the rows 
# from 1 to the number of rows you have but in the order of 
# largest to smallest area
all$newid<-1:nrow(all)

#assign the new id to the ID field of each polygon
for (i in 1:nrow(all@data)){
 all@polygons[[i]]@ID<-as.character(all$newid[i])}

#drop the 3 columns we added for sorting purposes (optional)
all@data<-all@data[,1:(ncol(all@data)-3)]

#write the data to a file in my working directory
geojson_write(all, file="avas.geojson", overwrite=TRUE, convert_wgs84 = TRUE)
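
One optional last step, offered as my own addition rather than part of the original workflow: read the file back in and peek at the areas to confirm the largest polygons now come first.

#optional sanity check: read the file back and look at the stored order
check <- geojson_read("avas.geojson", what="sp")
areas <- sapply(slot(check, "polygons"), function(i){slot(i, "area")})
head(areas) #should start with the largest values and generally decrease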

How to make a PostGIS TIGER Geocoder in Less than 5 Days


It doesn’t look like much, but this image represents success!

I have been through 5 days of frustration wading through tutorials for how to make a Tiger geocoder in PostGIS.  My complaints, in a nutshell, are that every tutorial is either non-linear or assumes you already know what you’re doing… or both.  If I knew what I was doing, I wouldn’t need a tutorial.  And your notes on how you made it work are lovely, if only I could figure out what part of the process they pertain to.  Enough griping.  Here’s how I finally got this thing to work.

Note: I’m on Windows 10, PostgreSQL 10, PostGIS 2.4.0 bundle.  This tutorial draws from many existing tutorials including postgis.net and this one with additional help from Alex Mandel.

What is a Tiger Geocoder?

Let’s start by breaking this down.  A geocoder is a tool composed of a set of reference data that you can use to estimate a geographic coordinate for any given address.  Tiger, in this case, does not refer to a large endangered cat, but rather the US Census’ TIGER spatial files.  This is the reference data we will use.  The process we’re about to embark on should make assembling this data easy by using tools that will help us download the data and put it into a PostGIS database.

Install Software & Prep Folders

You will need to install:

  1. postgreSQL 10 – the database program that’s going to make all of this work
    1. The installer should walk you through installation options.  Write down the port (keeping the default is fine) and password you choose.  Pick a password you like to type because it comes up a lot.
  2. PostGIS 2.4.x bundle – PostGIS gives postgreSQL spatial capabilities (much like a super power); install the bundle option because it comes with things you’ll need, saving a few steps
    1. Make sure you’ve installed postgreSQL first
    2. The installer should walk you through the installation options.
  3. wget 1.19.1 (or a relatively recent version) – this is a tool that the geocoder scripts will need to download files from the census
    1. Save the .zip option to your computer.
    2. Unzip it in your C:/ drive so it’s easy to access later.  Remember where you put it because you’ll need the path later.
  4. 7zip – this is an open source file unzipping tool that the scripts will use to unzip the files they download from the Census website.

You will need to create a folder on your computer to be your “staging folder”.  This is where the scripts we’ll run later will save the files they need to download.

  1. Make a folder called “gisdata” in a place where you have write permissions and can find it later.  For example: C:\gisdata
  2. Inside your gisdata folder, make another folder called “temp”. For example: C:\gisdata\temp

Make a Database

We need to make an empty database to put our files into.

Open PGAdmin 4.  This is the graphical user interface for postgreSQL that should have installed with the postgreSQL files.

If PGAdmin 4 doesn’t start or gives an error, you may need to start the service that makes it run.  In the Windows search bar, search “services”.  In the Services window, right click postgresql-x64-10 and choose “start”.

In the browser panel on the left side of the pgAdmin 4 window, click the + to expand Servers and PostgreSQL 10.  When it asks for your password, give it the password you wrote down during the installation process.

Right click on PostgreSQL 10 -> Create -> Database

In the Database name field, give it a name with lowercase letters.  I named mine geocoder.  Leave the owner drop-down on postgres.  Click Save.

Expand the Databases entry in the Browser panel to see your new database in the list.

Enable Extensions

Right now, our database is just a regular database without spatial functions.  We need to enable the PostGIS extension and the extensions that help us geocode.

Open the Query Tool: in the Tools menu at the top of the screen, select “Query Tool”.  The top section of the query tool is where you type SQL commands.  Underneath this is where the results of your queries are displayed.

Enable the PostGIS extension to give the database spatial capabilities by copying and pasting this code into the query tool and clicking the run button (it looks like a lightning bolt… why?  I don’t know.):

CREATE EXTENSION postgis;

The next three lines enable extensions that help with the geocoding process (run them one at a time):

CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION postgis_tiger_geocoder;
CREATE EXTENSION address_standardizer;

Let’s make sure the extensions loaded correctly. This line of code should take the address (in between the parentheses and single quotes) and break it down into its components (create a “normalized address”):

SELECT na.address, na.streetname,na.streettypeabbrev, na.zip
	FROM normalize_address('1 Devonshire Place, Boston, MA 02109') AS na;

The output should look something like this:

 address | streetname | streettypeabbrev |  zip
---------+------------+------------------+-------
       1 | Devonshire | Pl               | 02109

Edit the loader tables

The scripts we’re going to run (in a few steps from now) will need some information to run correctly.  The place they look for this information is a set of two tables that have been added to your database by the extensions we just enabled.

In the browser panel (on the left side of the window), expand your geocoder database, then the schemas list, the tiger list, and finally the Tables list.  Make sure you’re looking at the tables list inside of the tiger list… each schema gets to have its own list of tables.

Right click on the loader_platform table, and select “View/Edit Data”, and then “All Rows”.  Now we can edit one of the entries to tell the scripts where to look for certain files and folders.

One row of the table has “windows” in the os (operating system) column.  In that row, double click the cell in the declare_sect column to open up a window that will let you edit the text.  For each line, you’ll need to make sure you type the path to the folder or file needed.  This is what mine looks like after editing:

set TMPDIR=C:\gisdata\temp
set UNZIPTOOL="C:\Program Files\7-Zip\7z.exe"
set WGETTOOL="C:\wget-1.19.1-win64\wget.exe"
set PGBIN=C:\Program Files\PostgreSQL\10\bin\
set PGPORT=5432
set PGHOST=localhost
set PGUSER=postgres
set PGPASSWORD=Password123
set PGDATABASE=geocoder
set PSQL="C:\Program Files\PostgreSQL\10\bin\psql.exe"
set SHP2PGSQL="C:\Program Files\PostgreSQL\10\bin\shp2pgsql.exe"
cd C:\gisdata

(No, that’s not my actual password.)  Note that some of the paths might be correct and others will not be, so check them all.  When you’re done, click Save and then close the table with the x button in the upper right corner (side note: I find the pgAdmin 4 interface to be rather unintuitive).  If it asks you to save your table, tell it yes.

Now open the loader_variables table.  Change the staging_folder to your chosen staging folder.  I hard-coded mine into the last table because the scripts didn’t seem to be recognizing entries in this table, but change it anyway just to be sure.  Again, save and exit.

Make & Run the Scripts

Add postgreSQL path to Windows

Before we can use postgreSQL in the command line (sorry to spring that on you… deep breaths… we’ll work through this together), we need to make sure Windows knows about postgreSQL.

For Windows 7, this Stack Overflow post should help.  For Windows 8 and 10, try this one.

Make the Nation Scripts

Next, we are going to run a line of code that will automatically generate a script.  We’ll run that script to automatically download data from the Census and place it into your database.  (Yes, we are going to run code that makes bigger code that downloads data.)

Open a command line terminal (on Windows, search “cmd” in the search box and select the Command Prompt).  Copy and paste this line of code into your terminal window, changing the path to your staging folder (but keep the file name at the end), then hit enter to run the code:

psql -U postgres -c "SELECT Loader_Generate_Nation_Script('windows')" -d geocoder -tA > C:/gisdata/nation_script_load.bat

(Quick note about what this code does… “-U postgres” tells the command that the user name for the database we want to work with is “postgres”. “-d geocoder” tells it that the name of the database to use is “geocoder”. “SELECT Loader_Generate_Nation_Script” is a function that postGIS can use to make the script we’re going to need. The ‘windows’ argument actually tells it to read the line in the loader_platform table we edited earlier.)

The terminal will probably return this line:

Password for user postgres:

Type in your password, although it won’t show anything on the screen as you type, and hit enter.  A new prompt (the path for the folder you’re in and a >) will appear when it’s finished.  Your staging folder should now have a file called nation_script_load.bat.  This new file is a batch file containing a series of commands for the computer to run that will download files from the Census’ website, unzip them, and add them to your database automatically.

Run the Nation Script

Running your new batch script in Windows is fairly straightforward (seriously, one step had to be, right?).

First, if you’re not already in the directory for your staging folder, change directories by running this command in the command line (How do you tell?  The command prompt shows the directory you’re in):

cd C:/gisdata

Now your command prompt should say C:/gisdata >

To run the script, in the command line, type

nation_script_load.bat

and hit enter to run your .bat file.  You will see a series of code run across your terminal window and it may open a 7zip dialog as it unzips files.  This could take a little while.

When it’s done, you should have a tiger_data schema with tables called county_all and state_all.  You can check to make sure the tables have data by running these lines in the pgAdmin 4 Query Tool:

Check the county_all Table:

SELECT count(*) FROM tiger_data.county_all;

Expected Results:

count
-------
 3233
(1 row)

Check the state_all table:

SELECT count(*) FROM tiger_data.state_all;

Expected Results:

 count
-------
 56
(1 row)

Make & Run the State Script

The process of making and running the state script are very similar to what we just did for the national script.  This script makes the tables for a state (or multiple states) that you specify.

In the command line, run this code to generate the scripts for California:

psql -U postgres -c "SELECT Loader_Generate_Script(ARRAY['CA'], 'windows')" -d geocoder -tA > C:/gisdata/ca_script_load.bat

Note that if you want additional states, add them to the bracketed list separated by commas.  For example, to download California and Nevada, you would run:

psql -U postgres -c "SELECT Loader_Generate_Script(ARRAY['CA', 'NV'], 'windows')" -d geocoder -tA > C:/gisdata/ca_nv_script_load.bat

Change directories back to your staging folder if needed by running this command in the command line:

cd C:/gisdata

Run the script by entering the name of the file into the command prompt:

ca_script_load.bat

The state script is going to take a while to load. It downloads many, many files. So go get a cup of coffee/tea/hot chocolate and a cookie while you wait.

Finally, we need to analyze all of the tiger data and update the stats for each table (reference). In the pgAdmin 4 Query Tool, run each of these lines separately:

SELECT install_missing_indexes();
vacuum analyze verbose tiger.addr;
vacuum analyze verbose tiger.edges;
vacuum analyze verbose tiger.faces;
vacuum analyze verbose tiger.featnames;
vacuum analyze verbose tiger.place;
vacuum analyze verbose tiger.cousub;
vacuum analyze verbose tiger.county;
vacuum analyze verbose tiger.state;
vacuum analyze verbose tiger.zip_lookup_base;
vacuum analyze verbose tiger.zip_state;
vacuum analyze verbose tiger.zip_state_loc;

Try it out

I know, it doesn’t seem like we’ve got a lot to show for all that work.  If you look in the tables for your Tiger schemas, you might notice that there’s more of them, but let’s get some concrete results.

Run this code in the pgAdmin 4 Query Tool to return a coordinate for an address in WKT (well known text).  Change the address to one in the state you’ve downloaded.  (More testing options here.)

SELECT g.rating, ST_AsText(ST_SnapToGrid(g.geomout,0.00001)) As wktlonlat,
(addy).address As stno, (addy).streetname As street,
(addy).streettypeabbrev As styp, (addy).location As city, (addy).stateabbrev As st,(addy).zip
FROM geocode('424 3rd St, Davis, CA 95616',1) As g;

It should return:

rating |          wktlonlat          | stno | street | styp | city    | st   | zip
0      | 'POINT(-121.74334 38.5441)' | 424  | '3rd'  | 'St' | 'Davis' | 'CA' | '95616'

(That’s the address for the Davis, CA downtown post office, in case you were wondering.)

You Did It!

You did it! I sincerely hope that at this point you have a geocoder that’s ready to use. Next up, batch geocoding!