Tag Archives: Open Source

Change SP Object Polygon Rendering Order in R

The Problem

I have a GeoJSON file that I made in R by combining many (~250) GeoJSON files, each containing a single polygon.  The polygons vary in size and can overlap in space.  When I open the combined file in a GIS, the smaller polygons are often hidden under the larger ones, making the data challenging, if not impossible, to display.

Map showing few polygons when there should be many

I know there are more polygons than are visible in this area, so they must be hiding under the larger polygons.

Map showing larger polygons obscuring smaller polygons

With transparency set to 60% for each polygon (see the Draw Effects dialog in QGIS for the fill symbol layer), you can see that smaller polygons are hiding under larger ones.

The Goal

I would prefer that the polygons stack up so that the largest is on the bottom and the smallest is on the top.  This requires that I change the rendering order based on the area of each polygon.

The Quick Fix

QGIS has an option to control the rendering order.  Open the Layer Properties; go to the Style tab; check the box for “Control feature rendering order”; click the button on the right (the one with an A, a Z, and an arrow) to enter the expression you want (I would order by area, for example).
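If you go this route, the expression can be as simple as QGIS's built-in $area variable, with the sort set to descending so the largest polygons draw first and end up on the bottom:

$area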

Why isn’t this the right solution for me?  I’m sharing a user-contributed dataset.  One of the goals is that anyone can use it.  When polygons are obscured, it makes the dataset just a little harder to use and understand, which means people won’t like using it.  Another goal is that anyone with a reasonable understanding of GIS can contribute.  If I have to write a bunch of instructions on how to visualize the data before they can add to the dataset, people are less likely to contribute.

Map showing all the polygons expected.

Now I can see all of the polygons because the larger ones are on the bottom and the smaller ones are on top.

My Solution

Hat tip to Alex Mandel and Ani Ghosh for spending a couple of hours with me hammering out a solution.  We used R because I already had a script that takes all of the polygons made by contributors and combines them into one file.  It made sense in this case to add a few lines to this post-processing code to re-order the polygons before sending the results to the GitHub repository.

What you need to know about rendering order & SP Objects

The order in which items in an SP object are rendered is controlled by the object's ID values.  Each ID is hidden in the ID slot nested inside the polygons slot.  If you change these values, you change the order items are rendered: ID = 1 draws first, ID = 2 draws on top of 1, 3 goes on top of 2, and so on.  So for my case, assigning the IDs based on the area will get me the solution.
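For example, here's one way to peek at the ID of the first polygon in an SP object (using the combined object all from the code below); the two lines are equivalent:

#Look at the ID slot of the first polygon
slot(slot(all, "polygons")[[1]], "ID")
all@polygons[[1]]@ID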

How

This Stack Exchange post on re-ordering spatial data was a big help in the process.  Note that every SP object should have the slots and general structure I used here.  There's nothing special about this dataset.  If you'd like the dataset and another copy of the R code, however, it is in the UC Davis Library's AVA repository.

#load the libraries you'll need
library(raster)
library(geojsonio)
library(rgdal)

### FIRST: Some context about how I made my dataset in the first place

# search in my working directory for the files inside the folders 
# called "avas" and "tbd"
avas <- list.files(path="./avas", pattern = "json$", full.names = TRUE)
tbd <- list.files(path="./tbd", pattern = "json$", full.names = TRUE)

#put the two lists into one list
gj <- c(avas, tbd)

#read all the geojson files & create an SP object
vects <- lapply(gj, geojson_read, what="sp")

#combine all the vectors together. bind() is from the raster package.
#probably could just rbind geojson lists too, but that's harder to plot
all <- do.call(bind, vects)

#Change any "N/A" data to nulls
all@data[all@data=="N/A"]<- NA


### SECOND: How I did the sorting

#Calculate area of polygons - needed for sorting purposes
# the function returns the value in the area slot of each row
all@data$area<-sapply(slot(all, "polygons"), function(i){slot(i, "area")})

#add the row names in a column - needed for sorting purposes
all$rows<-row.names(all)

#Order by area - row names & area are needed here
# decreasing = TRUE means we list the large polygons first
all <- all[match(all[order(all$area, decreasing = TRUE), ]$rows, row.names(all@data)), ]

#add new ID column - essentially you are numbering the rows 
# from 1 to the number of rows you have but in the order of 
# largest to smallest area
all$newid<-1:nrow(all)

#assign the new id to the ID field of each polygon
for (i in 1:nrow(all@data)){
  all@polygons[[i]]@ID <- as.character(all$newid[i])
}

#drop the 3 columns we added for sorting purposes (optional)
all@data<-all@data[,1:(ncol(all@data)-3)]

#write the data to a file in my working directory
geojson_write(all, file="avas.geojson", overwrite=TRUE, convert_wgs84 = TRUE)

How to make a PostGIS TIGER Geocoder in Less than 5 Days


It doesn’t look like much, but this image represents success!

I have been through 5 days of frustration wading through tutorials for how to make a Tiger geocoder in PostGIS.  My complaints, in a nutshell, are that every tutorial is either non-linear or assumes you already know what you’re doing… or both.  If I knew what I was doing, I wouldn’t need a tutorial.  And your notes on how you made it work are lovely, if only I could figure out what part of the process they pertain to.  Enough griping.  Here’s how I finally got this thing to work.

Note: I’m on Windows 10, with PostgreSQL 10 and the PostGIS 2.4.0 bundle.  This tutorial draws from many existing tutorials, including postgis.net and this one, with additional help from Alex Mandel.

What is a Tiger Geocoder?

Let’s start by breaking this down.  A geocoder is a tool built on a set of reference data that you can use to estimate a geographic coordinate for any given address.  TIGER, in this case, does not refer to a large endangered cat, but rather to the US Census Bureau’s TIGER (Topologically Integrated Geographic Encoding and Referencing) spatial files.  This is the reference data we will use.  The process we’re about to embark on should make assembling this data easy by using tools that download the data and put it into a PostGIS database.

Install Software & Prep Folders

You will need to install:

  1. PostgreSQL 10 – the database program that’s going to make all of this work
    1. The installer should walk you through the installation options.  Write down the port (keeping the default is fine) and the password you choose.  Pick a password you like to type because it comes up a lot.
  2. PostGIS 2.4.x bundle – PostGIS gives PostgreSQL spatial capabilities (much like a super power).  Install the bundle option because it comes with things you’ll need, saving a few steps.
    1. Make sure you’ve installed PostgreSQL first.
    2. The installer should walk you through the installation options.
  3. wget 1.19.1 (or a relatively recent version) – a tool the geocoder scripts will use to download files from the Census
    1. Save the .zip option to your computer.
    2. Unzip it on your C:\ drive so it’s easy to access later.  Remember where you put it because you’ll need the path later.
  4. 7zip – an open source unzipping tool that the scripts will use to unzip the files they download from the Census website.

You will need to create a folder on your computer to be your “staging folder”.  This is where the scripts we’ll run later will save the files they need to download.

  1. Make a folder called “gisdata” in a place where you have write permissions and can find it later.  For example: C:\gisdata
  2. Inside your gisdata folder, make another folder called “temp”. For example: C:\gisdata\temp
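If you’d rather make the folders from the command line, these two commands (assuming the C:\gisdata location used throughout this post) do the job:

mkdir C:\gisdata
mkdir C:\gisdata\temp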

Make a Database

We need to make an empty database to put our files into.

Open pgAdmin 4.  This is the graphical user interface for PostgreSQL that should have been installed with the PostgreSQL files.

If pgAdmin 4 doesn’t start or gives an error, you may need to start the service that makes it run.  In the Windows search bar, search for “services”.  In the Services window, right click postgresql-x64-10 and choose “Start”.
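Alternatively, assuming the same service name as above, you can start it from a Command Prompt run as administrator:

net start postgresql-x64-10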

In the browser panel on the left side of the pgAdmin 4 window, click the + to expand Servers and PostgreSQL 10.  When it asks for your password, give it the password you wrote down during the installation process.

Right click on PostgreSQL 10 -> Create -> Database

In the Database name field, give it a name with lowercase letters.  I named mine geocoder.  Leave the owner drop-down on postgres.  Click Save.

Expand the Databases entry in the Browser panel to see your new database in the list.

Enable Extensions

Right now, our database is just a regular database without spatial functions.  We need to enable the PostGIS extension and the extensions that help us geocode.

Open the Query Tool: in the Tools menu at the top of the screen, select “Query Tool”.  The top section of the query tool is where you type SQL commands.  Underneath this is where the results of your queries are displayed.

Enable the PostGIS extension to give the database spatial capabilities by copying and pasting this code into the query tool and clicking the run button (it looks like a lightning bolt… why?  I don’t know.):

CREATE EXTENSION postgis;

The next three lines enable extensions that help with the geocoding process (run them one at a time):

CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION postgis_tiger_geocoder;
CREATE EXTENSION address_standardizer;

Let’s make sure the extensions loaded correctly. This line of code should take the address (between the parentheses and single quotes) and break it down into its components (create a “normalized address”):

SELECT na.address, na.streetname,na.streettypeabbrev, na.zip
	FROM normalize_address('1 Devonshire Place, Boston, MA 02109') AS na;

The output should look something like this:

 address | streetname | streettypeabbrev |  zip
---------+------------+------------------+-------
       1 | Devonshire | Pl               | 02109

Edit the loader tables

The scripts we’re going to run (in a few steps from now) will need some information to run correctly.  The place they look for this information is a set of two tables that have been added to your database by the extensions we just enabled.

In the browser panel (on the left side of the window), expand your geocoder database, then the schemas list, the tiger list, and finally the Tables list.  Make sure you’re looking at the Tables list inside of the tiger list… each schema has its own list of tables.

Right click on the loader_platform table, and select “View/Edit Data”, and then “All Rows”.  Now we can edit one of the entries to tell the scripts where to look for certain files and folders.

One row of the table has “windows” in the os (operating system) column.  In that row, double click the cell in the declare_sect column to open up a window that will let you edit the text.  For each line, you’ll need to make sure you type the path to the folder or file needed.  This is what mine looks like after editing:

set TMPDIR=C:\gisdata\temp
set UNZIPTOOL="C:\Program Files\7-Zip\7z.exe"
set WGETTOOL="C:\wget-1.19.1-win64\wget.exe"
set PGBIN=C:\Program Files\PostgreSQL\10\bin\
set PGPORT=5432
set PGHOST=localhost
set PGUSER=postgres
set PGPASSWORD=Password123
set PGDATABASE=geocoder
set PSQL="C:\Program Files\PostgreSQL\10\bin\psql.exe"
set SHP2PGSQL="C:\Program Files\PostgreSQL\10\bin\shp2pgsql.exe"
cd C:\gisdata

(No, that’s not my actual password.)  Note that some of the paths might be correct and others will not be, so check them all.  When you’re done, click Save and then close the table with the x button in the upper right corner (side note: I find the pgAdmin 4 interface to be rather unintuitive).  If it asks you to save your table, tell it yes.

Now open the loader_variables table.  Change the staging folder entry to your chosen staging folder.  I hard-coded mine into the last table because the scripts didn’t seem to be recognizing entries in this table, but change it anyway just to be sure.  Again, save and exit.
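If you’d rather skip pgAdmin’s table editor, an UPDATE in the Query Tool should accomplish the same thing (in PostGIS 2.4 the column is named staging_fold, but double-check yours):

UPDATE tiger.loader_variables SET staging_fold = 'C:\gisdata';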

Make & Run the Scripts

Add postgreSQL path to Windows

Before we can use PostgreSQL in the command line (sorry to spring that on you… deep breaths… we’ll work through this together), we need to make sure Windows knows about PostgreSQL.

For Windows 7, this Stack Overflow post should help.  For Windows 8 and 10, try this one.
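If you just want to get moving, you can also add PostgreSQL’s bin folder to your PATH for the current Command Prompt session only (this assumes the default install location):

set PATH=%PATH%;C:\Program Files\PostgreSQL\10\bin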

Make the Nation Scripts

Next, we are going to run a line of code that will automatically generate a script.  We’ll run that script to automatically download data from the Census and place it into your database.  (Yes, we are going to run code that makes bigger code that downloads data.)

Open a command line terminal (on Windows, search “cmd” in the search box and select the Command Prompt).  Copy and paste this line of code into your terminal window, changing the path to your staging folder (but keep the file name at the end), then hit enter to run the code:

psql -U postgres -c "SELECT Loader_Generate_Nation_Script('windows')" -d geocoder -tA > C:/gisdata/nation_script_load.bat

(Quick note about what this code does… “-U postgres” tells the command that the user name for the database we want to work with is “postgres”. “-d geocoder” tells it that the name of the database to use is “geocoder”. “SELECT Loader_Generate_Nation_Script” is a function that postGIS can use to make the script we’re going to need. The ‘windows’ argument actually tells it to read the line in the loader_platform table we edited earlier.)

The terminal will probably return this line:

Password for user postgres:

Type in your password (it won’t show anything on the screen as you type) and hit enter.  A new prompt (the path of the folder you’re in and a >) will appear when it’s finished.  Your staging folder should now have a file called nation_script_load.bat.  This new file is a batch file containing a series of commands that will download files from the Census website, unzip them, and add them to your database automatically.

Run the Nation Script

Running your new batch script in Windows is fairly straightforward (seriously, one step had to be, right?).

First, if you’re not already in the directory for your staging folder, change directories by running this command in the command line (How do you tell?  The command prompt shows the directory you’re in):

cd C:/gisdata

Now your command prompt should say C:\gisdata>

To run the script, in the command line, type

nation_script_load.bat

and hit enter to run your .bat file.  You will see a series of commands scroll across your terminal window, and it may open a 7zip dialog as it unzips files.  This could take a little while.

When it’s done, you should have a tiger_data schema with tables called county_all and state_all.  You can check to make sure the tables have data by running these lines in the pgAdmin 4 Query Tool:

Check the county_all Table:

SELECT count(*) FROM tiger_data.county_all;

Expected Results:

count
-------
 3233
(1 row)

Check the state_all table:

SELECT count(*) FROM tiger_data.state_all;

Expected Results:

 count
-------
 56
(1 row)

Make & Run the State Script

The process of making and running the state script is very similar to what we just did for the nation script.  This script makes the tables for a state (or multiple states) that you specify.

In the command line, run this code to generate the scripts for California:

psql -U postgres -c "SELECT Loader_Generate_Script(ARRAY['CA'], 'windows')" -d geocoder -tA > C:/gisdata/ca_script_load.bat

Note that if you want additional states, add them to the bracketed list separated by commas.  For example, to download California and Nevada, you would run:

psql -U postgres -c "SELECT Loader_Generate_Script(ARRAY['CA', 'NV'], 'windows')" -d geocoder -tA > C:/gisdata/ca_nv_script_load.bat

Change directories back to your staging folder if needed by running this command in the command line:

cd C:/gisdata

Run the script by entering the name of the file into the command prompt:

ca_script_load.bat

The state script is going to take a while to load. It downloads many, many files. So go get a cup of coffee/tea/hot chocolate and a cookie while you wait.

Finally, we need to analyze all of the TIGER data and update the statistics for each table (reference). In the pgAdmin 4 Query Tool, run each of these lines separately:

SELECT install_missing_indexes();
vacuum analyze verbose tiger.addr;
vacuum analyze verbose tiger.edges;
vacuum analyze verbose tiger.faces;
vacuum analyze verbose tiger.featnames;
vacuum analyze verbose tiger.place;
vacuum analyze verbose tiger.cousub;
vacuum analyze verbose tiger.county;
vacuum analyze verbose tiger.state;
vacuum analyze verbose tiger.zip_lookup_base;
vacuum analyze verbose tiger.zip_state;
vacuum analyze verbose tiger.zip_state_loc;

Try it out

I know, it doesn’t seem like we’ve got a lot to show for all that work.  If you look in the tables for your tiger schemas, you might notice that there are more of them, but let’s get some concrete results.
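If you’d like to see the new tables without clicking through the browser panel, this query lists everything in the tiger_data schema:

SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'tiger_data'
ORDER BY table_name;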

Run this code in the pgAdmin 4 Query Tool to return a coordinate for an address in WKT (well-known text).  Change the address to one in the state you’ve downloaded.  (More testing options here.)

SELECT g.rating, ST_AsText(ST_SnapToGrid(g.geomout,0.00001)) As wktlonlat,
(addy).address As stno, (addy).streetname As street,
(addy).streettypeabbrev As styp, (addy).location As city, (addy).stateabbrev As st,(addy).zip
FROM geocode('424 3rd St, Davis, CA 95616',1) As g;

It should return:

 rating |          wktlonlat          | stno | street | styp |  city   |  st  |   zip
--------+-----------------------------+------+--------+------+---------+------+---------
      0 | 'POINT(-121.74334 38.5441)' | 424  | '3rd'  | 'St' | 'Davis' | 'CA' | '95616'

(That’s the address for the Davis, CA downtown post office, in case you were wondering.)

You Did It!

You did it! I sincerely hope that at this point you have a geocoder that’s ready to use. Next up, I need to figure out how to geocode a bunch of addresses. More on that soon!



A Visual Summary of FOSS4G 2017

A page of doodles summarizing FOSS4G 2017

It’s the afternoon of Saturday, August 19th.  I’m sitting near the back of an airplane wondering how I’m going to keep from going stir crazy on this almost 6 hour flight back to California.  As the plane takes off, I’m thinking about the last week at FOSS4G 2017 and images are flashing through my brain.  Ok, I think, once I can take some stuff out of my bag (neatly stowed under the seat in front of me), I’ll doodle for a while.  That should keep me busy for an hour or so.  5 hours later, I’ve almost finished this whole page and it’s just about time to land.

What struck me at the conference was how important the giving and sharing culture of our community is.  The news from Charlottesville and the US President’s response seemed impossible. I caught up with people I hadn’t seen in a year, met people in person that I’d only known on Twitter, and found potential collaborators for a pet project that needs more people.  I also found inspiration in many of the talks and came home wanting to get started on a thousand new things (except that this cold someone shared with me is preventing me from getting too much done yet).  The best experience though was when I got to share my skills with the community.  I taught a workshop (at Harvard!!!) to 20 incredibly skilled people and gave a talk to about 80 – both about cartography.  I hope that what I shared will help them with some aspect of their work.

While I think it’s clear to everyone how the coders contribute, I think we need to do a better job acknowledging the contributions of users.  After hearing a few presenters say they didn’t feel like they belonged because they were “just users”, I started speaking up during the question time telling the speaker how important their role in the community is.  Making every member of our community feel welcome and valued is key to our continued success.

We also need to do a better job with diversity.  The breakdown of attendees neatly avoided discussing race and gender.  A look around the room probably told you everything you needed to know about those topics, though.  How do we fix it?  I’m not sure, but if we keep the discussion going rather than ignoring it, we’ll find the solution faster.

So, thank you to everyone who made FOSS4G 2017 possible.



My art process:
  1. Sketch in an image (pencil: Mirado Black Warrior HB 2)
  2. Ink in the sketch (pen: Pilot Rolling Ball Precise V7 fine or Pentel Sign Pen ST150 felt tip)
  3. Fill in the spaces between the images (pen: Pentel Sign Pen SES15N brush tip)
  4. Erase the pencil


The Last Aphid: Another Charley Harper Inspired Quilt Block

The next block in my Charley Harper quilt is my rendition of the artist’s “The Last Aphid,” which features four ladybugs staring down an aphid that they’ve cornered between them.  This block was a challenge because of its symmetry.  Everything has to line up or it looks wrong (allowing for some error, of course, because it’s applique and it’s never going to be perfect).

The finished Last Aphid block

Like the other blocks and quilt plan, I used Inkscape to design the pattern.

One tool that has helped me immensely through this block and the last is masking tape.  Yup.  Good ol’ masking tape.  It’s not for holding anything down, but rather for lifting something up, namely cat hair.  My guy cat loves to get in the middle of anything I’m doing (case in point: he’s currently sitting next to me pushing the arrow keys as I try to type) and he’s a real big shedder.  I guess I should be glad he’s a short-hair.  Aside from just not looking that great, cat hair is a problem because it gets into the thread as I sew and causes it to snarl up into a knot more than it normally would.  To get rid of the cat hair, I stick the masking tape down on the fabric and pull it off; the cat hair comes with it.  It’s pretty much a cheap version of a lint roller.


Dealing with Factors in R

What is the deal with the data type “Factor” in R?  It has a purpose, and I know that a number of packages use this format; however, I often find that (1) my data somehow ends up in this format and (2) it’s not what I want.

My goal for this post: to write down what I’ve learned (this time, again!) before I forget and have to learn it all over again next time (just like all the other times).  If you found this, I hope it’s helpful and that you came here before you started tearing your hair out, yelling at the computer, or banging your head on the desk.

So here we go.  Add your ways to deal with factors in the comments and I’ll update the page as needed.

Avoid Creating Factors

The number 1 best way to deal with factors (when you don’t need them) is to not create them in the first place!  When you import a csv or other similar data, use the option stringsAsFactors = FALSE (or similar… read the docs to find the options for the command you’re using) to make sure your string data isn’t automatically converted to a factor.  R will sometimes convert what seems to be clearly numerical data to a factor as well, so even if you only have numbers, you may still need this option.

MyData<-read.csv(file="SomeData.csv", header=TRUE, stringsAsFactors = FALSE)

Convert Data

Ok, but what if creating a factor is unavoidable?  You can convert it.  It’s not intuitive, so I keep forgetting: wrap your factor in as.character() to get just the data.  It’s now in string format, so if you need numbers, wrap all of that in as.numeric().

#Convert from a factor to a character vector
CharacterData<-as.character(MyFactor)

#Convert from a factor to numerical data
NumericalData<-as.numeric(as.character(MyFactor))
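If you’re wondering why the extra as.character() matters, here’s a quick demonstration with made-up values: calling as.numeric() directly on a factor returns the underlying level codes, not the values you see printed.

#A factor of numbers stored as strings
f <- factor(c("10", "55", "3"))

as.numeric(f)
# [1] 1 3 2    <- the level codes, not your data!

as.numeric(as.character(f))
# [1] 10 55 3  <- the actual values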


What’s Missing?

Do you have any other tricks to working with data that ends up as a Factor?  Let me know in the comments!


Inkscape for Applique Sewing Patterns

Inkscape is a vector illustration program, so most people think of it as an art program for producing slick graphics.  But it’s a really useful tool for planning and preparing for other art forms.  For example, I’ve been using it for sewing.  What?  Yes, sewing.  It’s incredibly useful for drawing patterns.  Recently I’ve been working on a needle turn applique quilt based on the work of Charley Harper, but for the past few years I’ve made felt Christmas ornaments for friends and family, and for all of these I used Inkscape to draw the patterns.

If you’re familiar with Inkscape already, making applique patterns will be pretty straightforward. If you’re new to the program, I highly recommend working through a couple of tutorials.  Here’s my general workflow (yours may differ):

  1. Start with an image.  On Pinterest, great projects abound, but sometimes the post links to costly instructions, or no pattern at all.  I’ve also found things that I like the look of, but are a different scale – too big or too small.  Or, as with my latest project, I’m creating my own pattern pieces from an image.  Look for images with distinct polygons of colors.  Blended or faded areas are going to be harder to duplicate with applique unless you can find fabric with the right fade or you dye your own.
  2. Put the image into an Inkscape file and resize it to the size you want your final project to be.
  3. Draw polygons around each of the colors you see in your image.  You’ll want to think about how you’ll put the whole thing together as you trace, so think about how the layers will work together.  For example, if you have polka-dots, you’ll want to place the circles on top of a larger background color, not have a section of background color with holes cut out like Swiss cheese.


    Start by tracing out all of the sections you’ll need to cut from various colors of fabric.

  4. Start a new Inkscape file and make the size of the page whatever size you plan to print.  For those in the US, you’ll probably want US Letter Size.
  5. Copy your polygons from the first file and paste them into the second.  (I find keeping both files is helpful later for placement of the pieces.)  Arrange all your polygons on the page so that none overlap.  For larger projects, I’ve made several files. If the same shape shows up multiple times in your pattern, for example maybe eyes or ears, you only need to include that shape once.


    One of 3 pages of pattern pieces for a larger work with many pieces.

  6. On each piece, I like to print the color of the fabric I plan to use and how many of this piece I need to cut out.
  7. If you have a really big pattern piece that’s bigger than your printable page size, there’s a solution.  Put the big pieces into one Inkscape file, then size the page to the content, giving it a reasonable margin for your printer.  Then save the file as a PDF.  Open the PDF and in the print options, pick Poster (or similar setting).  It will divide up the pattern into printable pages.  Then you can tape the pages together before you cut out the pattern.


    My printer settings have an option to print large PDFs in pieces.  Yours probably has something similar.  Super useful for printing larger pattern pieces.

Bonus! Now when you’re placing your pieces, some of them you can just eyeball and it will be fine.  In some situations, though, you might need to be more precise.  Because you have your original pattern tracing in Inkscape, you can go back to that file and measure the distance between items.  I set my units to inches and draw a line, then see how long my line is.  Super simple, but very effective.


The red line measures how long the vertical eye whisker is.

See the finished piece on a previous blog post.


Limp on a Limb: Another Charley Harper Inspired Quilt Block

This second block in my Charley Harper Quilt is inspired by the piece Limp on a Limb.  If you compare the original and the block, you’ll see that I’ve made some edits.  Most notably, I have decided (for now at least) to not include the leaf pattern in the background.  Repeated shapes are a hallmark of Harper’s work, so including the pattern would be more true to the work, but in reality, it would require extensive embroidery and I’m afraid that won’t hold up long-term, especially given the light weight of the fabric I’ve chosen for the background.  That being said, the fabric I chose is mottled green and I hope it at least gives the piece some more depth.


Example diagram for placing the cat’s eye whiskers.

For this block, I thought I would show some of the detail of how I transfer lines from the pattern to the piece.  All my patterns are digital SVG files, which means I can measure the size of each object in Inkscape.  (I promise to write a post about this with more detail and hopefully convert some quilters to Inkscape quilt designers… but later.  Ok, it’s later. See the post here.)  I make measurements from a reference point, draw out a diagram, then transfer the measurements to the fabric using a chalk pencil (either white or blue, depending on the color of the fabric).  Then I embroider.  It’s important to mark as little as possible on the fabric with the chalk pencils, because the marks are hard to get out.

IMG_3862.JPG

Faint chalk pencil marks show where to embroider the eye whiskers.

When placing any object in a piece, whether it’s embroidery or a layer of fabric, I’ve found that it’s important to figure out what feature the new object needs to be in line with. For placing the eye whiskers, at first I was going to reference the corner of the eye. It seemed logical. Then I found that in the original piece, the left eye and whiskers don’t line up. What? But there’s always such precision in Harper’s work! After some staring at the piece, I realized that the vertical line of both sets of eye whiskers intersects the point where the ear meets the head. Bingo! Now my whiskers are in the right spot.


The finished piece.