
Change SP Object Polygon Rendering Order in R

The Problem

I have a geoJSON file that was made by combining many (as in ~250) geoJSON files each containing a single polygon in R.  The polygons are of varying size and can overlap in space.  When I open the file in a GIS, often the smaller polygons are hidden under the larger ones, making displaying the data challenging, if not impossible.

Map showing few polygons when there should be many

I know there are more polygons than are visible in this area, so they must be hiding under the larger polygons.

Map showing larger polygons obscuring smaller polygons

With transparency set to 60% for each polygon (see the Draw Effects dialog in QGIS for the fill symbol layer), you can see that smaller polygons are hiding under larger ones.

The Goal

I would prefer that the polygons stack up so that the largest is on the bottom and the smallest is on the top.  This requires that I change the rendering order based on the area of each polygon.

The Quick Fix

QGIS has an option to control the rendering order.  Open the Layer Properties; go to the Style tab; check the box for “Control feature rendering order”; click the button on the right with the A Z and an arrow to enter the expression you want (I would order by area for example).

Why isn’t this the right solution for me?  I’m sharing a user-contributed dataset.  One of the goals is that anyone can use it.  When polygons are obscured, it makes the dataset just a little harder to use and understand, which means people won’t like using it.  Another goal is that anyone with a reasonable understanding of GIS can contribute.  If I have to write a bunch of instructions on how to visualize the data before they can add to the dataset, people are less likely to contribute.

Map showing all the polygons expected.

Now I can see all of the polygons because the larger ones are on the bottom and the smaller ones are on top.

My Solution

Hat tip to Alex Mandel and Ani Ghosh for spending a couple of hours with me hammering out a solution.  We used R because I already had a script that takes all of the polygons made by contributors and combines them into one file.  It made sense in this case to add a few lines to this post-processing code to re-order the polygons before sending the results to the GitHub repository.

What you need to know about rendering order & SP Objects

The order in which items in an SP object are rendered is controlled by the object’s ID value.  The ID value is hidden in the ID slot nested inside the polygons slot.  If you change these values, you change the order in which items are rendered.  ID = 1 goes first, ID = 2 goes on top of 1, 3 goes on top of 2, and so on.  So for my case, assigning the IDs based on area will get me the solution.
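
If you want to see these IDs for yourself, here’s a minimal sketch (myPolygons stands in for whatever your SP object is called):

#print the ID slot of each polygon, in rendering order
sapply(slot(myPolygons, "polygons"), function(p){slot(p, "ID")})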

How

This Stack Exchange post on re-ordering spatial data was a big help in the process.  Note that every SP object should have the slots and general structure I used here; there’s nothing special about this dataset.  If you’d like the dataset and another copy of the R code, it’s in the UC Davis Library’s AVA repository.

#load the libraries you'll need
library(raster)
library(geojsonio)
library(rgdal)

### FIRST: Some context about how I made my dataset in the first place

# search in my working directory for the files inside the folders 
# called "avas" and "tbd"
avas <- list.files(path="./avas", pattern = "json$", full.names = TRUE)
tbd <- list.files(path="./tbd", pattern = "json$", full.names = TRUE)

#put the two lists into one list
gj <- c(avas, tbd)

#read all the geojson files & create an SP object
vects <- lapply(gj, geojson_read, what="sp")

#combine all the vectors together. bind() is from the raster package.
#probably could just rbind geojson lists too, but that's harder to plot
all <- do.call(bind, vects)

#Change any "N/A" data to nulls
all@data[all@data=="N/A"]<- NA


### SECOND: How I did the sorting

#Calculate area of polygons - needed for sorting purposes
# the function returns the value in the area slot of each row
all@data$area<-sapply(slot(all, "polygons"), function(i){slot(i, "area")})

#add the row names in a column - needed for sorting purposes
all$rows<-row.names(all)

#Order by area - row names & area are needed here
# decreasing = TRUE means we list the large polygons first
all <- all[match(all[order(all$area, decreasing = TRUE), ]$rows, row.names(all@data)), ]

#add new ID column - essentially you are numbering the rows 
# from 1 to the number of rows you have but in the order of 
# largest to smallest area
all$newid<-1:nrow(all)

#assign the new id to the ID field of each polygon
for (i in 1:nrow(all@data)){
  all@polygons[[i]]@ID <- as.character(all$newid[i])
}

#drop the 3 columns we added for sorting purposes (optional)
all@data<-all@data[,1:(ncol(all@data)-3)]

#write the data to a file in my working directory
geojson_write(all, file="avas.geojson", overwrite=TRUE, convert_wgs84 = TRUE)
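
If you want to double-check the result before sharing it, here’s a quick sanity check (my addition, not part of the original script):

#optional check: the IDs should now run from 1 to the number of polygons
ids <- sapply(slot(all, "polygons"), function(p){slot(p, "ID")})
stopifnot(identical(ids, as.character(1:length(ids))))

#plot to confirm: large polygons draw first, small ones end up on top
plot(all)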

How to make a PostGIS TIGER Geocoder in Less than 5 Days


It doesn’t look like much, but this image represents success!

I have been through 5 days of frustration wading through tutorials for how to make a Tiger geocoder in PostGIS.  My complaints, in a nutshell, are that every tutorial is either non-linear or assumes you already know what you’re doing… or both.  If I knew what I was doing, I wouldn’t need a tutorial.  And your notes on how you made it work are lovely, if only I could figure out what part of the process they pertain to.  Enough griping.  Here’s how I finally got this thing to work.

Note: I’m on Windows 10, PostgreSQL 10, PostGIS 2.4.0 bundle.  This tutorial draws from many existing tutorials including postgis.net and this one, with additional help from Alex Mandel.

What is a Tiger Geocoder?

Let’s start by breaking this down.  A geocoder is a tool composed of a set of reference data that you can use to estimate a geographic coordinate for any given address.  Tiger, in this case, does not refer to a large endangered cat, but rather the US Census’ TIGER spatial files.  This is the reference data we will use.  The process we’re about to embark on should make assembling this data easy by using tools that will help us download the data and put it into a PostGIS database.

Install Software & Prep Folders

You will need to install:

  1. PostgreSQL 10 – the database program that’s going to make all of this work
    1. The installer should walk you through installation options.  Write down the port (keeping the default is fine) and password you choose.  Pick a password you like to type because it comes up a lot.
  2. PostGIS 2.4.x bundle – PostGIS gives PostgreSQL spatial capabilities (much like a super power).  Install the bundle option because it comes with things you’ll need, saving a few steps.
    1. Make sure you’ve installed PostgreSQL first.
    2. The installer should walk you through the installation options.
  3. wget 1.19.1 (or a relatively recent version) – this is a tool that the geocoder scripts will need to download files from the Census
    1. Save the .zip option to your computer.
    2. Unzip it in your C:/ drive so it’s easy to access later.  Remember where you put it because you’ll need the path later.
  4. 7zip – this is an open source file unzipping tool that the scripts will use to unzip the files they download from the Census website.

You will need to create a folder on your computer to be your “staging folder”.  This is where the scripts we’ll run later will save the files they need to download.

  1. Make a folder called “gisdata” in a place where you have write permissions and can find it later.  For example: C:\gisdata
  2. Inside your gisdata folder, make another folder called “temp”. For example: C:\gisdata\temp

Make a Database

We need to make an empty database to put our files into.

Open pgAdmin 4.  This is the graphical user interface for PostgreSQL that should have installed with the PostgreSQL files.

If pgAdmin 4 doesn’t start or gives an error, you may need to start the service that makes it run.  In the Windows search bar, search “services”.  In the Services window, right click postgresql-x64-10 and choose “start”.
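
You can also start the service from an administrator Command Prompt (the service name may differ slightly depending on the version you installed):

net start postgresql-x64-10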

In the browser panel on the left side of the pgAdmin 4 window, click the + to expand Servers and PostgreSQL 10.  When it asks for your password, give it the password you wrote down during the installation process.

Right click on PostgreSQL 10 -> Create -> Database

In the Database name field, give it a name with lowercase letters.  I named mine geocoder.  Leave the owner drop-down on postgres.  Click Save.

Expand the Databases entry in the Browser panel to see your new database in the list.
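
For the keyboard-inclined, the equivalent SQL is a single statement you can run from a query window connected to the server:

CREATE DATABASE geocoder;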

Enable Extensions

Right now, our database is just a regular database without spatial functions.  We need to enable the PostGIS extension and the extensions that help us geocode.

Open the Query Tool: in the Tools menu at the top of the screen, select “Query Tool”.  The top section of the query tool is where you type SQL commands.  Underneath this is where the results of your queries are displayed.

Enable the PostGIS extension to give the database spatial capabilities by copying and pasting this code into the query tool and clicking the run button (it looks like a lightning bolt… why?  I don’t know.):

CREATE EXTENSION postgis;

The next three lines enable extensions that help with the geocoding process (run them one at a time):

CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION postgis_tiger_geocoder;
CREATE EXTENSION address_standardizer;

Let’s make sure the extensions loaded correctly.  This line of code should take the address (in between the parentheses and single quotes) and break it down into its components (create a “normalized address”):

SELECT na.address, na.streetname,na.streettypeabbrev, na.zip
	FROM normalize_address('1 Devonshire Place, Boston, MA 02109') AS na;

The output should look something like this:

 address | streetname | streettypeabbrev |  zip
---------+------------+------------------+-------
       1 | Devonshire | Pl               | 02109

Edit the loader tables

The scripts we’re going to run (in a few steps from now) will need some information to run correctly.  The place they look for this information is a set of two tables that have been added to your database by the extensions we just enabled.

In the browser panel (on the left side of the window), expand your geocoder database, then the schemas list, the tiger list, and finally the Tables list.  Make sure you’re looking at the tables list inside of the tiger list… each schema has its own list of tables.

Right click on the loader_platform table, and select “View/Edit Data”, and then “All Rows”.  Now we can edit one of the entries to tell the scripts where to look for certain files and folders.

One row of the table has “windows” in the os (operating system) column.  In that row, double click the cell in the declare_sect column to open up a window that will let you edit the text.  For each line, you’ll need to make sure you type the path to the folder or file needed.  This is what mine looks like after editing:

set TMPDIR=C:\gisdata\temp
set UNZIPTOOL="C:\Program Files\7-Zip\7z.exe"
set WGETTOOL="C:\wget-1.19.1-win64\wget.exe"
set PGBIN=C:\Program Files\PostgreSQL\10\bin\
set PGPORT=5432
set PGHOST=localhost
set PGUSER=postgres
set PGPASSWORD=Password123
set PGDATABASE=geocoder
set PSQL="C:\Program Files\PostgreSQL\10\bin\psql.exe"
set SHP2PGSQL="C:\Program Files\PostgreSQL\10\bin\shp2pgsql.exe"
cd C:\gisdata

(No, that’s not my actual password.)  Note that some of the paths might be correct and others will not be, so check them all.  When you’re done, click Save and then close the table with the x button in the upper right corner (side note: I find the pgAdmin 4 interface to be rather unintuitive).  If it asks you to save your table, tell it yes.

Now open the loader_variables table.  Change the staging_folder to your chosen staging folder.  I hard-coded mine into the last table because the scripts didn’t seem to be recognizing entries in this table, but change it anyway just to be sure.  Again, save and exit.
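
If the table editor gives you trouble, you can make the same change with SQL in the Query Tool instead.  A sketch, assuming the column names from the standard PostGIS 2.4 tiger schema:

UPDATE tiger.loader_variables SET staging_fold = 'C:/gisdata';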

Make & Run the Scripts

Add PostgreSQL path to Windows

Before we can use PostgreSQL in the command line (sorry to spring that on you… deep breaths… we’ll work through this together), we need to make sure Windows knows about PostgreSQL.

For Windows 7, this Stack Overflow post should help.  For Windows 8 and 10, try this one.

Make the Nation Scripts

Next, we are going to run a line of code that will automatically generate a script.  We’ll run that script to automatically download data from the Census and place it into your database.  (Yes, we are going to run code that makes bigger code that downloads data.)

Open a command line terminal (on Windows, search “cmd” in the search box and select the Command Prompt).  Copy and paste this line of code into your terminal window, changing the path to your staging folder (but keep the file name at the end), then hit enter to run the code:

psql -U postgres -c "SELECT Loader_Generate_Nation_Script('windows')" -d geocoder -tA > C:/gisdata/nation_script_load.bat

(Quick note about what this code does… “-U postgres” tells the command that the user name for the database we want to work with is “postgres”. “-d geocoder” tells it that the name of the database to use is “geocoder”. “SELECT Loader_Generate_Nation_Script” is a function that postGIS can use to make the script we’re going to need. The ‘windows’ argument actually tells it to read the line in the loader_platform table we edited earlier.)

The terminal will probably return this line:

Password for user postgres:

Type in your password (it won’t show anything on the screen as you type) and hit enter.  A new prompt (the path of the folder you’re in and a >) will appear when it’s finished.  Your staging folder should now have a file called nation_script_load.bat.  This new file is a batch file containing a series of commands for the computer to run that will download files from the Census’ website, unzip them, and add them to your database automatically.

Run the Nation Script

Running your new batch script in Windows is fairly straightforward (seriously, one step had to be, right?).

First, if you’re not already in the directory for your staging folder, change directories by running this command in the command line (How do you tell?  The command prompt shows the directory you’re in):

cd C:/gisdata

Now your command prompt should say C:\gisdata>

To run the script, in the command line, type

nation_script_load.bat

and hit enter to run your .bat file.  You will see a stream of commands run across your terminal window, and it may open a 7zip dialog as it unzips files.  This could take a little while.

When it’s done, you should have a tiger_data schema with tables called county_all and state_all.  You can check to make sure the tables have data by running these lines in the pgAdmin 4 Query Tool:

Check the county_all Table:

SELECT count(*) FROM tiger_data.county_all;

Expected Results:

count
-------
 3233
(1 row)

Check the state_all table:

SELECT count(*) FROM tiger_data.state_all;

Expected Results:

 count
-------
 56
(1 row)

Make & Run the State Script

The process of making and running the state script is very similar to what we just did for the national script.  This script makes the tables for a state (or multiple states) that you specify.

In the command line, run this code to generate the scripts for California:

psql -U postgres -c "SELECT Loader_Generate_Script(ARRAY['CA'], 'windows')" -d geocoder -tA > C:/gisdata/ca_script_load.bat

Note that if you want additional states, add them to the bracketed list separated by commas.  For example, to download California and Nevada, you would run:

psql -U postgres -c "SELECT Loader_Generate_Script(ARRAY['CA', 'NV'], 'windows')" -d geocoder -tA > C:/gisdata/ca_nv_script_load.bat

Change directories back to your staging folder if needed by running this command in the command line:

cd C:/gisdata

Run the script by entering the name of the file into the command prompt:

ca_script_load.bat

The state script is going to take a while to load. It downloads many, many files. So go get a cup of coffee/tea/hot chocolate and a cookie while you wait.
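
When the script finishes, you can spot-check that the state data actually loaded.  State tables are prefixed with the state abbreviation, so (assuming you loaded CA) this query should return a large, non-zero count:

SELECT count(*) FROM tiger_data.ca_edges;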

Finally, we need to analyze all of the TIGER data and update the stats for each table (reference).  In the pgAdmin 4 Query Tool, run each of these lines separately:

SELECT install_missing_indexes();
vacuum analyze verbose tiger.addr;
vacuum analyze verbose tiger.edges;
vacuum analyze verbose tiger.faces;
vacuum analyze verbose tiger.featnames;
vacuum analyze verbose tiger.place;
vacuum analyze verbose tiger.cousub;
vacuum analyze verbose tiger.county;
vacuum analyze verbose tiger.state;
vacuum analyze verbose tiger.zip_lookup_base;
vacuum analyze verbose tiger.zip_state;
vacuum analyze verbose tiger.zip_state_loc;

Try it out

I know, it doesn’t seem like we’ve got a lot to show for all that work.  If you look in the tables for your tiger schemas, you might notice that there are more of them, but let’s get some concrete results.

Run this code in the pgAdmin 4 Query Tool to return a coordinate for an address in WKT (well known text).  Change the address to one in the state you’ve downloaded.  (More testing options here.)

SELECT g.rating, ST_AsText(ST_SnapToGrid(g.geomout,0.00001)) As wktlonlat,
(addy).address As stno, (addy).streetname As street,
(addy).streettypeabbrev As styp, (addy).location As city, (addy).stateabbrev As st,(addy).zip
FROM geocode('424 3rd St, Davis, CA 95616',1) As g;

It should return:

 rating |          wktlonlat          | stno | street | styp |  city   | st   |  zip
--------+-----------------------------+------+--------+------+---------+------+---------
      0 | 'POINT(-121.74334 38.5441)' | 424  | '3rd'  | 'St' | 'Davis' | 'CA' | '95616'

(That’s the address for the Davis, CA downtown post office, in case you were wondering.)
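
As a bonus check, the geocoder works in reverse too.  This sketch uses the reverse_geocode() function on the coordinate we just got back and should return roughly the same address:

SELECT pprint_addy(r.addy[1]) AS address
FROM reverse_geocode(ST_GeomFromText('POINT(-121.74334 38.5441)', 4269)) AS r;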

You Did It!

You did it! I sincerely hope that at this point you have a geocoder that’s ready to use. Next up, I need to figure out how to geocode a bunch of addresses. More on that soon!

Dealing with Factors in R

What is the deal with the data type “Factor” in R?  It has a purpose, and I know that a number of packages use this format; however, I often find that (1) my data somehow ends up in the format and (2) it’s not what I want.

My goal for this post: to write down what I’ve learned (this time, again!) before I forget and have to learn it all over again next time (just like all the other times).  If you found this, I hope it’s helpful and that you came here before you started tearing your hair out, yelling at the computer, or banging your head on the desk.

So here we go.  Add your ways to deal with factors in the comments and I’ll update the page as needed.

Avoid Creating Factors

The number 1 best way to deal with factors (when you don’t need them) is to not create them in the first place!  When you import a csv or other similar data, use the option stringsAsFactors = FALSE (or similar… read the docs to find the options for the command you’re using) to make sure your string data isn’t converted automatically to a factor.  R will sometimes convert what seems to clearly be numerical data to a factor as well, so even if you only have numbers, you may still need this option.

MyData<-read.csv(file="SomeData.csv", header=TRUE, stringsAsFactors = FALSE)

Convert Data

Ok, but what if creating a factor is unavoidable?  You can convert it.  It’s not intuitive so I keep forgetting.  Wrap your factor in an as.character() to just get the data.  It’s now in string format, so if you need numbers, wrap all of that in as.numeric().

#Convert from a factor to a character vector
CharacterData<-as.character(MyFactor)

#Convert from a factor to numerical data
NumericalData<-as.numeric(as.character(MyFactor))
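
Why the extra as.character() step?  Calling as.numeric() directly on a factor returns the underlying level codes, not the values you see.  A quick demonstration:

f <- factor(c("10", "20", "30"))
as.numeric(f)                #returns 1 2 3 (the level codes!)
as.numeric(as.character(f))  #returns 10 20 30 (the actual values)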


What’s Missing?

Do you have any other tricks to working with data that ends up as a Factor?  Let me know in the comments!


Making of a Moon Tree Map


I’m presenting a workflow for finishing maps in Inkscape at FOSS4G North America this year (2016). To really show the process effectively, I made a map and took screenshots along the way.

The Data

I decided to work with Moon Tree location data.  It’s quirky and interesting… and given that this is a geek conference I figured the space reference would be appreciated.  A few months ago I learned about Moon Trees watching an episode of Huell Howser on KVIE Public Television and then visited the one on the California State Capitol grounds.  I later learned from my aunt that my grandfather was a part of the telemetry crew that retrieved the Apollo 14 mission that carried the seeds that would become the Moon Trees, so there’s something of a connection to this idea.  Followers of my research also know that I’m a plant person, particularly plant geography.  So this seemed like the perfect dataset.  I was fortunate to find that Heather Archuletta had already digitized the locations of public trees and made them available in KML format.

Data Processing

The KML format is great for some applications (particularly Google Maps, for which it was designed), but it poses some challenges.  I spent several hours… maybe more than I want to admit… formatting the .dbf to make the shapefile more useful for my purposes.  I created columns and standardized the content.  The map does not present all the data available (um… duh.).  It was challenging enough getting all of this onto one page.

Yes, Inkscape is Necessary

You can’t make this map entirely in QGIS.  I mean, normally you can make some fantastic maps in QGIS, but this one is actually not possible.  Right now, QGIS can’t handle map frames with different projections.  I tried, but I found that even when the map composer looked right, all three export options changed the projection and center of each frame to match that of the last active frame.  So I ended up with a layout with three zoom levels centered on Brazil… interesting, but not what I had in mind.  So I exported an .svg file from the map composer three times – one for each map frame – and put them together in Inkscape.

Sneaky Cartography

One of the methods I often use in my maps is to create subtle blurred halos behind text or icons that might otherwise get lost on a busy background.  I don’t like when the viewer sees the halos (maybe it’s from teaching ArcMap at universities for far too many years).  It’s not quite a pet peeve, but I think there are often better ways to handle busy backgrounds and readability.  My blog, my soapbox.  Can you spot them?  There are a couple in the map and in the final slide of the pitch video.  It doesn’t look like much, but I promise the text is easier to read.

The texture on the continents is the moon.  I clipped a photo of the moon using the continent outlines.  I liked the idea of trees on the moon.

Icons

The icons are special to me.  I’ve been really wanting to make a map using images from Phylopic and I thought this was the perfect opportunity… but… but… no one had uploaded outlines for any of the species I needed.  So I made them and uploaded them.  So if you want an .svg of these, help yourself.  If, however, you need dinosaurs, they’ve got you covered.

Watch it happen:

My pitch video captures the process from start to finish:

Want more open source cartography?

Come to FOSS4G North America and see my talk and several others focused on cartography.  I’ll cover Inkscape methods and tools commonly used in cartography.


Accidental Ferns

I have this moss terrarium that is not doing that well.  It’s never done well.  I guess moss doesn’t really want to live in a jar.  Probably more than a year ago, I dropped a maidenhair fern frond in there that was loaded with spores.  I thought it might sprout, but after the leaf decayed, nothing happened so I forgot about it… not that I really knew what fern sprouts looked like.  Months ago, I started seeing this strange structure growing out of my dying moss clumps.  It kind of looks like tiny kelp.  I thought maybe it was a liverwort or some moss structure that grows from moss in some last attempt to live.  My “moss kelp” eventually grew some branches, so I thought it was making spores.


From left to right: some scraggly moss, young maidenhair ferns, and the fern prothallia

Fast forward to today.  My maidenhair fern in my office (a division of the one mentioned earlier) is dropping spores all over the window sill, so I did some internet research on how to grow ferns from spores.  That’s when I discovered I had actually already done it.  The “moss kelp” is the prothallium, or gametophyte, of the fern (the structure where fertilization happens).  From the prothallium grows the fern we recognize.  Now I wonder if I can do it again on purpose.

A note on the photograph: photographing prothallia is really difficult.  I was frustrated at the lack of good photos online, but now I understand, so please excuse my lack of detail in my photo.  I may try to get some better macro photos later.


Spatially Enabled Zotero Database

As a geographer, I’m a visual person.  I like to see distributions on a map, and where things are matters to me.  A few years ago, while I was writing a paper, I became overwhelmed with trying to remember the locations of the studies I had read (for coastal plants, latitude matters), so I started marking the locations of studies on a map and eventually turned it into a printed map.


But adding new studies and sharing the results is cumbersome, and the spatial data is largely separate from the citation information.  So I set out to find a way to store spatial information in my citation database and access it for mapping purposes.  The end result (which is still a work in progress at press time) is a web map of coastal vegetation literature that updates when new citations are added to my Zotero database online.


How I Did It:

Key ingredients: Zotero, QGIS, Spatialite, Zotero Online Account

I started working with the Zotero database I already have populated with literature relevant to my research on coastal vegetation.  I moved citations that I wanted to map into a separate folder just to make the API queries easier later.  I made a point in a shapefile for the location of each study using QGIS.  I gave the attribute table fields for the in-text citation and a text description of the location for human-readability, but the most important field is the ZoteroKey.  This is the item key that uniquely identifies each record in the Zotero database.  To find the key for each citation, in your local version of Zotero, right click on the record and pick “generate report”.  The text for the key is after the underscore in the URL for the report.  In the online version, click the citation in your list.  The key is at the end of the URL in the page that opens.


My map only has point geometries right now, but that will change in the coming weeks.

The spatial information then had to be added to the Zotero database in Spatialite (the specific queries can be found on GitHub).  The Zotero schema is quite large but not impossible to navigate.  Currently, there is no option to add your own fields to Zotero (I tried… I failed… they tell me the option is coming soon), so I put my geometries into the “Extra” field.  Using Spatialite, I opened the Zotero database and imported my shapefile of citation locations (having new tables doesn’t break the database, thank goodness).  Then I removed any existing information in the “Extra” field and filled it in with geometry information in the style of geoJSON.  The string looks like this:

{"type": "Point", "coordinates": [-123.069403678033, 38.3159528822055]}

After updating the citation records to house the geometries, I synced the changes to my online Zotero repository from my desktop program.  Now it’s ready to go into a web map using the Zotero API.  My webmap code can be found in my GitHub Repository.

What’s Next?

I would like to develop a plug-in for QGIS that makes adding the geometries to the Zotero database easier, because not everyone wants to run SQL queries on an active citation database that has been years in the making (I backed mine up first!).  The interface would show the citations you want to map; users would pick a citation, then click the location on their QGIS project where the citation should be located, and the plug-in would insert the corresponding geometry for them.


Batch Editing Text in Inkscape

(Sarcasm!) Thanks, R & Inkscape!  I totally wanted to mark my outliers with the letter q!

Have you ever opened a PDF of a graph made in R in Inkscape?  For some reason, it appears that my graphs are made with characters from the Dingbats font, which I guess Inkscape doesn’t like, so when I open the file in Inkscape, it changes all of my nice circles to the letter q.  That’s awesome.  How do you fix that?  Inkscape doesn’t let you batch change the text inside a textbox, as far as I can tell.  So, here’s one way to fix it:

  1. Open the PDF file in Inkscape and save it as an SVG file.  Yes, you’ve now got qs instead of os.  It will be ok.
  2. Close the file.
  3. Open the SVG in a text editor (I like Notepad++).
  4. Do a Find & Replace, finding “q” and replacing with “o”.  I recommend reviewing each instance it finds rather than using “replace all”, because one of those qs might be something else.  Here’s the general kind of text you’re looking for: id="tspan4049">q</tspan></text>  See that q?  Change it to an o.
  5. Save the files and close.
  6. Open it up in Inkscape and see what you’ve got.

If anyone has a more elegant way to do this, let me know!  I’m sure there’s a way to change the R output from the start so you don’t have this problem, but this solution was quicker than messing with R for now.
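
If you do want to fix it at the source, R’s pdf() device has a useDingbats argument; setting it to FALSE makes R draw the plotting symbols as actual shapes instead of Dingbats characters, so there’s nothing for Inkscape to mangle:

#write the PDF without the Dingbats symbol font
pdf("myplot.pdf", useDingbats = FALSE)
plot(rnorm(100))
dev.off()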

All fixed!