Duke welcomes artist/illustrator Jennifer McCormick

On the last day of classes, December 4, the Duke community will have a very special treat: a visit from artist and certified medical illustrator Jennifer McCormick. Jennifer has been actively exhibiting and speaking about her work for several years, including a recent TEDx talk at Wake Forest University and an exhibit at the Durham Arts Council.

In Jennifer's work as a medical illustrator, she partners with attorneys to create visualizations that explain complex injuries and medical procedures to jury members. In her fine art, however, she builds on the histories and x-rays of patients to explore "an opportunity for healing, hope, and acceptance." Her unique pieces transform the original clinical imagery of the injury into gorgeous, natural, holistic scenes. In her artist talks, she speaks of "the power of intention" and "our forgotten superpowers" to raise awareness of the importance of art and spirituality for healing.

Jennifer will join us for the final Visualization Friday Forum of the semester. It will be an opportunity for visualization enthusiasts, clinicians, medical imaging specialists, legal scholars, and those interested in the intersection between health and art to gather together for a presentation and conversation. The talk will occur in the standard time slot for the Visualization Friday Forum — noon on Friday, December 4 — but the location is changing to accommodate a larger audience. For one week only, we will meet in Duke Hospital Lecture Hall 2003.

The Visualization Friday Forum is sponsored by the Duke University Libraries (Data and Visualization Services), Duke Information Science + Studies (ISS), and the DiVE group. Jennifer’s visit will also be sponsored by the Trent Center for Bioethics, Humanities & History of Medicine and Duke Law – Academic Technologies.

We are so excited Jennifer has agreed to travel to Duke for a visit.  Please mark your calendars for this event.  If you would like to speak with Jennifer about medical illustrations or the intersection between medicine and spirituality, please contact Angela Zoss.

Enter the 2016 Student Data Visualization Contest

Calling all Duke undergrad and grad students! Have you worked on a course or research project that included some kind of visualization? Maybe you made a map for a history class paper. Maybe you invented a new type of chart to summarize the results of your experiment. Maybe you played around with an infographic builder just for fun.

Now is the time to start thinking about submitting those visualizations to the Duke Student Data Visualization Contest. It’s easy — just grab a screenshot or export an image of your visualization, write up a short description explaining how you made it, and submit it using our Sakai project site (search for “2016 DataVis Contest”). The deadline is right after finals this fall, so just block in a little extra time at the end of the semester once you’re done with your final assignments and projects.

Not sure what would work as a good submission? Check out our Flickr gallery with examples from the past two years.

Not sure if you're eligible? If you were a Duke student (that is, enrolled in a degree-granting program, so no post-docs) any time during 2015, and you did the work while you were a student, you're golden!

Want to know more about the technical details and submission instructions? Check out the full contest instruction site.

Shapefiles vs. Geodatabases

Ever wonder what the difference is between a shapefile and a geodatabase in GIS, and why each storage format suits different purposes? It is important to decide which format to use before beginning your project so you do not have to convert many files midway through.

Basics About Shapefiles:

Shapefiles are a simple storage format that has been used in ArcMap since the 1990s, when Esri created ArcView (the predecessor of today's ArcMap 10.3). As a result, shapefiles have several limitations:

  • They take up more storage space on your computer than a geodatabase
  • Field names cannot be longer than 10 characters
  • Date and time cannot be stored in the same field
  • Raster data is not supported
  • NULL values cannot be stored in a field; when a value is NULL, a shapefile stores 0 instead
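The 10-character limit comes from the dBASE (.dbf) format shapefiles use for attributes: long field names are truncated on export, and colliding names get numeric suffixes. As an illustration only (this is a hypothetical helper sketching the general scheme, not an Esri API), the behavior looks roughly like this:

```python
def truncate_field_names(names, limit=10):
    """Illustrative sketch of dBASE-style field-name truncation.

    Names longer than `limit` characters are cut; when two truncated
    names collide, a numeric suffix is appended (ArcGIS uses a
    similar scheme when exporting tables to shapefiles).
    """
    seen = set()
    result = []
    for name in names:
        short = name[:limit]
        n = 1
        while short in seen:
            suffix = f"_{n}"
            short = name[:limit - len(suffix)] + suffix
            n += 1
        seen.add(short)
        result.append(short)
    return result

print(truncate_field_names(["population_2010", "population_2015"]))
# → ['population', 'populati_1']
```

Both source names truncate to the same 10 characters, so the second is disambiguated, which is why attribute tables exported to shapefiles can end up with cryptic field names.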

Users can create points, lines, and polygons with a shapefile. A shapefile consists of at least three files, though most include around six. A shapefile must have:

  • .shp – this file stores the geometry of the feature
  • .shx – this file stores the index of the geometry
  • .dbf – this file stores the attribute information for the feature

All of a shapefile's files must be stored in the same location and share the same base name, or the shapefile will not load. A shapefile opened in Windows Explorer looks different than it does in ArcCatalog.
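Because every component file must share the same base name and folder, a quick integrity check can be scripted outside ArcGIS. A minimal sketch in plain Python (no ArcGIS required):

```python
from pathlib import Path

# The three files every shapefile must have
REQUIRED = {".shp", ".shx", ".dbf"}

def missing_shapefile_parts(shp_path):
    """Return a sorted list of required sidecar extensions that are
    missing for the shapefile at `shp_path`."""
    shp = Path(shp_path)
    # Collect the extensions of all files sharing the base name
    present = {p.suffix.lower() for p in shp.parent.glob(shp.stem + ".*")}
    return sorted(REQUIRED - present)
```

In a folder containing roads.shp and roads.dbf but no roads.shx, `missing_shapefile_parts("roads.shp")` returns `['.shx']`, flagging a shapefile that will not load.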



Basics About Geodatabases:

Geodatabases allow users to thematically organize their data and store spatial databases, tables, and raster datasets. There are two types of single-user geodatabases: File Geodatabases and Personal Geodatabases. File Geodatabases have many benefits, including:

  • A 1 TB size limit for each dataset
  • Better performance than a Personal Geodatabase
  • Many users can view data inside the File Geodatabase while another user is editing it
  • The geodatabase can be compressed, which helps reduce its size on disk

Personal Geodatabases, on the other hand, were originally designed to be used in conjunction with Microsoft Access, and the geodatabase is stored as an Access file (.mdb). A Personal Geodatabase can therefore be opened directly in Microsoft Access, but the entire geodatabase is limited to 2 GB.

To organize your data into themes, you can create Feature Datasets within a geodatabase. Feature Datasets store Feature Classes (the geodatabase equivalent of shapefiles) that share the same coordinate system. As with shapefiles, users can create points, lines, and polygons with feature classes; feature classes can also store annotation and dimension features.


In order to create advanced datasets in ArcGIS (such as a network dataset, a geometric network, a terrain dataset, or a parcel fabric, or to run topology on an existing layer), you will need to create a Feature Dataset.

You will not be able to access the individual files of a File Geodatabase in Windows Explorer. In Windows Explorer, the Durham_County geodatabase appears only as a folder of system files rather than the organized contents that ArcCatalog shows.




  • Whenever you copy shapefiles, use ArcCatalog. If you use Windows Explorer and do not select all of the files for a shapefile, the shapefile will be corrupted and will not load.
  • When using a geodatabase, use a File Geodatabase. It has more storage capacity, multiple users can view/read it at the same time, and it runs tools and queries faster than a Personal Geodatabase.
  • Use a shapefile when you just want to read the attribute table or only need to run one or two tools or processes. Long-term projects should be organized into a File Geodatabase and Feature Datasets.
  • Many files downloaded from the internet are shapefiles. To import one into your geodatabase, right-click the shapefile, click "Export," and select "To Geodatabase (single)."
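The first tip above can also be scripted: the key is to copy every sidecar file together so the shapefile is never split. A hedged sketch using only the Python standard library (function name and behavior are illustrative, not an ArcGIS tool):

```python
import shutil
from pathlib import Path

def copy_shapefile(shp_path, dest_dir):
    """Copy a shapefile and all of its sidecar files (.shx, .dbf,
    .prj, ...) to dest_dir, keeping the shared base name intact.

    Note: this copies every file matching the base name, so unrelated
    files like 'roads.backup' in the same folder would come along too.
    """
    shp = Path(shp_path)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for part in shp.parent.glob(shp.stem + ".*"):
        shutil.copy2(part, dest / part.name)
        copied.append(part.name)
    return sorted(copied)
```

Copying this way avoids the classic mistake of grabbing only the .shp file in Windows Explorer and ending up with a shapefile that will not load.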


Welcoming our new Data Visualization Analyst — Eric Monson

Data and Visualization Services is proud and excited to welcome Eric Monson, Ph.D., our newest staff member. Eric joins the team as our Data Visualization Analyst, working with Angela Zoss to provide support for data visualization across Duke's campus and community.

Eric worked for several years under the supervision of Rachael Brady, who was the head of the Visualization Technology Group (now the Visualization and Interactive Systems group), the founder of the DiVE, and a hub for the visualization community at Duke. Though transitioning from work in applied physics, Eric quickly became an active member of the broader visualization research community, sharing his experiences developing interactive visualization applications through online forums and professional organizations. His natural design sense contributes to an elegant portfolio of past work, and his work on projects in both the sciences and the humanities gives him an extremely wide range of experience with different datasets, tools, and techniques.

Since DVS began offering visualization services in 2012, Eric has been an active supporter and collaborator. While continuing to work as a Research Scientist, Eric has co-organized the Visualization Friday Forum speaker series, teamed up with Angela on instructional sessions, and been an active supporter of visualization events and initiatives. He is an experienced and patient instructor and will bring many years of consulting experience to bear in this new role.

Over the past three years, demand for visualization support has steadily increased at Duke. With an active workshop series, guest lectures in a variety of courses, individual and small-group consultations, and programming such as the Student Data Visualization Contest, DVS is very happy to be able to boast two staff members with visualization expertise. In the near future, we hope to increase our visualization workshop offerings and continue to identify powerful but easy-to-use tools and techniques that will meet the needs of Duke visualizers. Taking advantage of Eric’s background in sciences and humanities, DVS looks forward to being able to answer a broader range of questions and offer a more diverse set of solutions.

Please join us in welcoming Eric to the team!  As always, feel free to contact askdata@duke.edu with any questions or data-driven research needs.

DVS Fall Workshops

Data and Visualization Services is happy to announce its Fall 2015 Workshop Series. With workshops covering everything from basic data skills to data visualization, we have courses for a wide range of interests and skill levels. New (and redesigned) workshops include:

  • OpenRefine: Data Mining and Transformations, Text Normalization
  • Historical GIS
  • Advanced Excel for Data Projects
  • Analysis with R
  • Web Scraping and Gathering Data from Websites

Workshop descriptions and registration information are available at:





  • OpenRefine: Data Mining and Transformations, Text Normalization – Sep 9
  • Basic Data Cleaning and Analysis for Data Tables – Sep 15
  • Introduction to ArcGIS – Sep 16
  • Easy Interactive Charts and Maps with Tableau – Sep 18
  • Introduction to Stata – Sep 22
  • Historical GIS – Sep 23
  • Advanced Excel for Data Projects – Sep 28
  • Easy Interactive Charts and Maps with Tableau – Sep 29
  • Analysis with R – Sep 30
  • ArcGIS Online – Oct 1
  • Web Scraping and Gathering Data from Websites – Oct 2
  • Advanced Excel for Data Projects – Oct 6
  • Basic Data Cleaning and Analysis for Data Tables – Oct 7
  • Introduction to Stata – Oct 14
  • Introduction to ArcGIS – Oct 15
  • OpenRefine: Data Mining and Transformations, Text Normalization – Oct 20
  • Analysis with R – Oct 20



ModelBuilder in ArcMap

Ever have trouble conceptualizing your project workflow? ModelBuilder allows you to plan your project before you run any tools. When using ModelBuilder in Esri's ArcMap, you create a workflow of your project by adding the data and tools you need. To open ModelBuilder, click the ModelBuilder icon in the Standard toolbar.


Key Points Before You Build Your Model

A model can only be created and saved in a toolbox. To create your model, you first need to create a new toolbox in the Toolboxes > My Toolboxes folder in ArcCatalog. Once you have a new toolbox, right-click it and select New, then Model. When you wish to open an existing model, find your toolbox, right-click the model, and select Edit.

In order to find the results of your model and the data created in the middle of your workflow (also known as intermediate data), you will need to direct the data to a workspace or a Scratch Geodatabase. To set your results to a Scratch Geodatabase in ModelBuilder, click Model, then Model Properties. In the dialog box that opens, select the Environments tab and the Workspace category, and check Scratch Workspace. Before closing the dialog box, select "Values" and navigate to your workspace or geodatabase.


Building and Running a Model

To build a model, click the Add Data or Tool button. Navigate to the System Toolboxes, find the tool you wish to run, and add it to your model. Double-click the tool within the model to open its parameters. Fill out the appropriate fields and select OK.

When the tools and variables are ready for processing, they are colored blue, yellow, or green: blue elements are input variables, yellow elements are tools, and green elements are output variables. When there is an error or the parameters have not been set, the elements have no color.


Once your model is built, click the Run icon to run the model. Depending on the data and the number of tools, the model can take seconds or minutes to run. You can also run one tool at a time; to do this, right-click the tool and select "Run." When the model has finished running, the tools and outputs will have a gray background. To find the results of your model, navigate to the Scratch Workspace you set and add the shapefile or table to ArcMap, or right-click the output variable before running the model and select "Add to Display."

Applying ModelBuilder

The model above demonstrates how to take nationwide county data, North Carolina landmark data, and North Carolina major roads data and find landmarks in Wake County that are within 1 mile of major roads. The first tool in the model (the Select Layer by Attribute tool) extracts Wake County from the nationwide counties polygon layer.

Once Wake County is extracted to a new layer, the North Carolina landmarks layer is clipped to the Wake County layer using the Clip tool. The result is a landmarks point layer for Wake County. The third tool runs the Buffer tool on the primary roads layer in North Carolina. Within the Buffer tool parameters, a distance of 1 mile is chosen and a new polygon layer is created.


Finally, the Wake County landmarks layer is intersected with the buffered major roads layer using the Intersect tool to create the final output. Using ModelBuilder has many benefits: you document the steps used to create your project, and you can easily rerun the model with different inputs after it is built. ModelBuilder also lets users easily determine if and where problems occur in the workflow. When there is an error, a "Failed to Execute" message will appear and tell users which tool was unable to execute. ModelBuilder also makes it easy to change parameters. In the model above, you could change the expression in the Select Layer by Attribute tool from 'Wake' to 'Durham' to find landmarks within 1 mile of major roads in Durham County.
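For readers who prefer scripting, the same select → clip → buffer → intersect workflow can be sketched in Python with arcpy. This is an illustrative sketch only: the function name, layer paths, and the `NAME` field are placeholder assumptions, and running it requires an ArcGIS installation.

```python
def find_landmarks_near_roads(counties, landmarks, roads, out_fc,
                              county_name="Wake", distance="1 Mile"):
    """Sketch of the ModelBuilder workflow as an arcpy script.
    All dataset names and the 'NAME' field are hypothetical."""
    import arcpy  # requires an ArcGIS installation

    # 1. Select the county of interest from the nationwide counties layer
    county_lyr = arcpy.management.SelectLayerByAttribute(
        counties, "NEW_SELECTION", f"NAME = '{county_name}'")

    # 2. Clip the statewide landmarks to the selected county
    arcpy.analysis.Clip(landmarks, county_lyr, "in_memory/county_landmarks")

    # 3. Buffer the major roads by the chosen distance
    arcpy.analysis.Buffer(roads, "in_memory/roads_buffer", distance)

    # 4. Intersect the county landmarks with the road buffer
    arcpy.analysis.Intersect(
        ["in_memory/county_landmarks", "in_memory/roads_buffer"], out_fc)
    return out_fc
```

As with the model, switching `county_name` from "Wake" to "Durham" reruns the whole workflow for a different county without rebuilding anything.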

ArcGIS Open Data

What is Open Data?

Finding data can be challenging. Organizations and government agencies can share their data with the public using Esri's ArcGIS Open Data, a centralized spatial data clearinghouse. Since its inception last year, over 1,600 organizations have provided more than 22,000 open datasets to the public. Open Data allows users to find and download data in different formats, including shapefiles, spreadsheets, and KML documents, as well as APIs (GeoJSON or Esri GeoServices) to call the data from your own application. It also lets you create various types of charts.
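Since the APIs can return GeoJSON, downloads are easy to work with programmatically. A small sketch using only Python's standard library; the sample feature below is made up for illustration, not taken from a real Open Data dataset:

```python
import json

# A minimal GeoJSON FeatureCollection, the structure Open Data's
# GeoJSON API returns (this sample feature is invented).
sample = """{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"NAME": "Durham County Library"},
     "geometry": {"type": "Point", "coordinates": [-78.9, 36.0]}}
  ]
}"""

collection = json.loads(sample)
for feature in collection["features"]:
    name = feature["properties"]["NAME"]
    lon, lat = feature["geometry"]["coordinates"]  # GeoJSON order is [lon, lat]
    print(f"{name}: ({lat}, {lon})")
```

The same loop works on a downloaded dataset of any size, since every GeoJSON FeatureCollection shares this properties/geometry structure.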


How to Find and Use Data

Open Data allows consumers to type in a geographic area or a topic of interest in a single search box.  Once you’ve found data that appears to be what you were looking for, you can use the data for GIS purposes or use a table to create charts and graphs.  If you are looking for GIS data, you can preview the spatial data before downloading by clicking the “Open in ArcGIS” icon.  This takes users to ArcGIS Online where they can create choropleth maps and interact with the attribute table.   Users interested in tabular data can filter it and create various types of charts.  If more analysis of the data is necessary, you can download it by clicking the “Download Dataset” icon; you are able to download the entire dataset or the filtered dataset you’ve been working with.



The Source and Metadata links below the "About" heading provide information about the data, including in-depth descriptions, attributes, and how the data was collected. Below the name of the dataset there are three tabs: "Details," "Table," and "Charts." Under the "Details" tab there are three sections: Description, Dataset Attributes, and Related Datasets. The Dataset Attributes section outlines the fields within the dataset and provides field type information, while the Related Datasets section links to other datasets with geographies or topics similar to the one you've chosen. In the "Table" tab you can view and filter the entire table, and the "Charts" tab allows you to create different charts.

To obtain the most updated version of a dataset, or other updated articles related to it, users should subscribe to the dataset they are interested in. To subscribe, copy the link provided into an RSS reader. For specific data source questions, feel free to ask Data and Visualization Services at askdata@duke.edu.

2015 Student Data Visualization Contest Winners

Our third year of the Duke Student Data Visualization Contest has come and gone, and we had another amazing group of submissions this year.  The 19 visualizations submitted covered a very broad range of subject matter and visualization styles. Especially notable this year was the increase in use of graphic design software like Illustrator, Photoshop, and Inkscape to customize the design of the submissions.  The winners and other submissions to the contest will soon be featured on the Duke Data Visualization Flickr Gallery.

As in the past, the submissions were judged on the basis of five criteria: insightfulness, broad appeal, aesthetics, technical merit, and novelty. The three winning submissions this year exemplify all of these and tell rich stories about three very different types of research projects. The winners will be honored at a public reception on Friday, April 10, from 2:00 p.m. to 3:00 p.m., in the Brandaleone Lab for Data and Visualization Services (in the Edge). They will each receive an Amazon gift card, and a poster version of their projects will be displayed in the lab. We are very grateful to Duke University Libraries and the Sanford School of Public Policy for sponsoring this year's contest.

First place:

Social Circles of Primary Caregivers / Tina Chen


Second place:

Crystal Structure of Human Proliferating Cell Nuclear Antigen (PCNA) for in silico Drug Screen / Yuqian Shi


Third place:

Deep and Extensive Impacts to Watershed Shape and Structure from Mountaintop Mining in West Virginia / Matthew Ross


Please join us in celebrating the outstanding work of these students, as well as the closing of the Places & Spaces: Mapping Science exhibit, on April 10 in the Edge.

DataFest 2015 @ the Edge

Duke Libraries are happy to host the American Statistical Association's DataFest competition the weekend of March 20-22. In its fourth year at Duke, DataFest brings teams of students from across the Research Triangle to compete in a weekend-long competition that stresses data cleaning, analytics, and visualization skills. The Edge provides a central location for the competition, with facilities designed for collaborative, data-driven research.

While the deadline for forming DataFest teams has passed, Data and Visualization Services and Duke's Department of Statistical Sciences are happy to offer another opportunity to participate in DataFest. Starting Monday, March 16th, we are offering four workshops on data analytics and visualization in the four days leading up to the DataFest event. All workshops are open to the public, but we strongly encourage early registration to ensure a seat. Please come join us as we get ready to celebrate ASA DataFest 2015.

DataFest Workshop Series

Monday, March 16th, 6:00-8:00 PM – Introduction to R

Tuesday, March 17th, 1:30-3:00 PM – Easy Interactive Charts and Maps with Tableau

Wednesday, March 18th,  6:00-8:00 PM – Data Munging with R and dplyr

Thursday, March 19th, 7:00-9:00 PM – Visualization in d3


Sharing Files: Your Duke Box.com

Last fall Duke University released its newest file sharing service, known as Duke's Box. By partnering with Box.com, Duke offers a cloud-storage service that is intuitive, secure, and easy to use. Log in with your NetID, share files with colleagues, and have confidence this cloud storage is compliant with all laws and regulations regarding data privacy and security.

Simple to Use

Duke's Box is similar to other cloud-based file storage services that support collaboration, productivity, and synchronization. You can drag and drop files, identify collaborators, and set permissions (read, edit, comment, etc.). But unlike some services, such as Dropbox or Google Drive, Duke's Box enables you to stay in compliance with data privacy and security requirements. Additionally, you can synchronize data across your devices, at your discretion and subject to Duke's Security & Usage Practice restrictions.

You may have previously used OIT's NAS (Network Attached Storage) file storage service, known as CIFS, for data storage. Duke's Box is easier to use, although it serves slightly different use cases. For example, CIFS might be more useful for accessing large files (e.g., video files larger than 5 GB). However, CIFS doesn't enable collaboration or sharing. Depending on your needs, you may still want to use your departmental or OIT NAS. Either way, you can use both file storage services, and each service is free.

Check out this five-minute quick-start video:

50 GB of Space by Default

You are automatically provisioned 50 GB of space, but you can request more if needed. See the FAQ for details.

Individual files are limited to less than 5 GB, which means Duke's Box may be less than ideal for sharing very large files. NAS services may be more appropriate in those cases, since downloading or synchronizing large files can become inconvenient. But for many common file-sharing cases, Duke's Box is ideal, fast, and convenient.
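Since individual files top out at 5 GB, it can save time to check sizes before starting an upload. A minimal sketch (a hypothetical helper; the limit constant simply mirrors the policy above):

```python
from pathlib import Path

BOX_FILE_LIMIT = 5 * 1024 ** 3  # 5 GB per file, per the policy above

def files_too_large(folder, limit=BOX_FILE_LIMIT):
    """List files under `folder` that exceed the per-file upload limit,
    so they can be moved to NAS storage instead."""
    return sorted(
        str(p) for p in Path(folder).rglob("*")
        if p.is_file() and p.stat().st_size > limit
    )
```

Anything this returns is a candidate for NAS storage rather than Duke's Box.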

Documentation, Restrictions & Use

While you can store many types of files, there are best practices and restrictions you will want to review.  For example, Duke Medicine users are required to complete an online training module prior to account activation.

  • Security and Use, including more detail on the Terms of Service and example data types (military and space data, FERPA, HIPAA, etc.)
  • Duke’s Box Usage Practices
  • OIT’s FAQ
  • Your Duke's Box "Read Me" folder. OIT has done a great job of providing quick and convenient documentation right where you need it; see the READ ME folder after you log on to Duke's Box.

Sharing Your Data With Us

One of the many use cases for Duke's Box is a more convenient way for you to share your data with us. As you know, we welcome questions about data analysis and visualization. We know describing data can be difficult, and sharing your dataset can clarify your question. But sharing data via email consumes a lot of resources, both yours and ours. Now there's a better way: please share your data with us via Duke's Box.

Steps for Sharing Your Data with DVS Consultants


  1. Log into Duke's Box (use the blue Continue button)
  2. Open your "home" folder
  3. Put your data in the "sharing" folder
  4. Use the "invite people" button (right-hand sidebar)
    • Using a consultant email address, invite the DVS consultant to see your data. (Don't worry if you don't have our email yet. When you start your question at askData@duke.edu, an individual consultant will be back in touch.)