Minding Your Business: Locating Company and Industry Data

The Data and Visualization Services (DVS) Department can help you locate and extract many types of data, including data about companies and industries.  These may include data on firm location, aggregated data on the general business climate and conditions, or specific company financials.  In addition to some freely available resources, Duke subscribes to a host of databases providing business data.

Directories of Business Locations

You may need to identify local outlets and single-location companies that sell a particular product or provide a particular service.  You may also need information on small businesses (e.g., sole proprietorships) and private companies, not just publicly traded corporations, and not just the contact information for a company's headquarters.  A couple of good sources for such local data are the ReferenceUSA Businesses Database and SimplyAnalytics.

From these databases, you can extract lists of locations with geographic coordinates for plotting in GIS software, and SimplyAnalytics also lets you download data already formatted as GIS layers. Researchers often use this data when needing to associate business locations with the demographics and socio-economic characteristics of neighborhoods (e.g., is there a lack of full-service grocery stores in poor neighborhoods?).
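
For example, here is a minimal sketch in R of how such an extract might be mapped, assuming a hypothetical CSV export ("business_locations.csv") with name, longitude, and latitude columns; adjust the names to match your actual download:

    # a minimal sketch, assuming a hypothetical CSV export with
    # name, longitude, and latitude columns
    library(sf)
    library(ggplot2)

    stores <- read.csv("business_locations.csv")

    # promote the plain table to a spatial layer of points (WGS84 coordinates)
    stores_sf <- st_as_sf(stores, coords = c("longitude", "latitude"), crs = 4326)

    # quick map of the locations; in practice you might overlay census
    # tracts, demographic layers, or a basemap
    ggplot(stores_sf) + geom_sf()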

SimplyAnalytics

When searching these resources (or any business data source), it often helps to use an industry classification code to focus your search. Examples are the North American Industry Classification System (NAICS) and the older Standard Industrial Classification (SIC), which is no longer revised but still commonly used. You can determine a code using a keyword search or by drilling down through a hierarchy (for instance, NAICS code 445110 covers supermarkets and other grocery stores).

Aggregated Business and Marketing Data

Government surveys ask questions of businesses or samples of businesses. The data are aggregated by industry, location, company size, and other criteria, and typically include information on the characteristics of each industry, such as employment, wages, and productivity.

Sample Government Resources

Macroeconomic indicators relate to the overall business climate, and a good source for macro data is Global Financial Data. Its data series includes many stock exchange and bond indexes from around the world.

Private firms also collect market research data through sample surveys. These are often from a consumer perspective, for instance to help gauge demand for specific products and services. Be aware that the numbers for small geographies (e.g., Census Tracts or Block Groups) are typically imputed from small nationwide samples, based on correlations with demographic and socioeconomic indicators. Examples of resources with such data are SimplyAnalytics (with data from EASI and Simmons) and Statista (mostly national-level data).

Firm-Level Data

You may be interested in comparing numbers between companies, ranking them based on certain indicators, or gathering time-series data on a company to follow changes over time.  Always be aware of whether the company is a publicly traded corporation or is privately held, as the data sources and availability of information may vary.

For firm-level financial detail, public corporations traded in the US are required to submit data to the U.S. Securities and Exchange Commission (SEC).

SEC’s EDGAR Service

The SEC’s EDGAR service is the source of the corporate financials repackaged by commercial data providers, and you might find additional context and narrative analysis with products such as Mergent Online, Thomson One, or S&P Global NetAdvantage.  The Bloomberg Professional Service in the DVS computer lab contains a vast amount of data, news, and analysis on firms and economic conditions worldwide. You can find many more sources for firm- and industry-specific data from the library’s guide on Company and Industry Research, and of course at the Ford Library at the Fuqua School of Business.

All of these sources provide tabular download options.

For help finding any sort of business or industry data, don’t hesitate to contact us at askdata@duke.edu.

Where can I find data (or statistics) on ___________?

Helping Duke students, staff and faculty to locate data is something that we in Data and Visualization Services often do.  In this blog post I will walk you through a sample search and share some tips that I use when I search for data and statistics.

“Hi there, I am looking for motorcycle registration numbers and sales volumes by age and sex for the United States.”

BREAKING DOWN THE QUESTION:

  • There are two types of data needed: motorcycle registration data and motorcycle sales data.
  • There are two criteria that the data should be differentiated by: owner’s age and owner’s gender.
  • There is a geographic component: United States.

One criterion that is not given is time.  When a time frame isn’t provided, I assume that what is needed is the most current data available.  Something to consider is that “current” often will still be a year or more old. It takes time for data to be gathered, cleaned, and published.

***Pro-tip: When you are looking for data consider who/what/when and where – adding in those components makes it easier to construct your search.***

WHERE AND HOW DO I SEARCH?

If I do not immediately have a source in mind (and sometimes even if I do, just to hit all the bases) I will use Google and structure my search as follows: motorcycle sales and registration by age and gender united states.

***Pro-tip: You can use Google (or the search engine of your choice) to search across things we subscribe to and the open Web, but you will need to be connected via a Duke IP address.***

EVALUATING RESULTS

One of the first results returned is from a database we subscribe to called Statista. This source gives me the number of motorcycle owners by age in 2018, which answers part of the question, but does not include sales information or a gender breakdown.

Another top result is a report on Motorcycle Trends in the United States from the Bureau of Transportation Statistics (BTS). Unfortunately, the report is from 2009 and the data cited in the article are from 2003-2007.  A search of the BTS site does not yield anything more current. However, when I check the source list at the bottom of the report, there are several sources listed that I will check directly once I’ve finished looking through my search results.

***Pro-tip: Always look for sources of data in reports and figures, even if the data are old. Heading to the source can often yield more current information.***

A third result that looks promising is from a motorcycling magazine: Motorcycle Statistics in America: Demographics Change for 2018. The article reports on statistics from the 2018 owner surveys conducted by the Motorcycle Industry Council (one of the sources listed in the Bureau of Transportation Statistics report). This article provides the percentage of males and females who own motorcycles as well as the median age of motorcycle owners.  While this is pretty close to the data needed, it is worthwhile to look into the Motorcycle Industry Council directly. Experience has taught me, however, that industry data typically is neither open nor freely available.

CHECKING THE COMMON SOURCE

When I go to the Motorcycle Industry Council (MIC) Web site I find that they do, indeed, have a statistical report that comes out every year which gives a comprehensive overview of the motorcycle industry.  If you are not a member, you can buy a copy of the report, but it is expensive (nearly $500).

***Pro-tip: Always check the original source even if you anticipate that there may be a paywall – it’s a good idea to evaluate all sources to ensure that they are credible and authoritative.***

MAKING A DECISION

In this instance, I would ultimately advise the person to use the statistics reported in the article Motorcycle Statistics in America: Demographics Change for 2018. Secondary sources aren’t ideal, and can sometimes be complicated to cite, but when you can’t get access to the primary source and that primary source is the authority, it is your best bet.

***Pro-tip: If you are using a secondary source, you should name the original source in text. For example: “Data from the 2018 Motorcycle Industry Council Owner Survey (as cited by Ultimate Motorcycling, 2019).” Then include a citation to the secondary source in your reference list, formatted according to the style you are using.***

PARTING THOUGHTS

In closing, the data you want might not always be the data you use – the data may be proprietary or restricted, or may simply not exist in the form you need and are able to use.  When this happens, take a moment to think on your research question and determine whether you have the time and the resources needed to continue pursuing your question as it stands (purchasing, requesting, applying for, or collecting your own data), or whether you need to broaden or change your focus to incorporate the resources you do find in a meaningful way.

OSF@Duke: By the Numbers and Beyond

The Open Science Framework (OSF) is a data and project management platform developed by the Center for Open Science that is designed to support the entire research lifecycle. OSF has a variety of features including file management and versioning, integration with third-party tools, granular permissions and sharing capabilities, and communication functionalities. It also supports growing scholarly communication formats including preprints and preregistrations, which enable more open and reproducible research practices.

In early 2017, Duke University became a partner institution with the OSF. As a partner institution, Duke researchers can sign into the OSF using their NetID and affiliate a project with Duke, which allows it to be displayed on the Duke OSF page. After two years of supporting OSF for Institutions here at Duke, the Research Data Management (RDM) team wanted to gain a better understanding of how our community was using the tool and what they thought of it.

As of March 10, 2019, Duke has 202 users who have signed into the system using their Duke credentials (and there are possibly more users authenticating with personal email accounts). These users have created 177 projects affiliated with Duke. Forty-six of these projects are public and 132 remain private. Duke users have also registered 80 Duke-affiliated projects, 62 of which are public and 18 embargoed. A registration is a time-stamped, read-only copy of an OSF project that can be used to preregister a research design, to create registered reports for journals, or, at the conclusion of a project, to formally record the authoritative copy of materials.

But what do OSF users think of the tool and how are they using it within their workflows? A few power users shared their thoughts:

Optimizing research workflows: A number of researchers noted how the OSF has helped streamline their workflows through creating a “central place that everyone has access to.” OSF has helped “keeping track of the ‘right’ version of things” and “bypassing the situation of having different versioned documents in different places.” Additionally, the OSF has supported “documenting workflow pipelines.”

Facilitating collaboration: One of the key features of the OSF is that researchers, regardless of institutional affiliation, can contribute to a project and integrate the tools they already use. Matt Makel, Director of Research at TIP, explains how OSF supports his research – “I collaborate with many colleagues at other institutions. OSF solves the problem of negotiating which tools to use to share documents. Rather than switching platforms across (or worse, within) projects, OSF is a great hub for our productivity.”

Offering an end-to-end data management solution: Some research groups are also using OSF in multiple stages of their projects and for multiple purposes. As one researcher expressed – “My research group uses OSF for every project. That includes preregistration and archiving research materials, data, data management and analysis syntax, and supplemental materials associated with publications. We also use it to post preprints to PsyArXiv.”

The responses also suggested that OSF supports a broader shift in the norms of scholarly communication. As Elika Bergelson, Crandall Family Assistant Professor in Psychology and Neuroscience, aptly put it, “Open science is the way of the future.” Here within Duke University Libraries, we aim to continue to support these shifting norms and the growing benefits of openness through services, platforms, and training.

To learn more about how the OSF might support your research, join us on April 3 from 10-11 am for a hands-on OSF workshop. Register here: https://duke.libcal.com/event/4803444

If you have other questions about using the OSF in a project, the RDM team is available for consultations or targeted demonstrations or trainings for research teams. We also have an OSF project that can help you understand the basic features of the tool.

Contact askdata@duke.edu to learn more or request an OSF demonstration.

Telling Stories with Maps: Esri Story Maps at Duke

Developing interactive maps that incorporate text, images, video, and audio can be time-consuming and require specialized technical skills. Fortunately, at Duke we have access to Esri Story Maps, a web-based tool that helps you quickly design engaging narratives around your maps, no coding required.

We have seen a variety of creative uses of Story Maps at Duke, including:

  • Presentations to communicate research
  • Student assignments, as an alternative to a midterm or paper
  • Tours and guides of campus
  • Tutorials to explain a topic with a spatial component
  • Portfolios to showcase projects that include maps

Christine Liu, a graduate student in the Department of Art, Art History & Visual Studies, created this Story Map to illustrate a journey through Kraków under Nazi occupation.

If you are interested in building a Story Map, we recommend first spending some time exploring Esri’s curated gallery of stories to find inspiration and understand the platform’s capabilities. You can also review their collection of resources, which includes training videos, FAQs, and useful advice.

When you are ready to get started,  you can contact one of our GIS specialists (by emailing askdata@duke.edu) to schedule an appointment. We are always happy to answer questions and provide recommendations specific to your project. We also offer workshops to guide you through the process of building a basic online map, making it visually effective, and combining it with other materials to publish a Story Map.

If you already have a Story Map you want to show off, please share it with us! We are assembling a gallery of stories made at Duke and would love to feature your project.

Computational Reproducibility Pilot – Code Ocean Trial

A goal of Duke University Libraries (DUL) is to support the growing and changing needs of the Duke research community. This can take many forms. Within Data and Visualization Services, we provide learning opportunities, consulting services, and computational resources to help Duke researchers implement their data-driven research projects. Monitoring and assessing new tools and platforms also helps DUL stay in tune with changing research norms and practices. Today the increasing focus on the importance of transparency and reproducibility has resulted in the development of new tools and resources to help researchers produce and share more reproducible results. One such tool is Code Ocean.

Code Ocean is a computational reproducibility platform that employs Docker technology to execute code in the cloud. The platform does two key things: it integrates the metadata, code, data, and dependencies into a single ‘compute capsule’, ensuring that the code will run, and it does this in a single web interface that displays all inputs and results. Within the platform, it is possible to develop, edit, or download the code, run routines, and visualize, save, or download output, all from a personal computer. Users or reviewers can upload their own data and test the effects of changing parameters or modifying the code. Users can also share their data and code through the platform. Code Ocean provides a DOI for every capsule, facilitating attribution and a permanent connection to any published work.

In order to help us understand and evaluate the usefulness of the Code Ocean platform to the Duke research community, DUL will be offering trial access to the Code Ocean cloud-based computational reproducibility platform starting on October 1, 2018. To learn more about what is included in the trial access and to sign up to participate, visit the Code Ocean pilot portal page.

If you have any questions, contact askdata@duke.edu.

Expanding Support for Data Visualization in Duke Libraries

Over the last six years, Data and Visualization Services (DVS) has expanded support for data visualization in the Duke community under the expert guidance of Angela Zoss. In this period, Angela developed Duke University Libraries’ visualization program through a combination of thoughtful consultations, training, and events that expanded the community of data visualization practice at Duke while simultaneously increasing the impact of Duke research.

As of May 1st, Duke Libraries is happy to announce that Angela will expand her role in promoting data visualization in the Duke community by transitioning to a new position in the library’s Assessment and User Experience department. In her new role, Angela will support a larger effort in Duke Libraries to increase data-driven decision making. In Data and Visualization Services, Eric Monson will take the lead on research consultation and training for data visualization in the Duke community. Eric, who has been a data visualization analyst with DVS since 2015 and has a long history of supporting data visualization at Duke, will serve as DVS’ primary contact for data visualization.

DVS wishes Angela success in her new position. We look forward to continuing to work with the Duke community to expand data visualization research on campus.

Using Tableau with Qualtrics data at Duke

The end of the spring semester always brings presentations of final projects, some of which may have been in the works since the fall or even the summer. Tableau, a software application designed specifically for visualization, is a great option for projects that would benefit from interactive charts and maps.

Visualizing survey data, however, can be a bit of a pain. If your project uses Qualtrics, for example, you may be having trouble getting the data ready for visualization and analysis. Qualtrics is an extremely powerful survey tool, but the data it creates can be very complicated, and typical data analysis tools aren’t designed to handle that complexity.

Luckily, here at Duke, Tableau users can use Tableau’s Web Data Connector to pull Qualtrics data directly into Tableau! It’s so easy, you may never analyze your Qualtrics data another way again.

Process

Here are the basics. There are also instructions from Qualtrics.

In Qualtrics: Copy your survey URL

  • Go to your Duke Qualtrics account
  • Click on the survey of interest
  • Click on the Data & Analysis tab at the top
  • Click on the Export & Import button
  • Select Export Data
  • Click on Tableau
  • Copy the URL

In Tableau (Public or Desktop): Paste your survey URL

  • Under Connect, click on Web Data Connector (may be under “More…” for Tableau Public or “To a server… More…” for Tableau Desktop)
  • Paste the survey URL into the web data connector URL box and hit enter/return
  • When a login screen appears, click the tiny “Api Token Login” link, which should be below the green Log in button

In Qualtrics: Create and copy your API token

  • Go to your Duke Qualtrics account
  • Click on your account icon in the upper-right corner
  • Select Account Settings…
  • On the Account Settings page, click on the Qualtrics IDs tab
  • Under API, check for a token. If you don’t have one yet, click on Generate Token
  • Copy your token

In Tableau (Public or Desktop): Paste your API token

  • Paste in your API token and click the Login button
  • Select the data fields you would like to import

Note: there is an option to “transpose” some of the fields on import. This is useful for many of the types of visualizations you might want to create from survey data. Typically, you want to transpose fields that represent the questions asked in the survey, but you may not want to transpose demographics data or identifiers. See also the Qualtrics tips on transposing data.
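
As an aside for R users: the same API token works outside Tableau, too. Below is a rough sketch using the community-developed qualtRics package (this is not part of the official Qualtrics instructions above; the token, data center URL, and survey ID are placeholders, and your question columns may be named differently):

    # sketch of a script-based alternative using the qualtRics R package;
    # all credentials below are placeholders, not real values
    library(qualtRics)
    library(tidyr)

    qualtrics_api_credentials(api_key  = "YOUR_API_TOKEN",
                              base_url = "yourdatacenter.qualtrics.com")

    responses <- fetch_survey(surveyID = "SV_xxxxxxxxxxxxxxx")

    # the rough equivalent of Tableau's "transpose": pivot the question
    # columns (often named Q1, Q2, ...) to long form, leaving identifiers
    # such as ResponseId as-is; mixed answer types may need
    # values_transform = as.character
    long <- pivot_longer(responses, cols = starts_with("Q"),
                         names_to = "question", values_to = "answer")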

Resources

For more tips on how to use Tableau with Qualtrics data, check out the resources below:

Can’t we just make a Venn diagram?

When I’m teaching effective visualization principles, one of the most instructive processes is critiquing published visualizations and reviewing reworks done by professionals. I’ll often show examples from Cole Nussbaumer Knaflic’s blog, Storytelling with Data, and Jon Schwabish’s blog, The Why Axis. Both brilliant! (Also, check out my new favorite blog Uncharted, by Lisa Charlotte Rost, for wonderful visualization discussions and advice!)

What we don’t usually get to see is the progression of an individual visualization throughout the design process, from data through rough drafts to final product. I thought it might be instructive to walk through an example from one of my recent consults. Some of the details have been changed because the work is unpublished and the jargon doesn’t help the story.

Data full of hits and misses

Five tests data, hits and misses per patient

A researcher came to me for help with an academic paper figure. He and his collaborator were comparing five literature-accepted methods for identifying which patients might have a certain common disease from their medical records. Out of 60,000 patients, about 10% showed a match with at least one of the tests. The resulting data was a spreadsheet with a column of patient IDs and five columns of tests, with a one if a patient was identified as having the disease by that particular test and a zero if their records didn’t match for that test. As you can see in the figure, there were many inconsistencies in who seemed to have the disease across the five tests!
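
To give a concrete sense of the data shape, here is a small R sketch, with simulated stand-in data rather than the researchers’ actual numbers, that tallies patients by the exact combination of tests they hit:

    # simulated stand-in for the real spreadsheet: one row per patient,
    # a 0/1 column per test (a-e); the hit rates here are made up
    library(dplyr)
    library(tidyr)

    set.seed(42)
    hits <- tibble(patient = 1:1000,
                   a = rbinom(1000, 1, 0.05), b = rbinom(1000, 1, 0.07),
                   c = rbinom(1000, 1, 0.08), d = rbinom(1000, 1, 0.06),
                   e = rbinom(1000, 1, 0.02))

    # collapse the five 0/1 columns into a pattern string and count patients;
    # e.g. "01100" means a hit on tests b and c only
    combos <- hits %>%
      unite("pattern", a:e, sep = "") %>%
      count(pattern, sort = TRUE)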

So you want to build a snowman

Five tests overlap, original Venn diagram

The researchers wanted a visualization to represent the similarities and differences between the test results. Specifically, they wanted to make a Venn diagram, which consists of ellipsoids representing overlapping sets. They had an example they’d drawn by hand, but wanted help making it into an accurate depiction of their data. I resisted, explaining that I didn’t know of a program that would accomplish what they wanted, and that it is likely mathematically impossible to take their five-dimensional data set and represent it quantitatively as a Venn diagram in 2D. Basically, you can’t get the areas of all of the overlapping regions to be properly proportional to the number of patients that had hits on all of the combinations of the five tests. The Venn diagram works fine schematically, as a way to get across an idea of set overlap, but it would never be a data visualization that would reveal quantitative patterns in their findings. At worst, it would be a misleading distortion of their results.

Count me in

Five tests data pairwise table with colored cells

His other idea was to show the results as a table of numbers in the form of a matrix. Each of the five tests was listed across the top and the side, and the cell contents showed the quantity of patients who matched on that pair of tests. The number matching on a single test was listed on the diagonal. Those patterns can be made more visual by coloring the cells with “conditional formatting” in Excel, but the main problem with the table is that it hides a bunch of interesting data! We don’t see any of the numbers for people who hit on the various combinations of three tests, or four, or the ones that hit on all five.
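
Incidentally, that pairwise table is one matrix product away from the raw data: for a 0/1 patients-by-tests matrix X, crossprod(X), i.e. t(X) %*% X, counts the patients who hit each pair of tests, with the single-test totals on the diagonal. A base-R sketch, again with made-up data:

    # for a 0/1 patients-by-tests matrix X, crossprod(X) gives the pairwise
    # co-hit counts; the diagonal holds the single-test totals
    set.seed(42)
    X <- matrix(rbinom(1000 * 5, 1, 0.06), ncol = 5,
                dimnames = list(NULL, c("a", "b", "c", "d", "e")))
    pairwise <- crossprod(X)
    pairwise["b", "c"]   # patients positive on both tests b and c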

Five test data heatmap and number of tests hit per patient

I suggested we start exploring the hit combinations by creating a heatmap of the original data, but sorting the patients (along the horizontal axis) by how many tests they hit (listing the tests up the vertical axis). Black circles are overlaid showing the number of tests hit for any given patient.

There are too many patients / lines here to show clearly the combinations of tests, but this visualization already illuminated two things that made sense to the researchers. First, there is a lot of overlap between ALPHA (a) and BETA-based (b) tests, and between GAMMA Method (c) and Modified GAMMA (d), because these test pairs are variations of each other. Second, the overlaps indicate a way the definitions are logically embedded in each other; (a) is a subset of (b), and (b) is for the most part a subset of (c).
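
If you want to try a similar view on your own data, here is a rough ggplot2 sketch of this kind of sorted heatmap. It reuses the simulated hits table from the sketch above, and it omits the circle overlay showing the per-patient counts:

    # reuses the simulated `hits` tibble from the earlier sketch
    library(dplyr)
    library(tidyr)
    library(ggplot2)

    hits_long <- hits %>%
      mutate(n_hits = a + b + c + d + e) %>%
      filter(n_hits > 0) %>%               # keep patients with at least one hit
      arrange(n_hits) %>%
      mutate(order = row_number()) %>%     # horizontal position after sorting
      pivot_longer(a:e, names_to = "test", values_to = "hit")

    ggplot(hits_long, aes(x = order, y = test, fill = factor(hit))) +
      geom_tile() +
      scale_fill_manual(values = c("0" = "grey90", "1" = "black"), guide = "none") +
      labs(x = "patients, sorted by number of tests hit", y = NULL)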

Five tests combinations Tableau bubble plot

My other initial idea was to show the numbers of patients identified in each of the test overlaps as bubbles in Tableau. Here I continue the shorthand of labeling each test by the letters [a,b,c,d,e], ordered from ALPHA to OMEGA. The number of tests hit is both separated in space and encoded in the color (low to high = light to dark).

Add some (effective?) dimensions

I felt the weakness of this representation was that the bubbles were not spatially associated with their corresponding tests. Inspired by multi-dimensional radial layouts such as those used in the Stanford dissertation browser, I created a chart (in Adobe Illustrator) with five axes for the tests. I wanted each bubble to “feel a pull” from each of the passed tests, so it made sense to place the five-hit “abcde” bubble at the center, and each individual bubble (“_b___”, “__c__”, “____e”) right by its letter – making the radius naturally correspond to the number of tests hit. Other bubbles were placed (manually) in between their combination of axes / tests.

Five test combinations hits polar bubble plot

The researchers liked this version. It was eye-catching, and the gravitation of bubbles in the b/c quadrant vaguely illustrated the pattern of hits and known test subsets. One criticism, though, was that it was a bit confusing – it wasn’t obvious how the bubbles were placed around the circles, and it might take people too long to figure out how to read the plot. It also took up a lot of page space.

Give these sets a hand

One of the collaborators, after seeing this representation, suggested trying to use it as the basis for an Euler diagram. Like a Venn diagram, it’s a visualization used to show set inclusion and overlap, but unlike in a Venn, an Euler is drawn using arbitrary shapes surrounding existing labels or images representing the set members. I thought it was an interesting idea, but initially dismissed it as too difficult. I had already put more time than I typically spend on a consult into this visualization (our service model is to help people learn how to make their own visualizations, not produce visualizations for them). Also, I had never made an Euler diagram. While I had seen some good talks about them, I didn’t have any software on hand which would automate the process. So, I responded that the researchers should feel free to try creating curves around the sets themselves, but I wasn’t interested in pursuing it further.

Five tests polar bubbles with hand-drawn set boundaries

About two minutes after I sent the email, I began looking at the diagram and wondering if I could draw the sets! I printed out a black and white copy and started drawing lines with colored pencils, making one enclosing shape for each test [a-e]. It turned out that my manual layout resulted in fairly compact curves, except for “_bc_e”, which had ambiguous positioning anyway.

Five tests first draft Euler diagram

The curve drawing was so easy that I started an Illustrator version. I kept the circles’ areas the same (corresponding to their quantitative values) but pushed them around to make the set shapes more compact.

Ironically, I had come back almost exactly to the researchers’ original idea! The important distinction is that the bubbles keep it quantitative, with the regions only representing set overlap.

We’ve come full ellipsoid

Angela Zoss constructively pointed out that there were now too many colors, and the shades encoding the number of hits weren’t necessary. She also felt the region labels weren’t clear. Those fixes, plus some curve smoothing (Path -> Simplify in Illustrator), led me to a final version we were all very happy with!

It’s still not a super simple visualization, but both the quantitative and set overlap patterns are reasonably clear. This result was only possible, though, through trying multiple representations and getting feedback on each!

Five tests final quantitative bubble Euler diagram

If you’re interested in learning how to create visualizations like this yourself, sign up for the DVS announcements listserv, or keep an eye on our upcoming workshops list. We also have videos of many past workshops, including Angela’s Intro to Effective Data Visualization, and my Intro to Tableau, Illustrator for Charts, and Illustrator for Diagrams.

Love Data Week (Feb. 12-16)

Love Data Week is here again! Love Data Week is an international social media campaign to raise awareness and build community to engage on topics related to research data management, sharing, preservation, reuse, and library-based research data services.

This year the theme for Love Data Week is Data Stories, with a focus on four topics:

  • Stories about data
  • Telling stories with data
  • Connected conversations across different data communities
  • We are data: seeing the people behind the data

Since last year, we have some new data stories at Duke Libraries. The Duke Digital Repository now contains nearly 30 data sets that Duke researchers have shared for preservation and reuse. There are 23 Duke-affiliated projects in the Open Science Framework, a free web app developed by the Center for Open Science that facilitates good project and data management practices. And our Research Data Management team has continued to offer consultation and instruction services to a variety of researchers on campus.

We invite you to join the data story by attending data related events coming up at Duke during Love Data Week:

  • Tuesday, February 13:

    Civil and Environmental Engineering Seminar: Data, data everywhere … Making sense of observations and models across scales 
  • Wednesday, February 14:

    Introduction to the Open Science Framework*: Learn how this free, open source tool can help you manage and share your research data.

    Story Maps with ArcGIS Online: Learn how to tell your data story with an interactive map that integrates other media (photos, text, videos) and shows changes over space and time.

    Qualitative Data Analysis Workshop at the Social Science Research Institute: Learn how to transform interview scripts into analyzable data and other foundational skills in qualitative data analysis. 
  • Thursday, February 15:

    Data Dialogue at the Information Initiative at Duke: Design intuition, ethnography, and data science 
  • Friday, February 16:

    Visualization Friday Forum: Invisible Visualization: Making data visualizations accessible to the blind and other people with disabilities

*In honor of Love Data Week, chocolate will be offered at this event.

Keep an eye on additional workshops coming up for the rest of the spring semester!

Contact us at askdata@duke.edu for help with your data story, and follow the conversation at #lovedata18

All promotional Love Data 2018 materials used under a Creative Commons Attribution 4.0 International License.

Citation: Nurnberger, A., Coates, H. L., Condon, P., Koshoffer, A. E., Doty, J., Zilinski, L., … Foster, E. D. (2017). Love Data Week [image files]. Retrieved from https://osf.io/r8tht

Announcing Tidyverse workshops for Winter 2018

Coming this winter, the Data & Visualization Services Department will once again host a workshop series on the R programming language. Our spring offering is modeled on our well-received R we having fun yet‽ (Rfun) fall workshop series. The six-part series will introduce R as a language for modern data manipulation by highlighting a set of tidyverse packages that enable functional data science. We will approach R using the free RStudio IDE, with an intent to produce reproducible, literate code and a bias towards the tidyverse. We believe this open tool-set provides a context that enables and reinforces reproducible workflows, analysis, and reporting.

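For a taste of the style the series teaches, here is a minimal, hypothetical example of the kind of pipeline the workshops cover (it is not taken from the workshop materials):

    # a minimal, hypothetical tidyverse-style pipeline
    library(tidyverse)

    # summarize a built-in data set with dplyr verbs...
    cyl_summary <- mtcars %>%
      group_by(cyl) %>%
      summarise(mean_mpg = mean(mpg))

    # ...then plot the summary with ggplot2
    ggplot(cyl_summary, aes(x = factor(cyl), y = mean_mpg)) +
      geom_col() +
      labs(x = "cylinders", y = "average miles per gallon")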

January Line-up

Title                                    Date                  Registration   Past Workshop
Intro to R                               Jan 19, 1-3pm         register       Resources
R Markdown (with Dr. Çetinkaya-Rundel)   Jan 23, 9am           register
Shiny (with Dr. Çetinkaya-Rundel)        Jan 25, 9am           register
Mapping with R                           Jan 25, 1-3pm         register       Resources
Reproducibility & Git                    Jan 29, 1-3pm         register       Resources
Visualization with ggplot2               Feb 1, 9:30-11:30am   register       Resources

An official announcement with links to registration is forthcoming. Feel free to subscribe to the Rfun or DVS-Announce lists. Or look to the DVS Workshop page for official registration links as soon as they are available.

Workshop Arrangement

This workshop series is intended to be iterative and recursive. We recommend starting with the Introduction to R. Proceed through the remaining workshops in any order of interest.

Recordings and Past Workshops

We presented a similar version of this workshop series last fall and recorded each session whenever possible. You can stream past workshops and engage with the shareable data sets at your own pace (see the Past Workshop resources links above). Alternatively, all the past workshop resource links are presented in one listicle: Rfun recap.