This Task View contains information about using R and the world wide web together. The base version of R does not ship with many tools for interacting with the web, but a growing number of contributed packages fill that gap. This task view focuses on packages for obtaining web-based data and information, frameworks for building web-based R applications, and online services that can be accessed from R. A list of available packages and functions is presented below, grouped by the type of activity. The Open Data Task View provides further discussion of online data sources that can be accessed from R.
Tools for Working with the Web from R
Core Tools For HTTP Requests
There are two packages that should cover most use cases of interacting with the web from R. httr provides a user-friendly interface for executing HTTP methods (GET, POST, PUT, HEAD, DELETE, etc.) and provides support for modern web authentication protocols (OAuth 1.0, OAuth 2.0). HTTP status codes are helpful for debugging HTTP calls. httr makes this easier using, for example, stop_for_status(), which gets the HTTP status code from a response object and stops the function if the call was not successful. (See also warn_for_status().) Note that you can pass additional libcurl options to the config parameter in HTTP calls. RCurl is a lower-level package that provides a closer interface between R and the libcurl C library, but is less user-friendly. It may be useful for operations on web-based XML or to perform FTP operations. For more specific situations, the following resources may be useful:
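As a brief illustration, here is a minimal httr sketch; httpbin.org (a public request-echo service) stands in for a real endpoint:

    library(httr)

    # Make a GET request; httpbin.org simply echoes the request back
    r <- GET("https://httpbin.org/get", query = list(foo = "bar"))

    # Halt with an informative R error if the status code indicates failure
    stop_for_status(r)

    # Inspect the status and parse the response body
    status_code(r)
    content(r, as = "parsed")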
curl is another libcurl client that provides the curl() function as an SSL-compatible replacement for base R's url(), with support for HTTP/2, SSL (https, ftps), gzip, deflate, and more. For websites serving insecure HTTP (i.e., using the "http" not "https" prefix), most R functions can extract data directly, including read.table and read.csv; this also applies to functions in add-on packages such as jsonlite::fromJSON and XML::xmlParse. httpRequest is another low-level package for HTTP requests that implements the GET, POST, and multipart POST verbs.
request (GitHub) provides a high-level package that is useful for developing other API client packages. httping (GitHub) provides simplified tools to ping and time HTTP requests, around httr calls. httpcache (GitHub) provides a mechanism for caching HTTP requests.
For dynamically generated webpages (i.e., those requiring user interaction to display results), rdom (not on CRAN) uses phantomjs to access a webpage's Document Object Model (DOM).
Another higher-level package useful for web scraping is rvest (GitHub), which is designed to work with magrittr to make it easy to express common web scraping tasks.
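For example, a minimal rvest sketch (the URL and CSS selector here are illustrative assumptions):

    library(rvest)

    # Read a page and extract the text of all level-one headings
    page <- read_html("https://www.r-project.org/")
    page %>%
      html_nodes("h1") %>%   # select elements with a CSS selector
      html_text()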
Many base R tools can be used to download web content, provided that the website does not use SSL (i.e., the URL does not have the "https" prefix). download.file() is a general purpose function that can be used to download a remote file. For SSL, the download() function in downloader wraps download.file(), and takes all the same arguments.
Tabular data sets (e.g., txt, csv, etc.) can be input using read.table(), read.csv(), and friends, again assuming that the files are not hosted via SSL. An alternative is to use httr::GET (or RCurl::getURL) to first read the file into R as a character vector before parsing with read.table(text=...), or you can download the file to a local directory. rio (GitHub) provides an import() function that can read a number of common data formats directly from an https:// URL. The repmis function source_data() can load and cache plain-text data from a URL (either http or https). That package also includes source_Dropbox() for downloading/caching plain-text data from non-public Dropbox folders and source_XlsxData() for downloading/caching Excel xlsx sheets.
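For instance, a hedged sketch of reading a CSV file served over HTTPS (the URL is a placeholder):

    library(httr)

    # Fetch the file over HTTPS, then parse the response body as CSV
    r <- GET("https://example.com/data.csv")  # placeholder URL
    stop_for_status(r)
    dat <- read.csv(text = content(r, as = "text"))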
Authentication: Using web resources can require authentication, either via API keys, OAuth, a username:password combination, or other means. Additionally, some web resources require authentication credentials to be passed in the header of an HTTP call, which requires a little bit of extra work. API keys and username:password combos can be combined within a URL for a call to a web resource (API key: http://api.foo.org/?key=yourkey; user/pass: http://username:password@example.org), or can be specified via commands in RCurl or httr. OAuth is the most complicated authentication process, and can be most easily done using httr. See the 6 demos within httr, three for OAuth 1.0 (linkedin, twitter, vimeo) and three for OAuth 2.0 (facebook, GitHub, google). ROAuth is a package that provides a separate R interface to OAuth. OAuth is easier to do in httr, so start there. googleAuthR provides an OAuth 2.0 setup specifically for Google web services.
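As an illustrative sketch of the httr OAuth 2.0 workflow (the key and secret are placeholders obtained by registering an application with the provider):

    library(httr)

    # Register the application credentials (placeholders)
    app <- oauth_app("myapp",
                     key    = "YOUR_CLIENT_ID",
                     secret = "YOUR_CLIENT_SECRET")

    # Perform the OAuth 2.0 dance against httr's built-in GitHub endpoint
    token <- oauth2.0_token(oauth_endpoints("github"), app)

    # Use the token in an authenticated request
    GET("https://api.github.com/user", config(token = token))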
Parsing Structured Web Data
XML/HTML: There are two packages for working with XML: XML and xml2 (GitHub). Both support general XML (and HTML) parsing, including XPath queries. The package xml2 is less fully featured, but more user friendly with respect to memory management, classes (e.g., XML node vs. node set vs. document), and namespaces. Of the two, only XML supports de novo creation of XML nodes and documents. The XML2R (GitHub) package is a collection of convenient functions for coercing XML into data frames. An alternative to writing XPath directly is selectr, which parses CSS3 selectors and translates them to XPath 1.0 expressions, so you can use CSS selectors with the XML package instead of XPath. The selectorgadget browser extension can be used to identify page elements. RHTMLForms reads HTML documents and obtains a description of each of the forms it contains, along with the different elements and hidden fields. scrapeR provides additional tools for scraping data from HTML and XML documents. htmltab extracts structured information from HTML tables, similar to XML::readHTMLTable of the XML package, but automatically expands row and column spans in the header and body cells, and users are given more control over the identification of header and body rows which will end up in the R table.
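A small xml2 sketch showing XPath queries on an inline document:

    library(xml2)

    # Parse an XML fragment and query it with XPath
    doc <- read_xml("<books><book id='1'>R</book><book id='2'>Web</book></books>")
    nodes <- xml_find_all(doc, "//book")
    xml_text(nodes)          # "R"   "Web"
    xml_attr(nodes, "id")    # "1"   "2"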
RSS/Atom: feedeR (not on CRAN) can be used to parse RSS or Atom feeds.
swagger (not on CRAN) can be used to automatically generate functions for working with a web service API that provides documentation in Swagger.io format.
Tools for Working with URLs
The httr::parse_url() function can be used to extract portions of a URL. The RCurl::curlEscape() and utils::URLencode() functions can be used to encode character strings for use in URLs, and utils::URLdecode() decodes back to the original strings. urltools (GitHub) can also handle URL encoding, decoding, parsing, and parameter extraction.
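For example:

    # Percent-encode a string for safe use in a URL, then decode it
    u <- utils::URLencode("name=John Smith&lang=R", reserved = TRUE)
    u                       # "name%3DJohn%20Smith%26lang%3DR"
    utils::URLdecode(u)

    # Split a URL into its components with httr
    httr::parse_url("https://example.org/path?x=1&y=2")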
iptools can facilitate working with IPv4 addresses, including for use in geolocation.
urlshorteneR (GitHub) offers URL expansion and analysis for Bit.ly, Goo.gl, and is.gd. longurl uses the longurl.org API to provide similar functionality.
gdns (not on CRAN) provides access to Google's secure HTTP-based DNS resolution service.
Tools for Working with Scraped Webpage Contents
Several packages can be used for parsing HTML documents. boilerpipeR provides generic extraction of main text content from HTML files, removing ads, sidebars, and headers using the boilerpipe Java library. RTidyHTML interfaces to the libtidy library to correct common errors in HTML documents that are not well-formed. W3CMarkupValidator provides an R interface to the W3C Markup Validation Services for validating HTML documents.
For XML documents, the XMLSchema package provides facilities in R for reading XML schema documents and processing them to create definitions for R classes and functions for converting XML nodes to instances of those classes. It provides the framework for meta-computing with XML schema in R. xslt provides an interface to xmlwrapp, an XML processing library with an XSLT engine for transforming XML data using a transform stylesheet. (It can be seen as a modern replacement for Sxslt, which is an interface to Dan Veillard's libxslt translator, and the SXalan package.) This may be useful for web scraping, as well as for transforming XML markup into another human- or machine-readable format (e.g., HTML, JSON, plain text, etc.). SSOAP provides a client-side SOAP (Simple Object Access Protocol) mechanism, aiming to offer a high-level interface for invoking SOAP methods provided by a SOAP server. XMLRPC provides an implementation of XML-RPC, a relatively simple remote procedure call mechanism that uses HTTP and XML. This can be used for communicating between processes on a single machine or for accessing web services from within R.
Rcompression (not on CRAN): Interface to zlib and bzip2 libraries for performing in-memory compression and decompression in R. This is useful when receiving or sending contents to remote servers, e.g. Web services, HTTP requests via RCurl.
tm.plugin.webmining: Extensible text retrieval framework for news feeds in XML (RSS, ATOM) and JSON formats. Currently, the following feeds are implemented: Google Blog Search, Google Finance, Google News, NYTimes Article Search, Reuters News Feed, Yahoo Finance and Yahoo Inplay.
webshot uses PhantomJS to provide screenshots of web pages without a browser. It can be useful for testing websites (such as Shiny applications).
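A minimal usage sketch (PhantomJS must be installed first, e.g. via webshot::install_phantomjs()):

    library(webshot)

    # Capture a screenshot of a web page to a PNG file
    webshot("https://www.r-project.org/", file = "r-project.png")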
Other Useful Packages and Functions
Email: mailR is an interface to Apache Commons Email for sending emails from within R. sendmailR provides a simple SMTP client. gmailr provides access to Google's Gmail RESTful API.
Web and Server Frameworks
DeployR Open is a server-based framework for integrating R into other applications via Web Services.
The shiny package makes it easy to build interactive web applications with R.
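A minimal shiny app, as a sketch:

    library(shiny)

    # UI: a slider; server: a histogram that reacts to the slider
    ui <- fluidPage(
      sliderInput("n", "Number of observations:", min = 10, max = 500, value = 100),
      plotOutput("hist")
    )
    server <- function(input, output) {
      output$hist <- renderPlot(hist(rnorm(input$n)))
    }
    shinyApp(ui, server)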
Other web frameworks include: fiery (GitHub), which is meant to be more flexible but less easy to use than shiny; prairie (not on CRAN), a lightweight web framework that uses magrittr-style syntax and is modeled after expressjs; rcloud (not on CRAN), which provides an iPython notebook-style web-based R interface; and Rook, which contains the specification and convenience software for building and running Rook applications.
The opencpu framework for embedded statistical computation and reproducible research exposes a web API interfacing R, LaTeX, and Pandoc. This API is used, for example, to integrate statistical functionality into systems, share and execute scripts or reports on centralized servers, and build R-based apps.
Several general purpose server/client frameworks for R exist. Rserve and RSclient provide server and client functionality for TCP/IP or local socket interfaces. httpuv provides low-level socket and protocol support for handling HTTP and WebSocket requests directly within R; a related package, which httpuv arguably supersedes, is websockets. servr provides a simple HTTP server to serve files under a given directory based on httpuv.
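As a hedged sketch of httpuv's low-level interface, a server that answers every request with plain text:

    library(httpuv)

    # The app is a list with a 'call' function mapping a request to a response
    app <- list(
      call = function(req) {
        list(
          status  = 200L,
          headers = list("Content-Type" = "text/plain"),
          body    = "Hello from R\n"
        )
      }
    )
    server <- startServer("127.0.0.1", 8080, app)
    # stopServer(server)  # shut the server down when finished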
Several packages offer functionality for turning R code into a web API. jug is a simple API-builder web framework, built around httpuv. FastRWeb provides some basic infrastructure for this. plumber allows you to create a REST API by decorating existing R source code.
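For instance, a plumber sketch: special comments turn an ordinary R function into a REST endpoint (the file name and route are illustrative):

    # --- save as api.R ---------------------------------------------
    #* Echo a message back to the caller
    #* @get /echo
    function(msg = "") {
      list(message = paste("The message is:", msg))
    }

    # --- then, in an R session -------------------------------------
    # r <- plumber::plumb("api.R")
    # r$run(port = 8000)   # GET http://localhost:8000/echo?msg=hello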
The WADL package (not on CRAN) provides tools to process Web Application Description Language (WADL) documents and to programmatically generate R functions to interface to the REST methods described in those WADL documents.
The RDCOMServer package (not on CRAN) provides a mechanism to export R objects as (D)COM objects in Windows. It can be used along with the RDCOMClient package, which provides user-level access from R to other COM servers.
rapporter.net provides an online environment (SaaS) to host and run rapport statistical report templates in the cloud.
CGIwithR (not on CRAN) allows one to use R scripts as CGI programs for generating dynamic Web content. HTML forms and other mechanisms to submit dynamic requests can be used to provide input to R scripts via the Web to create content that is determined within that R script.
Cloud Computing and Storage
Amazon Web Services is a popular, proprietary cloud service offering a suite of computing, storage, and infrastructure tools. aws.signature provides functionality for generating AWS API request signatures.
Simple Storage Service (S3) is a commercial storage service that allows one to store content and retrieve it from any machine connected to the Internet. RAmazonS3 and s3mpi (not on CRAN) provide basic infrastructure for communicating with S3. AWS.tools (GitHub) interacts with S3 and EC2 using the AWS command line interface (an external system dependency); the CRAN version is archived. awsConnect (not on CRAN) is another package using the AWS Command Line Interface to control EC2 and S3, which is only available for Linux and Mac OS.
Elastic Cloud Compute (EC2) is a cloud computing service. AWS.tools and awsConnect (not on CRAN) both use the AWS command line interface to control EC2. segue (not on CRAN) is another package for managing EC2 instances and S3 storage, which includes a parallel version of lapply() for the Elastic Map Reduce (EMR) engine called emrlapply(). It uses Hadoop Streaming on Amazon's EMR in order to get simple parallel computation.
DBREST: RAmazonDBREST provides an interface to Amazon's Simple DB API.
The cloudyr project, which is currently under active development on GitHub, aims to provide a unified interface to the full Amazon Web Services suite without the need for external system dependencies.
Cloud Storage: googleCloudStorageR interfaces with Google Cloud Storage. boxr (GitHub) is a lightweight, high-level interface for the box.com API. rDrop2 (GitHub; not on CRAN) is a Dropbox interface that provides access to a full suite of file operations, including dir/copy/move/delete operations, account information (including quotas) and the ability to upload and download files from any Dropbox account. backblazer (GitHub) provides access to the Backblaze B2 storage API.
Docker: analogsea is a general purpose client for the Digital Ocean v2 API. In addition, the package includes functions to install various R tools, including base R, RStudio Server, and more. It also provides an evolving interface for interacting with Docker on your remote droplets.
rrefine (not on CRAN) provides a client for the OpenRefine (formerly Google Refine) data cleaning service.
Document and Code Sharing
Code Sharing: gistr (GitHub) works with GitHub gists (gist.github.com) from R, allowing you to create new gists, update gists with new files, rename files, delete files, get and delete gists, star and un-star gists, fork gists, open a gist in your default browser, get embed code for a gist, list gist commits, and get rate limit information when authenticated. git2r provides bindings to the git version control system and rgithub (not on CRAN) provides access to the GitHub.com API, both of which can facilitate code or data sharing via GitHub. gitlabr is a GitLab-specific client.
Google Drive/Google Documents: driver (not on CRAN) is a thin client for the Google Drive API. The RGoogleDocs package is an example of using the RCurl and XML packages to quickly develop an interface to the Google Documents API. RGoogleStorage provides programmatic access to the Google Storage API, allowing R users to access and store data on Google's storage. Users can upload and download content, create, list, and delete folders/buckets, and set access control permissions on objects and buckets.
Google Sheets: googlesheets (GitHub) can access private or public Google Sheets by title, key, or URL; it can extract or edit data, and create, delete, rename, copy, upload, or download spreadsheets and worksheets. gsheet (GitHub) can download Google Sheets using just the sharing link; spreadsheets can be downloaded as a data frame, or as plain text to parse manually.
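A hedged googlesheets sketch (assumes a sheet titled "my_sheet" is visible to the authenticated account; gsheet needs only a sharing link):

    library(googlesheets)

    # Register a sheet by its title, then read the first worksheet
    ss  <- gs_title("my_sheet")   # "my_sheet" is a placeholder title
    dat <- gs_read(ss, ws = 1)    # returns a data frame

    # Or, with gsheet, download via a sharing link (placeholder URL)
    # dat <- gsheet::gsheet2tbl("https://docs.google.com/spreadsheets/d/...")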
imguR (GitHub) is a package to share plots using the image hosting service Imgur.com. knitr also has a function imgur_upload() to upload images from literate programming documents.
rscribd (not on CRAN): API client for publishing documents to Scribd.
Data Analysis and Processing Services
Crowdsourcing: Amazon Mechanical Turk is a paid crowdsourcing platform that can be used to semi-automate tasks that are not easily automated. MTurkR (GitHub) provides access to the Amazon Mechanical Turk Requester API. microworkers (not on CRAN) can distribute tasks and retrieve results for the Microworkers.com platform.
Geolocation/Geocoding: Several packages connect to geolocation/geocoding services. rgeolocate (GitHub) offers several online and offline tools. rydn (not on CRAN) is an interface to the Yahoo Developers network geolocation APIs, and ipapi (GitHub) can be used to geolocate IPv4/6 addresses and/or domain names using the ip-api.com API. threewords connects to the What3Words API, which represents every 3-meter by 3-meter square on earth as a three-word phrase. opencage (GitHub) provides access to the OpenCage geocoding service. geoparser (GitHub) interfaces with the Geoparser.io web service to identify place names from plain text. nominatim (not on CRAN) connects to the OpenStreetMap Nominatim API for reverse geocoding. PostcodesioR (not on CRAN) provides post code lookup and geocoding for the United Kingdom.
Image Processing: RoogleVision (not on CRAN) links to the Google Cloud Vision image recognition service.
Twitter: twitteR provides an interface to the Twitter web API. RTwitterAPI (not on CRAN) and rtweet (not on CRAN) are other Twitter clients. twitterreport (not on CRAN) focuses on report generation based on Twitter data. streamR provides a series of functions that allow users to access Twitter's filter, sample, and user streams, and to parse the output into data frames. OAuth authentication is supported. tweet2r is an alternative implementation geared toward SQLite and postGIS databases. graphTweets produces a network graph from a data.frame of tweets. tweetscores (not on CRAN) implements a political ideology scaling measure for specified Twitter users.
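A brief twitteR sketch, assuming you have registered a Twitter application and hold the four credentials it issues (all placeholders below):

    library(twitteR)

    # Authenticate with the application credentials
    setup_twitter_oauth(consumer_key    = "KEY",
                        consumer_secret = "SECRET",
                        access_token    = "TOKEN",
                        access_secret   = "TOKEN_SECRET")

    # Search recent tweets and convert the results to a data frame
    tweets <- searchTwitter("#rstats", n = 25)
    df <- twListToDF(tweets)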
Web Analytics Services
Google Trends: gtrendsR offers functions to perform and display Google Trends queries. Another GitHub package (rGtrends) is now deprecated, but supported a previous version of Google Trends and may still be useful for developers. RGoogleTrends provides another alternative.
Online Advertising: fbRads can manage Facebook ads via the Facebook Marketing API. RDoubleClick (not on CRAN) can retrieve data from Google's DoubleClick Campaign Manager Reporting API. RSmartlyIO (GitHub) loads Facebook and Instagram advertising data provided by Smartly.io.
Other services: RSiteCatalyst has functions for accessing the Adobe Analytics (Omniture SiteCatalyst) Reporting API.
Push Notifications: RPushbullet provides an easy-to-use interface for the Pushbullet service, which provides fast and efficient notifications between computers, phones, and tablets. pushoverr (GitHub) can send push notifications to mobile devices (iOS and Android) and desktops using Pushover.
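For example, a minimal RPushbullet sketch (assumes your API key is configured in ~/.rpushbullet.json):

    library(RPushbullet)

    # Send a simple note to all registered devices
    pbPost(type  = "note",
           title = "Job finished",
           body  = "The model run completed successfully.")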
Reference/bibliography/citation management: RefManageR imports and manages BibTeX and BibLaTeX references. RMendeley is an implementation of the Mendeley API in R; it has been archived on CRAN until it is updated for the new Mendeley API. rmetadata (not on CRAN) can get scholarly metadata from around the web. rorcid (GitHub) is a programmatic interface to the Orcid.org API, which can be used for identifying scientific authors and their publications (e.g., by DOI). rplos is a programmatic interface to the Web Service methods provided by the Public Library of Science journals for search. rpubmed (not on CRAN) provides tools for extracting and processing Pubmed and Pubmed Central records, and europepmc (GitHub) connects to the Europe PubMed Central service. scholar provides functions to extract citation data from Google Scholar; convenience functions are also provided for comparing multiple scholars and predicting future h-index values. pubmed.mineR is a package for text mining of PubMed abstracts that supports fetching text and XML from PubMed. rdatacite (GitHub) connects to DataCite. oai (GitHub) and OAIHarvester harvest metadata using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) standard. JSTORr (not on CRAN) provides simple text mining of journal articles from JSTOR's Data for Research service. aRxiv (GitHub) is a client for the arXiv API, a repository of electronic preprints for computer science, mathematics, physics, quantitative biology, quantitative finance, and statistics.
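As one small example from this group, an aRxiv query (the search string is illustrative):

    library(aRxiv)

    # Search arXiv preprints by title keyword and subject category
    res <- arxiv_search(query = 'ti:"web scraping" AND cat:stat.CO', limit = 5)
    res$title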
GFusionTables (not on CRAN) is an interface to Google Fusion Tables, a data management system in the cloud. This package provides functions to browse the Fusion Tables catalog, retrieve data from Fusion Tables storage into R, and upload data from R to Fusion Tables.
jSonarR: Enables users to access MongoDB by running queries and returning their results in data.frames. jSonarR uses data processing and conversion capabilities in the jSonar Analytics Platform and the JSON Studio Gateway, to convert JSON to a tabular format.
Rbitcoin allows both public and private API calls to interact with Bitcoin. rbitcoinchartsapi is a package for the BitCoinCharts.com API. From their website: "Bitcoincharts provides financial and technical data related to the Bitcoin network and this data can be accessed via a JSON application programming interface (API)."
rerddap (GitHub; not on CRAN): A generic R client to interact with any ERDDAP instance, which is a special case of OPeNDAP (https://en.wikipedia.org/wiki/OPeNDAP), the Open-source Project for a Network Data Access Protocol. Users can swap out the base URL to use any ERDDAP instance.
ripplerestr provides an interface to the Ripple protocol for making financial transactions.