
The goal of cransays is to scrape the CRAN incoming FTP folder to find out where each submission stands in the queue, and to present the result as a dashboard.

Installation

remotes::install_github("r-hub/cransays")
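If the remotes package is not yet installed, install it from CRAN first:

install.packages("remotes")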

Example

This is a basic example, taking a snapshot of the current submission queue:

cran_incoming <- cransays::take_snapshot()
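The snapshot is a data frame describing the submissions currently in the queue. As a minimal sketch (assuming the snapshot contains a package column, and using a hypothetical package name), you can look up your own submission:

# hypothetical package name; the `package` column is an assumption
subset(cran_incoming, package == "mypackage")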

The vignette produces a handy dashboard that we update every hour via GitHub Actions.

Historical data

Hourly snapshots of the FTP server are saved in the history branch as part of our rendering workflow. The package also provides a function to load this historical data as a data.frame:

historical_data <- cransays::download_history()
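As a sketch of what you might do with the history (assuming it contains package and snapshot_time columns, which may differ, and using a hypothetical package name), you could follow a single submission across snapshots:

# hypothetical package name; column names are assumptions
my_history <- subset(historical_data, package == "mypackage")
my_history[order(my_history$snapshot_time), ]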
Related projects

  • Code originally adapted from https://github.com/edgararuiz/cran-stages. That repository features an interesting analysis of the CRAN incoming folder, including a diagram of the submission process deduced from that analysis.

  • The foghorn package, which summarizes CRAN check results in the terminal, provides a foghorn::cran_incoming() function to see where your package stands in the CRAN incoming queue (see the sketch after this list).

  • The cransubs website provides a similar dashboard but takes a completely different technical approach. Instead of downloading the queue data and rendering the dashboard as a static site, it fetches the data on the fly, as needed. It is particularly well suited if you want more frequent updates than the hourly schedule of cransays, but it does not provide snapshots of historical data.

  • If you want to increase the chance of a smooth submission, check out this collaborative list of things to know before submitting to CRAN.
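For comparison, a minimal sketch of the foghorn route mentioned above (assuming foghorn is installed; the pkg argument name is an assumption, and the package name is hypothetical):

# query the CRAN incoming queue for a specific package
foghorn::cran_incoming(pkg = "mypackage")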

Contributing

Want to report a bug or suggest a feature? Great! For more information on how to contribute, check out our contributing guide.

Please note that this R package is released with a Contributor Code of Conduct. By participating in this project, you agree to abide by its terms.