This essay was the second critique of the rlang-enabled tidyverse written in the StackOverflow chat for R and addressed to Hadley Wickham, who was visiting the chatroom to discuss concerns that had arisen. His response can be found within the chatroom.
The prior essay can be read here: Woes of the rlang-enabled Tidyverse.
@hadley, I’m aware that you disagree strongly here, and I respect that you hold a very different opinion from my own. Furthermore, thank you for making your tidyeval video available and for emphasizing the existing resources for learning more about tidyeval; I took the opportunity to watch the video earlier this morning. I would like to take a few moments to respond to some of your criticisms. I’ve organized my response into three parts: the organic evolution of the tidyeval design, designing for dplyr, and adoption rates. Before I begin, I do want to stress again that I highly respect the body of work you have produced; I simply have severe reservations about the approachability of the latest mental model. These opinions are solely my own.
To begin, let’s visit the stated goal of the tidyverse:
An opinionated collection of R packages designed for data science.
All packages share an underlying design philosophy, grammar and data structures.
The design philosophy and grammar came into being organically. That is, part of the current framing was based on contextual inquiry and mental user profiles drawn from your graduate appointment as an analyst in ISU’s consulting office.
However, the underlying theory was made up on the fly, as indicated by Lionel (cf. SO Chat). This is one reason we see drastic swings in tidyverse design from year to year and are forced to update existing code to keep living in the ’verse. A partial cause is that no tried-and-true document has yet been produced that explicitly describes the tidyverse design, outside of the tidy manifesto that ships with the tidyverse (cf. SO Chat).
Again, I understand it takes time to write down such a philosophy; just look at how long it took for probability to receive an axiomatic system after its introduction in the Middle Ages. Moreover, the philosophy request comes while the framework is being built even as it is being used at scale (cf. YouTube: “Building Planes in the Sky”), but I can say that stability of definitions is crucial to adopting the tidyverse.
Now, this isn’t a completely fair characterization, as in the earlier days you did not manage a team at RStudio to build out the data science platform. Going forward, however, such swings rest solely on you as head of the development team.
That being said, I think your own mental models of R have evolved considerably over time. My worry here is the context that is lost in the switch between an expert and a novice. My comment relating the “simplest possible design” to the current direction the development team is taking arises in part from my second point on dplyr’s evolution; but I’ll end this remark by saying that, without novice user design stories, groupthink at an abstracted level appears to be more prevalent in this latest version.
Let’s switch gears and talk about a specific interface example within the DSLs designed. I’ll first show code for dplyr <= 0.5.0 and dplyr >= 0.6.0 that performs a dynamic filter subset, something that many data scientists will likely need to incorporate into their code on a daily basis.
set.seed(1412)

df = data.frame(
  x1 = sample(5, 10, replace = TRUE),
  x2 = sample(5, 10, replace = TRUE)
)

library("dplyr")
Example: Old NSE

my_subset_old = function(df, col_var = "x1", obs_val = 1) {
  # Two variables
  # Create an expression as a string
  # Use the NSE interface
  df %>%
    filter_(.dots = paste(col_var, "==", obs_val))
}
Example: New tidyeval
my_subset_new = function(df, col_var = "x1", obs_val = 1) {
  # Casts... Which one to use?
  # Note: sym() is not made available by dplyr
  active_col = rlang::sym(col_var)
  active_val = obs_val

  # Check value
  df %>%
    filter((!!active_col) == active_val)
}

my_subset_new(df)
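As a quick sanity check (a sketch on my part, assuming dplyr >= 0.6.0, where filter_() is soft-deprecated but still functional), the two versions should return the same subset:

identical(my_subset_old(df), my_subset_new(df))
# should print TRUE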
In the previous iteration, it was clear that the NSE interface for filter() was being chosen, both by the function call (filter_()) and by the parameter specification (.dots). It was further apparent that variables were variables. There was no “cast” to type X, or cast to type Y and then uncast inside the function.
The new version places a significantly higher burden on users to understand what is happening, compared to the simpler mental model of “variables can vary” that existed before. So, to what end does this new approach benefit them? The benefit is largely in retaining the ASTs that many users never needed to know about previously.
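To make the “cast” concrete, here is a minimal sketch of what it actually produces (the variable names here are mine, for illustration only):

col_var    = "x1"
active_col = rlang::sym(col_var)  # "cast": plain string -> symbol, a node in the AST
class(col_var)                    # "character" -- just a string
class(active_col)                 # "name"      -- R's type for a symbol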
These users are now faced with the difficulty of visualizing and thinking in these highly abstracted states, which Piaget would argue is not ideal. It is for this reason that I stated that the new iteration of the tidyverse design has shifted its audience from casual users to sophisticated package developers.
The average analyst, student, and collaborator is not a computer scientist. Their end goal is to take data, analyze it, and communicate the results so that appropriate data-driven decisions are made. This was the selling premise of the tidyverse. The new approach forces them into a role that should be hidden from them.
To emphasize this: when did you first hear about the concept of an AST? Was it in a CS course or through independent study? I ask because a portion of my work revolves around differencing between similarly constructed ASTs. When I talk about my work, I focus the details more on the underlying model than on how I’ve collected and processed the data, as I can visibly see the pain that processing a tree-based model causes.
Lastly, regarding adoption rates in terms of downloads, I’m not sure that is an appropriate metric in this case, for two reasons:
- What is the dividing line between “mass-scale” adoption and “forced” adoption? For example, knitr and rmarkdown were established as de facto communication tools over Sweave by popular will. The overall dependency structure of the tidyverse makes it hard to quantify the percentage of downloads that are deliberate rather than pulled in as dependencies.
- Given your current stature in the #rstats community, one tweet will reach roughly 58k people. Thus, instantly publicizing a new package and/or framework will likely yield short-term adoption on popularity alone. How long does this adoption last, though? Are scripts being written generically, or formulated to solve only one problem?
At this point, I would be more interested in the number of package converts, or in the number of new packages that are maintained over the year being analyzed. I think that would give a clearer picture. The only issue presently is that CRAN is slowly starting to host too many “development” packages that belong on GitHub. But alas, I digress…