# Iteration {#sec-iteration}
```{r}
#| echo: false
source("_common.R")
```
## Introduction
In this chapter, you'll learn tools for iteration, repeatedly performing the same action on different objects.
Iteration in R generally tends to look rather different from other programming languages because so much of it is implicit and we get it for free.
For example, if you want to double a numeric vector `x` in R, you can just write `2 * x`.
In most other languages, you'd need to explicitly double each element of `x` using some sort of for loop.
This book has already given you a small but powerful number of tools that perform the same action for multiple "things":
- `facet_wrap()` and `facet_grid()` draw a plot for each subset.
- `group_by()` plus `summarize()` computes summary statistics for each subset.
- `unnest_wider()` and `unnest_longer()` create new rows and columns for each element of a list-column.
Now it's time to learn some more general tools, often called **functional programming** tools because they are built around functions that take other functions as inputs.
Learning functional programming can easily veer into the abstract, but in this chapter we'll keep things concrete by focusing on three common tasks: modifying multiple columns, reading multiple files, and saving multiple objects.
### Prerequisites
In this chapter, we'll focus on tools provided by dplyr and purrr, both core members of the tidyverse.
You've seen dplyr before, but [purrr](http://purrr.tidyverse.org/) is new.
We're just going to use a couple of purrr functions in this chapter, but it's a great package to explore as you improve your programming skills.
```{r}
#| label: setup
#| message: false
library(tidyverse)
```
## Modifying multiple columns {#sec-across}
Imagine you have this simple tibble and you want to count the number of observations and compute the median of every column.
```{r}
df <- tibble(
  a = rnorm(10),
  b = rnorm(10),
  c = rnorm(10),
  d = rnorm(10)
)
```
You could do it with copy-and-paste:
```{r}
df |> summarize(
  n = n(),
  a = median(a),
  b = median(b),
  c = median(c),
  d = median(d),
)
```
That breaks our rule of thumb to never copy and paste more than twice, and you can imagine that this will get very tedious if you have tens or even hundreds of columns.
Instead, you can use `across()`:
```{r}
df |> summarize(
  n = n(),
  across(a:d, median),
)
```
`across()` has three particularly important arguments, which we'll discuss in detail in the following sections.
You'll use the first two every time you use `across()`: the first argument, `.cols`, specifies which columns you want to iterate over, and the second argument, `.fns`, specifies what to do with each column.
You can use the `.names` argument when you need additional control over the names of output columns, which is particularly important when you use `across()` with `mutate()`.
We'll also discuss two important variations, `if_any()` and `if_all()`, which work with `filter()`.
### Selecting columns with `.cols`
The first argument to `across()`, `.cols`, selects the columns to transform.
This uses the same specifications as `select()` (@sec-select), so you can use functions like `starts_with()` and `ends_with()` to select columns based on their name.
There are two additional selection techniques that are particularly useful for `across()`: `everything()` and `where()`.
`everything()` is straightforward: it selects every (non-grouping) column:
```{r}
df <- tibble(
  grp = sample(2, 10, replace = TRUE),
  a = rnorm(10),
  b = rnorm(10),
  c = rnorm(10),
  d = rnorm(10)
)

df |>
  group_by(grp) |>
  summarize(across(everything(), median))
```
Note grouping columns (`grp` here) are not included in `across()`, because they're automatically preserved by `summarize()`.
`where()` allows you to select columns based on their type:
- `where(is.numeric)` selects all numeric columns.
- `where(is.character)` selects all string columns.
- `where(is.Date)` selects all date columns.
- `where(is.POSIXct)` selects all date-time columns.
- `where(is.logical)` selects all logical columns.
Just like other selectors, you can combine these with Boolean algebra.
For example, `!where(is.numeric)` selects all non-numeric columns, and `starts_with("a") & where(is.logical)` selects all logical columns whose name starts with "a".
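To make this concrete, here's a small sketch using a hypothetical tibble, `df_mixed`, that mixes column types (it's not part of the data used elsewhere in this chapter):

```{r}
#| eval: false
# hypothetical tibble mixing logical and character columns
df_mixed <- tibble(
  a_flag = c(TRUE, FALSE, TRUE),
  b_flag = c(FALSE, FALSE, TRUE),
  name = c("x", "y", "z")
)

# sum the TRUEs in every logical column whose name starts with "a"
df_mixed |>
  summarize(across(starts_with("a") & where(is.logical), sum))
```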
### Calling a single function
The second argument to `across()` defines how each column will be transformed.
In simple cases, as above, this will be a single existing function.
This is a pretty special feature of R: we're passing one function (`median`, `mean`, `str_flatten`, ...) to another function (`across`).
This is one of the features that makes R a functional programming language.
It's important to note that we're passing this function to `across()`, so `across()` can call it; we're not calling it ourselves.
That means the function name should never be followed by `()`.
If you forget, you'll get an error:
```{r}
#| error: true
df |>
  group_by(grp) |>
  summarize(across(everything(), median()))
```
This error arises because you're calling the function with no input, e.g.:
```{r}
#| error: true
median()
```
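Because functions are ordinary values in R, you can also store one in a variable first and pass that variable along; a quick sketch (the name `fun` is just illustrative):

```{r}
#| eval: false
# `median` without parentheses is a value like any other
fun <- median

df |>
  group_by(grp) |>
  summarize(across(everything(), fun))
```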
### Calling multiple functions
In more complex cases, you might want to supply additional arguments or perform multiple transformations.
Let's motivate this problem with a simple example: what happens if we have some missing values in our data?
`median()` propagates those missing values, giving us a suboptimal output:
```{r}
rnorm_na <- function(n, n_na, mean = 0, sd = 1) {
  sample(c(rnorm(n - n_na, mean = mean, sd = sd), rep(NA, n_na)))
}

df_miss <- tibble(
  a = rnorm_na(5, 1),
  b = rnorm_na(5, 1),
  c = rnorm_na(5, 2),
  d = rnorm(5)
)

df_miss |>
  summarize(
    across(a:d, median),
    n = n()
  )
```
It would be nice if we could pass along `na.rm = TRUE` to `median()` to remove these missing values.
To do so, instead of calling `median()` directly, we need to create a new function that calls `median()` with the desired arguments:
```{r}
df_miss |>
  summarize(
    across(a:d, function(x) median(x, na.rm = TRUE)),
    n = n()
  )
```
This is a little verbose, so R comes with a handy shortcut: for this sort of throwaway, or **anonymous**[^iteration-1], function you can replace `function` with `\`[^iteration-2]:
[^iteration-1]: Anonymous, because we never explicitly gave it a name with `<-`.
Another term programmers use for this is "lambda function".
[^iteration-2]: In older code you might see syntax that looks like `~ .x + 1`.
This is another way to write anonymous functions but it only works inside tidyverse functions and always uses the variable name `.x`.
We now recommend the base syntax, `\(x) x + 1`.
```{r}
#| results: false
df_miss |>
  summarize(
    across(a:d, \(x) median(x, na.rm = TRUE)),
    n = n()
  )
```
In either case, `across()` effectively expands to the following code:
```{r}
#| eval: false
df_miss |>
  summarize(
    a = median(a, na.rm = TRUE),
    b = median(b, na.rm = TRUE),
    c = median(c, na.rm = TRUE),
    d = median(d, na.rm = TRUE),
    n = n()
  )
```
When we remove the missing values for the `median()` computation, it would be nice to know just how many values were removed.
We can find that out by supplying two functions to `across()`: one to compute the median and the other to count the missing values.
You supply multiple functions by using a named list to `.fns`:
```{r}
df_miss |>
  summarize(
    across(a:d, list(
      median = \(x) median(x, na.rm = TRUE),
      n_miss = \(x) sum(is.na(x))
    )),
    n = n()
  )
```
If you look carefully, you might intuit that the columns are named using a glue specification (@sec-glue) like `{.col}_{.fn}` where `.col` is the name of the original column and `.fn` is the name of the function.
That's not a coincidence!
As you'll learn in the next section, you can use the `.names` argument to supply your own glue spec.
### Column names
The result of `across()` is named according to the specification provided in the `.names` argument.
We could specify our own if we wanted the name of the function to come first[^iteration-3]:
[^iteration-3]: You can't currently change the order of the columns, but you could reorder them after the fact using `relocate()` or similar.
```{r}
df_miss |>
  summarize(
    across(
      a:d,
      list(
        median = \(x) median(x, na.rm = TRUE),
        n_miss = \(x) sum(is.na(x))
      ),
      .names = "{.fn}_{.col}"
    ),
    n = n(),
  )
```
The `.names` argument is particularly important when you use `across()` with `mutate()`.
By default, the output of `across()` is given the same names as the inputs.
This means that `across()` inside of `mutate()` will replace existing columns.
For example, here we use `coalesce()` to replace `NA`s with `0`:
```{r}
df_miss |>
  mutate(
    across(a:d, \(x) coalesce(x, 0))
  )
```
If you'd like to instead create new columns, you can use the `.names` argument to give the output new names:
```{r}
df_miss |>
  mutate(
    across(a:d, \(x) coalesce(x, 0), .names = "{.col}_na_zero")
  )
```
### Filtering
`across()` is a great match for `summarize()` and `mutate()` but it's more awkward to use with `filter()`, because you usually combine multiple conditions with either `|` or `&`.
It's clear that `across()` can help to create multiple logical columns, but then what?
So dplyr provides two variants of `across()` called `if_any()` and `if_all()`:
```{r}
# same as df_miss |> filter(is.na(a) | is.na(b) | is.na(c) | is.na(d))
df_miss |> filter(if_any(a:d, is.na))
# same as df_miss |> filter(is.na(a) & is.na(b) & is.na(c) & is.na(d))
df_miss |> filter(if_all(a:d, is.na))
```
### `across()` in functions
`across()` is particularly useful to program with because it allows you to operate on multiple columns.
For example, [Jacob Scott](https://twitter.com/_wurli/status/1571836746899283969) uses this little helper which wraps a bunch of lubridate functions to expand all date columns into year, month, and day columns:
```{r}
expand_dates <- function(df) {
  df |>
    mutate(
      across(where(is.Date), list(year = year, month = month, day = mday))
    )
}

df_date <- tibble(
  name = c("Amy", "Bob"),
  date = ymd(c("2009-08-03", "2010-01-16"))
)

df_date |>
  expand_dates()
```
`across()` also makes it easy to supply multiple columns in a single argument because the first argument uses tidy-select; you just need to remember to embrace that argument, as we discussed in @sec-embracing.
For example, this function will compute the means of numeric columns by default.
But by supplying the second argument you can choose to summarize just selected columns:
```{r}
summarize_means <- function(df, summary_vars = where(is.numeric)) {
  df |>
    summarize(
      across({{ summary_vars }}, \(x) mean(x, na.rm = TRUE)),
      n = n(),
      .groups = "drop"
    )
}

diamonds |>
  group_by(cut) |>
  summarize_means()

diamonds |>
  group_by(cut) |>
  summarize_means(c(carat, x:z))
```
### Vs. `pivot_longer()`
Before we go on, it's worth pointing out an interesting connection between `across()` and `pivot_longer()` (@sec-pivoting).
In many cases, you can perform the same calculations by first pivoting the data and then performing the operations by group rather than by column.
For example, take this multi-function summary:
```{r}
df |>
  summarize(across(a:d, list(median = median, mean = mean)))
```
We could compute the same values by pivoting longer and then summarizing:
```{r}
long <- df |>
  pivot_longer(a:d) |>
  group_by(name) |>
  summarize(
    median = median(value),
    mean = mean(value)
  )
long
```
And if you wanted the same structure as `across()` you could pivot again:
```{r}
long |>
  pivot_wider(
    names_from = name,
    values_from = c(median, mean),
    names_vary = "slowest",
    names_glue = "{name}_{.value}"
  )
```
This is a useful technique to know about because sometimes you'll hit a problem that's not currently possible to solve with `across()`: when you have groups of columns that you want to compute with simultaneously.
For example, imagine that our data frame contains both values and weights and we want to compute a weighted mean:
```{r}
df_paired <- tibble(
  a_val = rnorm(10),
  a_wts = runif(10),
  b_val = rnorm(10),
  b_wts = runif(10),
  c_val = rnorm(10),
  c_wts = runif(10),
  d_val = rnorm(10),
  d_wts = runif(10)
)
```
There's currently no way to do this with `across()`[^iteration-4], but it's relatively straightforward with `pivot_longer()`:
[^iteration-4]: Maybe there will be one day, but currently we don't see how.
```{r}
df_long <- df_paired |>
  pivot_longer(
    everything(),
    names_to = c("group", ".value"),
    names_sep = "_"
  )
df_long

df_long |>
  group_by(group) |>
  summarize(mean = weighted.mean(val, wts))
```
If needed, you could `pivot_wider()` this back to the original form.
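Here's a sketch of that reverse pivot, assuming you first add a temporary row id (the helper column `row` below is ours, not part of the data) so that `pivot_wider()` knows which values belong in the same row:

```{r}
#| eval: false
df_long |>
  mutate(row = row_number(), .by = group) |>
  pivot_wider(
    id_cols = row,
    names_from = group,
    values_from = c(val, wts),
    names_vary = "slowest",
    names_glue = "{group}_{.value}"
  ) |>
  select(-row)
```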
### Exercises
1.  Practice your `across()` skills by:

    1.  Computing the number of unique values in each column of `palmerpenguins::penguins`.

    2.  Computing the mean of every column in `mtcars`.

    3.  Grouping `diamonds` by `cut`, `clarity`, and `color` then counting the number of observations and computing the mean of each numeric column.

2.  What happens if you use a list of functions in `across()`, but don't name them?
    How is the output named?

3.  Adjust `expand_dates()` to automatically remove the date columns after they've been expanded.
    Do you need to embrace any arguments?

4.  Explain what each step of the pipeline in this function does.
    What special feature of `where()` are we taking advantage of?

    ```{r}
    #| results: false

    show_missing <- function(df, group_vars, summary_vars = everything()) {
      df |>
        group_by(pick({{ group_vars }})) |>
        summarize(
          across({{ summary_vars }}, \(x) sum(is.na(x))),
          .groups = "drop"
        ) |>
        select(where(\(x) any(x > 0)))
    }

    nycflights13::flights |> show_missing(c(year, month, day))
    ```
## Reading multiple files
In the previous section, you learned how to use `dplyr::across()` to repeat a transformation on multiple columns.
In this section, you'll learn how to use `purrr::map()` to do something to every file in a directory.
Let's start with a little motivation: imagine you have a directory full of excel spreadsheets[^iteration-5] you want to read.
You could do it with copy and paste:
[^iteration-5]: If you instead had a directory of csv files with the same format, you can use the technique from @sec-readr-directory.
```{r}
#| eval: false
data2019 <- readxl::read_excel("data/y2019.xlsx")
data2020 <- readxl::read_excel("data/y2020.xlsx")
data2021 <- readxl::read_excel("data/y2021.xlsx")
data2022 <- readxl::read_excel("data/y2022.xlsx")
```
And then use `dplyr::bind_rows()` to combine them all together:
```{r}
#| eval: false
data <- bind_rows(data2019, data2020, data2021, data2022)
```
You can imagine that this would get tedious quickly, especially if you had hundreds of files, not just four.
The following sections show you how to automate this sort of task.
There are three basic steps: use `list.files()` to list all the files in a directory, then use `purrr::map()` to read each of them into a list, then use `purrr::list_rbind()` to combine them into a single data frame.
We'll then discuss how you can handle situations of increasing heterogeneity, where you can't do exactly the same thing to every file.
### Listing files in a directory
As the name suggests, `list.files()` lists the files in a directory.
You'll almost always use three arguments:
- The first argument, `path`, is the directory to look in.
- `pattern` is a regular expression used to filter the file names.
The most common pattern is something like `[.]xlsx$` or `[.]csv$` to find all files with a specified extension.
- `full.names` determines whether or not the directory name should be included in the output.
You almost always want this to be `TRUE`.
To make our motivating example concrete, this book contains a folder with 12 excel spreadsheets containing data from the gapminder package.
Each file contains one year's worth of data for 142 countries.
We can list them all with the appropriate call to `list.files()`:
```{r}
paths <- list.files("data/gapminder", pattern = "[.]xlsx$", full.names = TRUE)
paths
```
### Lists
Now that we have these 12 paths, we could call `read_excel()` 12 times to get 12 data frames:
```{r}
#| eval: false
gapminder_1952 <- readxl::read_excel("data/gapminder/1952.xlsx")
gapminder_1957 <- readxl::read_excel("data/gapminder/1957.xlsx")
gapminder_1962 <- readxl::read_excel("data/gapminder/1962.xlsx")
...
gapminder_2007 <- readxl::read_excel("data/gapminder/2007.xlsx")
```
But putting each sheet into its own variable is going to make it hard to work with them a few steps down the road.
Instead, they'll be easier to work with if we put them into a single object.
A list is the perfect tool for this job:
```{r}
#| eval: false
files <- list(
  readxl::read_excel("data/gapminder/1952.xlsx"),
  readxl::read_excel("data/gapminder/1957.xlsx"),
  readxl::read_excel("data/gapminder/1962.xlsx"),
  ...,
  readxl::read_excel("data/gapminder/2007.xlsx")
)
```
```{r}
#| include: false
files <- map(paths, readxl::read_excel)
```
Now that you have these data frames in a list, how do you get one out?
You can use `files[[i]]` to extract the i<sup>th</sup> element:
```{r}
files[[3]]
```
We'll come back to `[[` in more detail in @sec-subset-one.
### `purrr::map()` and `list_rbind()`
The code to collect those data frames in a list "by hand" is basically just as tedious to type as code that reads the files one-by-one.
Happily, we can use `purrr::map()` to make even better use of our `paths` vector.
`map()` is similar to `across()`, but instead of doing something to each column in a data frame, it does something to each element of a vector.
`map(x, f)` is shorthand for:
```{r}
#| eval: false
list(
  f(x[[1]]),
  f(x[[2]]),
  ...,
  f(x[[n]])
)
```
So we can use `map()` to get a list of 12 data frames:
```{r}
files <- map(paths, readxl::read_excel)
length(files)
files[[1]]
```
(This is another data structure that doesn't display particularly compactly with `str()`, so you might want to load it into RStudio and inspect it with `View()`.)
Now we can use `purrr::list_rbind()` to combine that list of data frames into a single data frame:
```{r}
list_rbind(files)
```
Or we could do both steps at once in a pipeline:
```{r}
#| results: false
paths |>
  map(readxl::read_excel) |>
  list_rbind()
```
What if we want to pass in extra arguments to `read_excel()`?
We use the same technique that we used with `across()`.
For example, it's often useful to peek at just the first row of each file with `n_max = 1`:
```{r}
paths |>
  map(\(path) readxl::read_excel(path, n_max = 1)) |>
  list_rbind()
```
This makes it clear that something is missing: there's no `year` column because that value is recorded in the path, not in the individual files.
We'll tackle that problem next.
### Data in the path {#sec-data-in-the-path}
Sometimes the name of the file is data itself.
In this example, the file name contains the year, which is not otherwise recorded in the individual files.
To get that column into the final data frame, we need to do two things:
First, we name the vector of paths.
The easiest way to do this is with the `set_names()` function, which can take a function.
Here we use `basename()` to extract just the file name from the full path:
```{r}
paths |> set_names(basename)
```
Those names are automatically carried along by all the map functions, so the list of data frames will have those same names:
```{r}
files <- paths |>
  set_names(basename) |>
  map(readxl::read_excel)
```
That makes this call to `map()` shorthand for:
```{r}
#| eval: false
files <- list(
  "1952.xlsx" = readxl::read_excel("data/gapminder/1952.xlsx"),
  "1957.xlsx" = readxl::read_excel("data/gapminder/1957.xlsx"),
  "1962.xlsx" = readxl::read_excel("data/gapminder/1962.xlsx"),
  ...,
  "2007.xlsx" = readxl::read_excel("data/gapminder/2007.xlsx")
)
```
You can also use `[[` to extract elements by name:
```{r}
files[["1962.xlsx"]]
```
Then we use the `names_to` argument to `list_rbind()` to tell it to save the names into a new column called `year`, and use `readr::parse_number()` to extract the number from the string.
```{r}
paths |>
  set_names(basename) |>
  map(readxl::read_excel) |>
  list_rbind(names_to = "year") |>
  mutate(year = parse_number(year))
```
In more complicated cases, there might be other variables stored in the directory name, or maybe the file name contains multiple bits of data.
In that case, use `set_names()` (without any arguments) to record the full path, and then use `tidyr::separate_wider_delim()` and friends to turn them into useful columns.
```{r}
paths |>
  set_names() |>
  map(readxl::read_excel) |>
  list_rbind(names_to = "year") |>
  separate_wider_delim(year, delim = "/", names = c(NA, "dir", "file")) |>
  separate_wider_delim(file, delim = ".", names = c("file", "ext"))
```
### Save your work
Now that you've done all this hard work to get to a nice tidy data frame, it's a great time to save your work:
```{r}
gapminder <- paths |>
  set_names(basename) |>
  map(readxl::read_excel) |>
  list_rbind(names_to = "year") |>
  mutate(year = parse_number(year))

write_csv(gapminder, "gapminder.csv")
```
Now when you come back to this problem in the future, you can read in a single csv file.
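Reading it back in is then a single call:

```{r}
#| eval: false
gapminder <- read_csv("gapminder.csv")
```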
For larger, richer datasets, using parquet might be a better choice than `.csv`, as discussed in @sec-parquet.
```{r}
#| include: false
unlink("gapminder.csv")
```
If you're working in a project, we suggest calling the file that does this sort of data prep work something like `0-cleanup.R`.
The `0` in the file name suggests that this should be run before anything else.
If your input data files change over time, you might consider learning a tool like [targets](https://docs.ropensci.org/targets/) to set up your data cleaning code to automatically re-run whenever one of the input files is modified.
### Many simple iterations
Here we've just loaded the data directly from disk, and were lucky enough to get a tidy dataset.
In most cases, you'll need to do some additional tidying, and you have two basic options: you can do one round of iteration with a complex function, or do multiple rounds of iteration with simple functions.
In our experience most folks reach first for one complex iteration, but you're often better off doing multiple simple iterations.
For example, imagine that you want to read in a bunch of files, filter out missing values, pivot, and then combine.
One way to approach the problem is to write a function that takes a file and does all those steps then call `map()` once:
```{r}
#| eval: false
process_file <- function(path) {
  df <- read_csv(path)

  df |>
    filter(!is.na(id)) |>
    mutate(id = tolower(id)) |>
    pivot_longer(jan:dec, names_to = "month")
}

paths |>
  map(process_file) |>
  list_rbind()
```
Alternatively, you could perform each step of `process_file()` to every file:
```{r}
#| eval: false
paths |>
  map(read_csv) |>
  map(\(df) df |> filter(!is.na(id))) |>
  map(\(df) df |> mutate(id = tolower(id))) |>
  map(\(df) df |> pivot_longer(jan:dec, names_to = "month")) |>
  list_rbind()
```
We recommend this approach because it stops you from fixating on getting the first file right before moving on to the rest.
By considering all of the data when doing tidying and cleaning, you're more likely to think holistically and end up with a higher quality result.
In this particular example, there's another optimization you could make, by binding all the data frames together earlier.
Then you can rely on regular dplyr behavior:
```{r}
#| eval: false
paths |>
  map(read_csv) |>
  list_rbind() |>
  filter(!is.na(id)) |>
  mutate(id = tolower(id)) |>
  pivot_longer(jan:dec, names_to = "month")
```
### Heterogeneous data
Unfortunately, sometimes it's not possible to go from `map()` straight to `list_rbind()` because the data frames are so heterogeneous that `list_rbind()` either fails or yields a data frame that's not very useful.
In that case, it's still useful to start by loading all of the files:
```{r}
#| eval: false
files <- paths |>
  map(readxl::read_excel)
```
Then a very useful strategy is to capture the structure of the data frames so that you can explore it using your data science skills.
One way to do so is with this handy `df_types` function[^iteration-6] that returns a tibble with one row for each column:
[^iteration-6]: We're not going to explain how it works, but if you look at the docs for the functions used, you should be able to puzzle it out.
```{r}
df_types <- function(df) {
  tibble(
    col_name = names(df),
    col_type = map_chr(df, vctrs::vec_ptype_full),
    n_miss = map_int(df, \(x) sum(is.na(x)))
  )
}

df_types(gapminder)
```
You can then apply this function to all of the files, and maybe do some pivoting to make it easier to see where the differences are.
For example, this makes it easy to verify that the gapminder spreadsheets that we've been working with are all quite homogeneous:
```{r}
files |>
  map(df_types) |>
  list_rbind(names_to = "file_name") |>
  select(-n_miss) |>
  pivot_wider(names_from = col_name, values_from = col_type)
```
If the files have heterogeneous formats, you might need to do more processing before you can successfully merge them.
Unfortunately, we're now going to leave you to figure that out on your own, but you might want to read about `map_if()` and `map_at()`.
`map_if()` allows you to selectively modify elements of a list based on their values; `map_at()` allows you to selectively modify elements based on their names.
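As a hedged sketch, imagine some files had been read with `year` as a character column (a hypothetical scenario, not something that happens with these gapminder files); you could coerce just those elements before binding:

```{r}
#| eval: false
# only touch the data frames where `year` came in as character
files <- files |>
  map_if(
    \(df) is.character(df$year),
    \(df) df |> mutate(year = parse_number(year))
  )

files |> list_rbind()
```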
### Handling failures
Sometimes the structure of your data might be sufficiently wild that you can't even read all the files with a single command.
And then you'll encounter one of the downsides of `map()`: it succeeds or fails as a whole.
`map()` will either successfully read all of the files in a directory or fail with an error, reading zero files.
This is annoying: why does one failure prevent you from accessing all the other successes?
Luckily, purrr comes with a helper to tackle this problem: `possibly()`.
`possibly()` is what's known as a function operator: it takes a function and returns a function with modified behavior.
In particular, `possibly()` changes a function from erroring to returning a value that you specify:
```{r}
files <- paths |>
  map(possibly(\(path) readxl::read_excel(path), NULL))

data <- files |> list_rbind()
```
This works particularly well here because `list_rbind()`, like many tidyverse functions, automatically ignores `NULL`s.
Now you have all the data that can be read easily, and it's time to tackle the hard part of figuring out why some files failed to load and what to do about it.
Start by getting the paths that failed:
```{r}
failed <- map_vec(files, is.null)
paths[failed]
```
Then call the import function again for each failure and figure out what went wrong.
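For example, re-running the reader on a single failing path without `possibly()` will surface the original error message:

```{r}
#| eval: false
readxl::read_excel(paths[failed][[1]])
```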
## Saving multiple outputs
In the last section, you learned about `map()`, which is useful for reading multiple files into a single object.
In this section, we'll explore the opposite sort of problem: how can you take one or more R objects and save them to one or more files?
We'll explore this challenge using three examples:
- Saving multiple data frames into one database.
- Saving multiple data frames into multiple `.csv` files.
- Saving multiple plots to multiple `.png` files.
### Writing to a database {#sec-save-database}
Sometimes when working with many files, it's not possible to fit all your data into memory at once, and you can't do `map(files, read_csv)`.
One approach to deal with this problem is to load your data into a database so you can access just the bits you need with dbplyr.
If you're lucky, the database package you're using will provide a handy function that takes a vector of paths and loads them all into the database.
This is the case with duckdb's `duckdb_read_csv()`:
```{r}
#| eval: false
con <- DBI::dbConnect(duckdb::duckdb())
duckdb::duckdb_read_csv(con, "gapminder", paths)
```
This would work well here, but we don't have csv files; instead we have excel spreadsheets.
So we're going to have to do it "by hand".
Learning to do it by hand will also help you when you have a bunch of csvs and the database that you're working with doesn't have one function that will load them all in.
We need to start by creating a table that we will fill in with data.
The easiest way to do this is by creating a template, a dummy data frame that contains all the columns we want, but only a sampling of the data.
For the gapminder data, we can make that template by reading a single file and adding the year to it:
```{r}
template <- readxl::read_excel(paths[[1]])
template$year <- 1952
template
```
Now we can connect to the database, and use `DBI::dbCreateTable()` to turn our template into a database table:
```{r}
con <- DBI::dbConnect(duckdb::duckdb())
DBI::dbCreateTable(con, "gapminder", template)
```
`dbCreateTable()` doesn't use the data in `template`, just the variable names and types.
So if we inspect the `gapminder` table now you'll see that it's empty but it has the variables we need with the types we expect:
```{r}
con |> tbl("gapminder")
```
Next, we need a function that takes a single file path, reads it into R, and adds the result to the `gapminder` table.
We can do that by combining `read_excel()` with `DBI::dbAppendTable()`:
```{r}
append_file <- function(path) {
  df <- readxl::read_excel(path)
  df$year <- parse_number(basename(path))

  DBI::dbAppendTable(con, "gapminder", df)
}
```
Now we need to call `append_file()` once for each element of `paths`.
That's certainly possible with `map()`:
```{r}
#| eval: false
paths |> map(append_file)
```
But we don't care about the output of `append_file()`, so instead of `map()` it's slightly nicer to use `walk()`.
`walk()` does exactly the same thing as `map()` but throws the output away:
```{r}
paths |> walk(append_file)
```
Now we can see if we have all the data in our table:
```{r}
con |>
  tbl("gapminder") |>
  count(year)
```
```{r}
#| include: false
DBI::dbDisconnect(con, shutdown = TRUE)
```
### Writing csv files
The same basic principle applies if we want to write multiple csv files, one for each group.
Let's imagine that we want to take the `ggplot2::diamonds` data and save one csv file for each `clarity`.
First we need to make those individual datasets.
There are many ways you could do that, but there's one way we particularly like: `group_nest()`.
```{r}
by_clarity <- diamonds |>
  group_nest(clarity)

by_clarity
```
This gives us a new tibble with eight rows and two columns.
`clarity` is our grouping variable and `data` is a list-column containing one tibble for each unique value of `clarity`:
```{r}
by_clarity$data[[1]]
```
While we're here, let's create a column that gives the name of the output file, using `mutate()` and `str_glue()`:
```{r}
by_clarity <- by_clarity |>
  mutate(path = str_glue("diamonds-{clarity}.csv"))

by_clarity
```
So if we were going to save these data frames by hand, we might write something like:
```{r}
#| eval: false
write_csv(by_clarity$data[[1]], by_clarity$path[[1]])
write_csv(by_clarity$data[[2]], by_clarity$path[[2]])
write_csv(by_clarity$data[[3]], by_clarity$path[[3]])
...
write_csv(by_clarity$data[[8]], by_clarity$path[[8]])
```
This is a little different to our previous uses of `map()` because there are two arguments that are changing, not just one.
That means we need a new function: `map2()`, which varies both the first and second arguments.
And because we again don't care about the output, we want `walk2()` rather than `map2()`.
That gives us:
```{r}
walk2(by_clarity$data, by_clarity$path, write_csv)
```
```{r}
#| include: false
unlink(by_clarity$path)
```
### Saving plots
We can take the same basic approach to create many plots.
Let's first make a function that draws the plot we want:
```{r}
#| fig-alt: |
#|   Histogram of carats of diamonds from the by_clarity dataset, ranging from
#|   0 to 5 carats. The distribution is unimodal and right skewed with a peak
#|   around 1 carat.

carat_histogram <- function(df) {
  ggplot(df, aes(x = carat)) + geom_histogram(binwidth = 0.1)
}

carat_histogram(by_clarity$data[[1]])
```
Now we can use `map()` to create a list of many plots[^iteration-7] and their eventual file paths:
[^iteration-7]: You can print `by_clarity$plot` to get a crude animation --- you'll get one plot for each element of the `plot` list-column.
```{r}
by_clarity <- by_clarity |>
  mutate(
    plot = map(data, carat_histogram),
    path = str_glue("clarity-{clarity}.png")
  )
```
Then use `walk2()` with `ggsave()` to save each plot:
```{r}
walk2(
  by_clarity$path,
  by_clarity$plot,
  \(path, plot) ggsave(path, plot, width = 6, height = 6)
)
```
This is shorthand for:
```{r}
#| eval: false
ggsave(by_clarity$path[[1]], by_clarity$plot[[1]], width = 6, height = 6)
ggsave(by_clarity$path[[2]], by_clarity$plot[[2]], width = 6, height = 6)
ggsave(by_clarity$path[[3]], by_clarity$plot[[3]], width = 6, height = 6)
...
ggsave(by_clarity$path[[8]], by_clarity$plot[[8]], width = 6, height = 6)
```
```{r}
#| include: false
unlink(by_clarity$path)
```