From cd4a58c6b0ba0c871266803106d08fb8a624e0e8 Mon Sep 17 00:00:00 2001
From: Leonardo Shibata <9448016+leonardoshibata@users.noreply.github.com>
Date: Tue, 3 Dec 2019 23:52:48 -0300
Subject: [PATCH] Fixed typo

From "pereform" to perform
---
 analysis.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/analysis.Rmd b/analysis.Rmd
index 1e43d17..6a22773 100644
--- a/analysis.Rmd
+++ b/analysis.Rmd
@@ -12,7 +12,7 @@ knitr::opts_chunk$set(eval = FALSE)
 
 Previous chapters focused on introducing Spark with R, they got you up to speed and encouraged you to try basic data analysis workflows. However, they have not properly introduced what such data analysis means, especially while running in Spark. They presented the tools you need throughout this book, to help you spend more time learning and less time troubleshooting.
 
-This chapter will introduce tools and concepts to perform data analysis in Spark from R; which, spoiler alert, are the same tools you use when using plain R! This is not an accidental coincidence; but rather, we want data scientist to live in a world where technology is hidden from them, where you can use the R packages you know and love, and where they simply happen to just work in Spark! Now, we are not quite there yet, but we are also not that far. In this chapter you will learn widely used R packages and practices to pereform data analysis like: `dplyr`, `ggplot2`, formulas, `rmarkdown` and so on -- which also happen to work in Spark! The next chapter, Modeling, will focus on creating statistical models to predict, estimate and describe datasets; but first, let's get started with analysis!
+This chapter will introduce tools and concepts to perform data analysis in Spark from R; which, spoiler alert, are the same tools you use when using plain R! This is not an accidental coincidence; but rather, we want data scientist to live in a world where technology is hidden from them, where you can use the R packages you know and love, and where they simply happen to just work in Spark! Now, we are not quite there yet, but we are also not that far. In this chapter you will learn widely used R packages and practices to perform data analysis like: `dplyr`, `ggplot2`, formulas, `rmarkdown` and so on -- which also happen to work in Spark! The next chapter, Modeling, will focus on creating statistical models to predict, estimate and describe datasets; but first, let's get started with analysis!