Summarizing Data in R Base Package

Introduction to Summarizing Data in R

Data summarization (computing various summary statistics) is a fundamental step in exploratory data analysis (EDA). Summarizing data in the R language helps analysts understand patterns, detect anomalies, and derive insights. While modern R packages like dplyr and data.table offer streamlined approaches, Base R remains a powerful and efficient tool for quick data summarization without additional dependencies (packages).

This guide explores essential Base R functions for summarizing data, from basic statistics to advanced grouped operations, ensuring you can efficiently analyze datasets right out of the box.

For learning purposes, we will use the mtcars data set.

Key Functions for Basic Summary Statistics

There are several Base R functions for computing summary statistics. The summary() function offers a quick overview of a dataset, displaying the minimum, maximum, mean, median, and quartiles for numerical variables; categorical variables, on the other hand, are summarized with frequency counts. For more specific metrics, functions like mean(), median(), sd(), and var() calculate central tendency and dispersion, while min() and max() identify the data range. These functions are particularly useful when combined with na.rm = TRUE to handle missing values. For example, applying summary(mtcars) gives an immediate snapshot of the dataset, while mean(mtcars$mpg, na.rm = TRUE) computes the average miles per gallon.
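A minimal illustration on the mtcars dataset:

# Quick overview of every variable in the dataset
summary(mtcars)

# Targeted statistics; na.rm = TRUE guards against missing values
mean(mtcars$mpg, na.rm = TRUE)    # central tendency
median(mtcars$mpg, na.rm = TRUE)
sd(mtcars$mpg, na.rm = TRUE)      # dispersion
min(mtcars$mpg)                   # data range
max(mtcars$mpg)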

Frequency Counts and Cross-Tabulations

When working with categorical data, the table() function is indispensable for generating frequency distributions. It counts occurrences of unique values, making it ideal for summarizing factors or discrete variables. For more complex relationships, xtabs() or ftable() can create cross-tabulations, revealing interactions between multiple categorical variables. For instance, table(mtcars$cyl) shows how many cars have 4, 6, or 8 cylinders, while xtabs(~ gear + cyl, data = mtcars) presents a contingency table of gears by cylinders.

# Frequency of cylinders
table(mtcars$cyl)

# Contingency table of gears and cylinders
xtabs(~ gear + cyl, data = mtcars)
[Figure: frequency table of cylinders and the gear-by-cylinder contingency table]

Group-Wise Summarization Using aggregate() and by()

To compute summary statistics by groups, Base R offers aggregate() and by(). The aggregate() function splits data into subsets and applies a summary function, such as mean or sum, to each group. For example, aggregate(mpg ~ cyl, data = mtcars, FUN = mean) calculates the average MPG per cylinder group. Meanwhile, by() provides more flexibility, allowing custom functions to be applied across groups. While tapply() is another alternative for vector-based grouping, aggregate() is often preferred for its formula interface and cleaner output.

# Average mpg for each cylinder group
aggregate(mpg ~ cyl, data = mtcars, FUN = mean)

## Output
  cyl      mpg
1   4 26.66364
2   6 19.74286
3   8 15.10000
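For comparison, here is a short sketch of the alternatives mentioned above; both produce the same group-wise means as the aggregate() call.

# tapply(): apply a function to a vector split by a grouping factor
tapply(mtcars$mpg, mtcars$cyl, mean)

# by(): apply a custom function to data frame subsets
by(mtcars, mtcars$cyl, function(df) mean(df$mpg))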

Advanced Techniques: Quantiles and Custom Summaries

Beyond basic summaries, Base R supports advanced techniques like percentile analysis using quantile(), which helps assess data distribution by returning specified percentiles (e.g., quantile(mtcars$mpg, probs = c(0.25, 0.5, 0.75))). For customized summaries, users can define their own functions and apply them using sapply() or lapply(); a short sketch appears after the plot below. This approach is useful when tailored metrics are needed, such as trimmed means or confidence intervals. Additionally, combining these functions with plotting tools like boxplot() or hist() can further enhance data interpretation.

# percentiles
quantile(mtcars$mpg, probs = c(0.25, 0.5, 0.75))

## Output
   25%    50%    75% 
15.425 19.200 22.800 

# A boxplot of mpg displays the same quartiles graphically
boxplot(mtcars$mpg)
[Figure: boxplot of mtcars$mpg]
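As a sketch of a custom summary, one can define a helper function and apply it to several columns at once with sapply(); the name my_summary below is illustrative, not a base R function.

# Custom summary: mean, 10% trimmed mean, and standard deviation
my_summary <- function(x) {
  c(mean = mean(x, na.rm = TRUE),
    trimmed = mean(x, trim = 0.1, na.rm = TRUE),
    sd = sd(x, na.rm = TRUE))
}

# Apply it to a few numeric columns of mtcars
sapply(mtcars[, c("mpg", "hp", "wt")], my_summary)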

When to Use Base R vs. Tidyverse for Summarization

While Base R is efficient and lightweight, the Tidyverse (particularly dplyr) offers a more readable syntax for complex operations. Functions like summarize() and group_by() simplify chained operations, making them preferable for large-scale data wrangling. However, Base R remains advantageous for quick analyses, legacy code, or environments where installing additional packages is restricted. Understanding both approaches ensures flexibility in different analytical scenarios.
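For reference (assuming the dplyr package is installed), the aggregate() example above reads roughly as follows in dplyr:

library(dplyr)
mtcars %>%
  group_by(cyl) %>%
  summarize(mean_mpg = mean(mpg, na.rm = TRUE))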

Best Practices for Summarizing Data in R

To maximize efficiency, always handle missing values explicitly using na.rm = TRUE in statistical functions. For large datasets, consider optimizing performance by pre-filtering data or using vectorized operations. Visualizing summaries with basic plots (e.g., hist(), boxplot()) can provide immediate insights. Finally, documenting summary steps ensures reproducibility, whether in scripts, R Markdown, or Shiny applications.
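A tiny illustration of why explicit missing-value handling matters:

x <- c(21, 23, NA, 19)
mean(x)               # returns NA because the missing value propagates
mean(x, na.rm = TRUE) # returns 21 after dropping the NA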

In summary, Base R provides a robust toolkit for data summarization, from simple descriptive statistics to advanced grouped analyses. By mastering functions like summary(), table(), aggregate(), and quantile(), analysts can efficiently explore datasets without relying on external packages. While modern alternatives like dplyr enhance readability for complex tasks, Base R’s simplicity and universality make it an essential skill for every R programmer. Practicing these techniques on real-world datasets will solidify your understanding and improve your data analysis workflow.


Comparing Two Sample Means in R

One can easily compare two sample means in R, as all of the classical tests are available in R's stats package. There are different comparison tests, such as (i) the one-sample mean test, (ii) the two independent-samples means test, and (iii) the dependent (paired) samples test. When the population standard deviation is known, or the sample size (number of observations in the sample) is large enough ($n \ge 30$), tests based on the normal distribution are performed.

Data for Two Sample Means

Consider the following data set on the “latent heat of fusion of ice (cal/gm)” from Rice (1995).

Method A: 79.98, 80.04, 80.02, 80.04, 80.03, 80.03, 80.04, 79.97, 80.05, 80.03, 80.02, 80.00, 80.02
Method B: 80.02, 79.94, 79.98, 79.97, 79.97, 80.03, 79.95, 79.97

Let us draw boxplots to compare these two methods. The comparison will also help in checking the assumptions of the independent two-sample test.

Note that one can read the data using the scan() function, create vectors, or even read the above data from data files such as *.txt and *.csv. In this tutorial, we assume vectors $A$ and $B$ for method A and method B.

A = c(79.98, 80.04, 80.02, 80.04, 80.03, 80.03, 80.04, 79.97, 80.05, 80.03, 80.02, 80.00, 80.02)
B = c(80.02, 79.94, 79.98, 79.97, 79.97, 80.03, 79.95, 79.97)
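As an aside, the one-sample mean test mentioned earlier can be run directly on one of these vectors; the reference value mu = 80 below is chosen purely for illustration.

# One-sample t-test of H0: the mean of method A equals 80
t.test(A, mu = 80)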

Draw a Boxplot of Samples

Let us draw a boxplot for each method. The plots indicate that the first group tends to give higher results than the second one.

boxplot(A, B)
[Figure: side-by-side boxplots of Method A and Method B]

Comparing Two Sample Means in R using t.test() Function

The unpaired t-test (independent two-sample test) for the equality of the means can be done using the function t.test() in R Language.

t.test(A, B)
## Output
        Welch Two Sample t-test

data:  A and B
t = 3.2499, df = 12.027, p-value = 0.006939
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.01385526 0.07018320
sample estimates:
mean of x mean of y 
 80.02077  79.97875 

From the results above, one can see that the p-value = 0.006939 is less than 0.05 (the level of significance), which means that, on average, the two methods differ statistically from each other with respect to the latent heat of fusion of ice.

Testing the Equality of Variances of Samples

Note that, by default, R does not assume equality of variances in the two samples; the t.test() call above therefore performs the Welch test. However, the F-test can be used to test the equality of the variances, provided that the two samples come from normal populations.

var.test(A, B)
[Output of var.test(A, B): F = 0.5837, num df = 12, denom df = 7, p-value = 0.3938]

From the above results, there is no evidence that the variances of the two samples differ significantly, as the p-value is greater than the 0.05 level of significance. This means one can use the classical t-test, which assumes equality of the variances.

t.test(A, B, var.equal = TRUE)

## Output
        Two Sample t-test

data:  A and B
t = 3.4722, df = 19, p-value = 0.002551
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.01669058 0.06734275
sample estimates:
mean of x mean of y 
 80.02077  79.97875 


Statistical Power Analysis in R: A Comprehensive Guide

Introduction to Power Analysis

This post is about statistical power analysis in R. First, let us define the meaning of power in statistics. Power is the probability ($1-\beta$) of detecting an effect, given that the effect is really there; that is, power is the probability of correctly rejecting the null hypothesis when it is false.

Suppose a simple study compares drug A with a placebo, and let the drug be truly effective. The power is the probability of finding a difference between the two groups (the drug-A and placebo groups). A power of $1-\beta = 0.8$ means that if this study were conducted many times, a statistically significant difference between the drug-A and placebo groups would be found 80% of the time and missed 20% of the time. The probability of a Type-II error is therefore $\beta = 0.2$.
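Base R’s stats package provides power.t.test() for exactly this kind of calculation. As a sketch, the effect size of 0.5 standard deviations assumed below is illustrative:

# Per-group sample size for 80% power at alpha = 0.05
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)

# Conversely, the power achieved with n = 20 observations per group
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05)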

One-Sample Power

The following plot is for a one-sample, one-tailed (greater-than) t-test. In the graph below, let the null hypothesis $H_0: \mu = \mu_0$ be true, so that the test statistic $t$ follows the null distribution indicated by the hatched area. Under the specific alternative hypothesis $H_1: \mu = \mu_1$, the test statistic $t$ follows the distribution shown by the solid curve.

Here, $\alpha$ is the probability of making a Type-I error (that is, rejecting $H_0$ when it is true), and “crit. val.” marks the location of the critical value $t_{crit}$ associated with $H_0$ on the scale of the data. The rejection region is the area under $H_0$ that lies at least as far from $\mu_0$ as “crit. val.”.

The test’s power ($1-\beta$) is the green area, that is, the area under $H_1$ that falls in the rejection region. A Type-II error is made when $H_1$ is true but we fail to reject $H_0$ (the red region).

[Figure: Type-II error (β) and power (1−β) regions under the null and alternative distributions]

# One-Sample Power

x <- seq(-4, 4, length = 1000)
hx <- dnorm(x, mean = 0, sd = 1)

plot(x, hx, type = "n", xlim = c(-4, 8), ylim = c(0, 0.5),
     main = expression(paste("Type-II Error (", beta, ") and Power (", 1 - beta, ")")),
     axes = FALSE)

# one-tailed shift: location of the alternative mean mu1
shift = qnorm(1 - 0.05, mean = 0, sd = 1) * 1.7
xfit2 = x + shift
yfit2 = dnorm(xfit2, mean = shift, sd = 1)

axis(1, at = c(-qnorm(0.05), 0, shift), labels = expression("crit. val.", mu[0], mu[1]))
axis(1, at = c(-4, 4 + shift), labels = expression(-infinity, infinity), 
     lwd = 1, lwd.tick = FALSE)

# The alternative hypothesis area 
# the red - underpowered area

lb <- min(xfit2)               # lower bound
ub <- round(qnorm(0.95), 2)    # upper bound
col1 = "#CC2222"

i <- xfit2 >= lb & xfit2 <= ub
polygon(c(lb, xfit2[i], ub), c(0, yfit2[i],0), col = col1)

# The green area where the power is
col2 = "#22CC22"
i <- xfit2 >= ub
polygon(c(ub, xfit2[i], max(xfit2)), c(0, yfit2[i], 0), col = col2)

# Outline the alternative hypothesis
lines(xfit2, yfit2, lwd = 2)

# Print null hypothesis area
col_null = "#AAAAAA"
polygon(c(min(x), x, max(x)), c(0, hx, 0), col = col_null,
        lwd = 2, density = c(10, 40), angle = -45, border = 0)

lines(x, hx, lwd = 2, lty = "dashed", col=col_null)

axis(1, at = (c(ub, max(xfit2))), labels = c("", expression(infinity)), col = col2,
     lwd = 1, lwd.tick = FALSE)

#Legend
legend("topright", inset = 0.015, title = "Color", 
       c("Null Hypothesis", "Type-II error", "Power"), fill = c(col_null, col1, col2), 
       angle = -45, density = c(20, 1000, 1000), horiz = FALSE)

abline(v=ub, lwd=2, col="#000088", lty = "dashed")
arrows(ub, 0.45, ub+1, 0.45, lwd=3, col="#008800")
arrows(ub, 0.45, ub-1, 0.45, lwd=3, col="#880000")
