Changes from 2 commits
13 changes: 10 additions & 3 deletions R/test_likelihoodratio.R
@@ -189,12 +189,14 @@ test_likelihoodratio.ListNestedRegressions <- function(
   } else {
     # lmtest::lrtest()
     lls <- sapply(objects, insight::get_loglikelihood, REML = REML, check_response = TRUE)
-    chi2 <- abs(c(NA, -2 * diff(lls)))
+    criterion <- -2 * lls
+    chi2 <- abs(c(NA, diff(criterion)))
     p <- stats::pchisq(chi2, abs(dfs_diff), lower.tail = FALSE)
 
     out <- data.frame(
       df = dfs,
       df_diff = dfs_diff,
+      Criterion = criterion,
       Chi2 = chi2,
       p = p,
       stringsAsFactors = FALSE
@@ -228,10 +230,13 @@ test_likelihoodratio.ListNestedRegressions <- function(
 test_likelihoodratio_ListLavaan <- function(..., objects = NULL) {
   insight::check_if_installed("lavaan")
 
-  # Create data frame with info about model name and class
+  # Create data frame with info about model name, class, and criterion
   names_types <- data.frame(
     Model = names(objects),
     Type = sapply(objects, function(x) class(x)[1]),
+    Criterion = sapply(objects, function(x) {
+      -2 * as.numeric(lavaan::fitMeasures(x, "logl"))
+    }),
     stringsAsFactors = FALSE
   )

@@ -250,11 +255,13 @@ test_likelihoodratio_ListLavaan <- function(..., objects = NULL) {
   # Bind all data
   out <- merge(names_types, out[c("Model", "df", "df_diff", "Chi2", "p")], by = "Model")
 
+  # Reorder columns so Criterion is next to Chi2
+  out <- out[c("Model", "Type", "df", "df_diff", "Criterion", "Chi2", "p")]
+
   class(out) <- c("test_likelihoodratio", "see_test_likelihoodratio", "data.frame")
   out
 }
 
 
 # helper ----------------------
 
 .is_lmer_reml <- function(x) {
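The arithmetic in the refactored hunk above can be replicated by hand: `Criterion` is -2 times each model's log-likelihood, and the difference between consecutive criterion values is the `Chi2` statistic. A minimal sketch with two nested `lm()` fits (the model names and the specific formulas here are illustrative, not taken from the PR):

```r
# Two nested OLS models on the built-in mtcars data;
# the smaller model drops one predictor (gear)
m_small <- lm(mpg ~ wt + cyl, data = mtcars)
m_big   <- lm(mpg ~ wt + cyl + gear, data = mtcars)

# Criterion = -2 * log-likelihood (ML), one value per model
criterion <- -2 * c(as.numeric(logLik(m_small)), as.numeric(logLik(m_big)))

# Chi2 is the drop in the criterion; the first entry is NA by convention
chi2 <- abs(c(NA, diff(criterion)))

# Degrees-of-freedom difference (here 1, for the dropped predictor)
df_diff <- attr(logLik(m_big), "df") - attr(logLik(m_small), "df")

# p-value of the likelihood-ratio test
p <- stats::pchisq(chi2[2], df_diff, lower.tail = FALSE)
```

Note this uses plain `logLik()` for brevity; the package code calls `insight::get_loglikelihood()` instead, which additionally corrects the likelihood for transformed response variables.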
15 changes: 7 additions & 8 deletions R/test_performance.R
@@ -113,19 +113,18 @@
 #' the Wald test for small sample sizes (under or about 30) or if the
 #' parameters are large.
 #'
+#' The test statistic is calculated by comparing the -2 * log-likelihood
+#' (-2LL) values for each model. In the output table, the `Criterion` column
+#' represents this -2LL value. The difference in the criterion values
+#' between the nested models corresponds to the `Chi2` statistic. This
+#' Chi-square value is then used to compute the p-value based on the
+#' difference in degrees of freedom (`df_diff`).
+#'
 #' Note: for regression models, this is similar to
 #' `anova(..., test="LRT")` (on models) or `lmtest::lrtest(...)`, depending
 #' on the `estimator` argument. For **lavaan** models (SEM, CFA), the function
 #' calls `lavaan::lavTestLRT()`.
 #'
-#' For models with transformed response variables (like `log(x)` or `sqrt(x)`),
-#' `logLik()` returns a wrong log-likelihood. However, `test_likelihoodratio()`
-#' calls `insight::get_loglikelihood()` with `check_response=TRUE`, which
-#' returns a corrected log-likelihood value for models with transformed
-#' response variables. Furthermore, since the LRT only accepts nested
-#' models (i.e. models that differ in their fixed effects), the computed
-#' log-likelihood is always based on the ML estimator, not on the REML fits.
-#'
 #' - **Vuong's Test** - `test_vuong()`: Vuong's (1989) test can
 #'   be used both for nested and non-nested models, and actually consists of two
 #'   tests.
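For lavaan models, the same -2LL criterion is read off the fitted object via `fitMeasures(x, "logl")`, as the `names_types` computation does. A short sketch, assuming **lavaan** is installed (the CFA syntax is the standard HolzingerSwineford example, abbreviated here; it is not the exact model from this PR):

```r
library(lavaan)

# Simple one-factor CFA on the HolzingerSwineford1939 data shipped with lavaan
model <- " visual =~ x1 + x2 + x3 "
fit <- cfa(model, data = HolzingerSwineford1939)

# fitMeasures() returns a named vector; "logl" is the model log-likelihood.
# Criterion = -2 * log-likelihood, matching the diff above.
criterion <- -2 * as.numeric(fitMeasures(fit, "logl"))
```

Comparing two nested lavaan fits is then delegated to `lavaan::lavTestLRT()`, with `Criterion` merged in alongside the resulting `Chi2` column.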
15 changes: 7 additions & 8 deletions man/test_performance.Rd

Some generated files are not rendered by default.

10 changes: 5 additions & 5 deletions tests/testthat/_snaps/model_performance.psych.md
@@ -102,7 +102,7 @@
 
 Chi2(1) | p (Chi2) | RMSR
 --------------------------
-  4.119 |    0.042 | 0.035
+  4.119 |    0.042 | 0.025
 
 ---
 
@@ -113,8 +113,8 @@
 
 Model | Chi2 | df | p (Chi2) | RMSR | RMSR_corrected | TLI | RMSEA | RMSEA 90% CI | BIC | R2 | Correlation
 -----------------------------------------------------------------------------------------------------------------------------------
-3-factor solution | 31.796 | 25 | 0.164 | 0.015 | 0.023 | | 0.087 | [0.000, 0.181] | -54.8 | |
-g-model | 264.781 | 44 | < .001 | 0.393 | 0.440 | 0.195 | 0.395 | [0.356, 0.450] | 112.3 | 0.761 | 0.873
+3-factor solution | 31.796 | 25 | 0.164 | 0.011 | 0.016 | | 0.087 | [0.000, 0.181] | -54.8 | |
+g-model | 264.781 | 44 | < .001 | 0.278 | 0.311 | 0.195 | 0.395 | [0.356, 0.450] | 112.3 | 0.886 | 0.941
 
 Compare the model fit of the 3-factor solution with the g-only model.
 If the g-model has smaller RMSR and RMSEA then your items are more
@@ -131,8 +131,8 @@
 
 Model | Chi2 | df | p (Chi2) | RMSR | RMSR_corrected | TLI | RMSEA | RMSEA 90% CI | BIC | R2 | Correlation
 -----------------------------------------------------------------------------------------------------------------------------------
-3-factor solution | 31.796 | 25 | 0.164 | 0.015 | 0.023 | | 0.087 | [0.000, 0.181] | -54.8 | |
-g-model | 264.781 | 44 | < .001 | 0.393 | 0.440 | 0.195 | 0.395 | [0.356, 0.450] | 112.3 | 0.761 | 0.873
+3-factor solution | 31.796 | 25 | 0.164 | 0.011 | 0.016 | | 0.087 | [0.000, 0.181] | -54.8 | |
+g-model | 264.781 | 44 | < .001 | 0.278 | 0.311 | 0.195 | 0.395 | [0.356, 0.450] | 112.3 | 0.886 | 0.941
 
 Compare the model fit of the 3-factor solution with the g-only model.
 If the g-model has smaller RMSR and RMSEA then your items are more
12 changes: 0 additions & 12 deletions tests/testthat/_snaps/nestedLogit.md

This file was deleted.

70 changes: 70 additions & 0 deletions tests/testthat/test-test_likelihoodratio.R
@@ -125,3 +125,73 @@ test_that("test_likelihoodratio - print p-digits", {
expect_snapshot(test_likelihoodratio(m1, m2))
expect_snapshot(print_md(test_likelihoodratio(m1, m2), p_digits = 3))
})

test_that("test_likelihoodratio - Criterion values (lm)", {
m1 <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars)
m2 <- lm(mpg ~ wt + cyl + gear, data = mtcars)

# Check ML estimator since OLS defaults to .test_wald()
rez <- test_likelihoodratio(m2, m1, estimator = "ML")

# Check if column exists
expect_true("Criterion" %in% colnames(rez))

# Check exact values against manual -2LL calculation
ll1 <- as.numeric(stats::logLik(m1))
ll2 <- as.numeric(stats::logLik(m2))
expect_equal(rez$Criterion, -2 * c(ll2, ll1), tolerance = 1e-3)

# Check that the difference in Criterion matches Chi2 exactly
expect_equal(rez$Chi2[2], rez$Criterion[1] - rez$Criterion[2], tolerance = 1e-3)
})

test_that("test_likelihoodratio - Criterion values (lme4)", {
skip_if_not_installed("lme4")

m1 <- suppressMessages(lme4::lmer(
Sepal.Length ~ Petal.Width + (1 | Species),
data = iris,
REML = FALSE
))
m2 <- suppressMessages(lme4::lmer(
Sepal.Length ~ Petal.Width + Petal.Length + (1 | Species),
data = iris,
REML = FALSE
))

rez <- test_likelihoodratio(m1, m2, estimator = "ML")

expect_true("Criterion" %in% colnames(rez))

# Check values
ll1 <- as.numeric(stats::logLik(m1))
ll2 <- as.numeric(stats::logLik(m2))
expect_equal(rez$Criterion, -2 * c(ll1, ll2), tolerance = 1e-3)

# Check math
expect_equal(rez$Chi2[2], rez$Criterion[1] - rez$Criterion[2], tolerance = 1e-3)
})

test_that("test_likelihoodratio - Criterion values (lavaan)", {
skip_if_not_installed("lavaan")

structure1 <- " visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
visual ~~ textual + speed "
m1 <- suppressWarnings(lavaan::cfa(structure1, data = lavaan::HolzingerSwineford1939))

structure2 <- " visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
visual ~~ 0 * textual + speed "
m2 <- suppressWarnings(lavaan::cfa(structure2, data = lavaan::HolzingerSwineford1939))

rez <- test_likelihoodratio(m1, m2)

expect_true("Criterion" %in% colnames(rez))

ll1 <- as.numeric(lavaan::fitMeasures(m1, "logl"))
ll2 <- as.numeric(lavaan::fitMeasures(m2, "logl"))
expect_equal(rez$Criterion, -2 * c(ll1, ll2), tolerance = 1e-3)
})