
Implements the Core HATSA algorithm to align functional connectivity patterns across subjects. This version uses sparse matrices for graph representations and efficient eigendecomposition via `PRIMME`, and incorporates robustness improvements identified in detailed code audits.
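
The per-subject spectral step can be pictured roughly as follows: build a sparse connectivity graph from the parcel time series, form a graph Laplacian, and extract the leading `k` eigenvectors with `PRIMME`. The snippet below is a minimal, self-contained sketch of that idea only; it is not the function's actual internals, and the sparsification rule shown is a placeholder.

# Illustrative sketch only -- not the internals of run_hatsa_core().
# Assumes the Matrix and PRIMME packages are installed.
library(Matrix)
library(PRIMME)

set.seed(1)
X <- matrix(rnorm(100 * 25), nrow = 100, ncol = 25)  # T x V_p time series
C <- stats::cor(X)                                    # parcel x parcel correlations
diag(C) <- 0

# Placeholder sparsification: keep roughly the 5 strongest |r| per row, then symmetrize
keep <- t(apply(abs(C), 1, function(r) r >= sort(r, decreasing = TRUE)[5]))
W <- Matrix(C * (keep | t(keep)), sparse = TRUE)

# Symmetric normalized Laplacian and its smallest eigenpairs via PRIMME
d <- rowSums(abs(W))
D_inv_sqrt <- Diagonal(x = 1 / sqrt(pmax(d, .Machine$double.eps)))
L <- Diagonal(ncol(W)) - D_inv_sqrt %*% W %*% D_inv_sqrt

k <- 5
eig <- PRIMME::eigs_sym(L, NEig = k + 1, which = "SA")  # smallest algebraic eigenpairs
U_sketch <- eig$vectors[, 2:(k + 1)]                    # drop the trivial eigenvector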

Usage

run_hatsa_core(
  subject_data_list,
  anchor_indices,
  spectral_rank_k,
  k_conn_pos,
  k_conn_neg,
  n_refine,
  use_dtw = FALSE,
  n_cores = 1L
)

Arguments

subject_data_list

A list of dense numeric matrices. Each matrix `X_i` corresponds to a subject, with dimensions `T_i` (time points) x `V_p` (parcels).

anchor_indices

A numeric vector of 1-based indices for the selected anchor parcels. Duplicate indices will be removed.

spectral_rank_k

An integer specifying the dimensionality (`k`) of the low-dimensional spectral sketch. Must be non-negative. `k=0` yields empty sketches.

k_conn_pos

An integer. The number of strongest positive connections to retain per parcel during graph sparsification (see the sparsification sketch following this argument list).

k_conn_neg

An integer. The number of strongest negative connections to retain per parcel during graph sparsification.

n_refine

An integer. The number of Generalized Procrustes Analysis (GPA) refinement iterations.

use_dtw

Logical, defaults to `FALSE`. If `TRUE` (not yet fully implemented), Dynamic Time Warping would be used as the similarity measure during graph construction.

n_cores

Integer number of CPU cores to use. If `> 1` and the platform supports forking (i.e., non-Windows), per-subject computations are parallelized via `parallel::mclapply()`. Defaults to `1` (sequential).
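
As a rough illustration of what `k_conn_pos` and `k_conn_neg` control, the sketch below keeps, for each parcel, the `k_conn_pos` strongest positive and `k_conn_neg` strongest negative correlations, zeroes out the rest, and then symmetrizes. This is a plausible reading of the arguments, not the package's exact sparsification rule.

# Hypothetical per-parcel sparsification; the package's actual rule may differ.
sparsify_row <- function(r, k_pos, k_neg) {
  out <- numeric(length(r))
  pos <- which(r > 0)
  neg <- which(r < 0)
  keep_pos <- pos[order(r[pos], decreasing = TRUE)][seq_len(min(k_pos, length(pos)))]
  keep_neg <- neg[order(r[neg])][seq_len(min(k_neg, length(neg)))]
  out[c(keep_pos, keep_neg)] <- r[c(keep_pos, keep_neg)]
  out
}

set.seed(2)
C <- stats::cor(matrix(rnorm(100 * 25), nrow = 100, ncol = 25))
diag(C) <- 0
W_sparse <- t(apply(C, 1, sparsify_row, k_pos = 4, k_neg = 2))
W_sparse <- (W_sparse + t(W_sparse)) / 2  # one common symmetrization choice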

Value

A `hatsa_projector` object. This S3 object inherits from `multiblock_biprojector` (from the `multivarious` package) and contains the results of the HATSA analysis. Key components include:

  • v: The mean aligned sketch (group-level template, V_p x k matrix).

  • s: Stacked aligned sketches for all subjects ((N*V_p) x k matrix).

  • sdev: Component standard deviations (vector of length k, currently defaults to 1s).

  • preproc: Preprocessing object (currently `prep(pass())`).

  • block_indices: List defining subject blocks in the `s` matrix.

  • R_final_list: List of subject-specific rotation matrices (k x k); see the alignment sketch at the end of this section.

  • U_original_list: List of subject-specific original (unaligned) sketch matrices (V_p x k).

  • Lambda_original_list: List of subject-specific original eigenvalues (vector of length k) from the parcel-level decomposition. Essential for Nyström voxel projection.

  • T_anchor_final: The final group anchor template used for alignment (V_a x k matrix, where V_a is number of anchors).

  • parameters: List of input parameters used for the HATSA run (e.g., `k`, `V_p`, `N_subjects`, anchor details, sparsification parameters).

  • method: Character string, "hatsa_core".

  • U_aligned_list: List of per-subject aligned sketch matrices (V_p x k). These are the inputs used to compute `v` and `s`; they are also stored directly for use by `project_block` and for inspection, and they match `object$s` split into per-subject blocks.

This object can be used with S3 methods like `print`, `summary`, `coef`, `scores`, `predict` (for new parcel data), and `project_voxels` (for new voxel data).
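
To make the relationship between `U_original_list`, `R_final_list`, and `T_anchor_final` concrete, the following sketch aligns one simulated subject sketch to a group anchor template with an orthogonal Procrustes rotation, which is the core operation of each GPA refinement iteration. All objects here are stand-ins created for illustration, not components pulled from a fitted `hatsa_projector`.

# Conceptual sketch with simulated stand-ins; not a call into package internals.
set.seed(3)
U_i      <- matrix(rnorm(25 * 5), nrow = 25, ncol = 5)  # subject sketch, V_p x k
T_anchor <- matrix(rnorm(7 * 5), nrow = 7, ncol = 5)    # anchor template, V_a x k
anchors  <- 1:7                                          # anchor parcel indices

# Orthogonal Procrustes: rotation R minimizing ||U_i[anchors, ] %*% R - T_anchor||_F
sv  <- svd(crossprod(U_i[anchors, , drop = FALSE], T_anchor))
R_i <- sv$u %*% t(sv$v)        # k x k orthogonal matrix (cf. R_final_list entries)

U_i_aligned <- U_i %*% R_i     # corresponds to one subject block of object$s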

Author

Expert R Developer (GPT)

Examples

# Generate example data
set.seed(123)
N_subjects <- 3 # Small N for quick example
V_p_parcels <- 25
T_times_avg <- 50

# Create a list of matrices (time x parcels)
subject_data <- lapply(1:N_subjects, function(i) {
  T_i <- T_times_avg + sample(-5:5, 1)
  matrix(stats::rnorm(T_i * V_p_parcels), nrow = T_i, ncol = V_p_parcels)
})

# Assign parcel names (optional but good practice)
parcel_names_vec <- paste0("Parcel_", 1:V_p_parcels)
subject_data <- lapply(subject_data, function(mat) {
  colnames(mat) <- parcel_names_vec
  mat
})

# Define HATSA parameters
# Ensure number of anchors >= k for stable Procrustes
anchor_idx <- sample(1:V_p_parcels, 7)
k_spectral <- 5 # k=5, num_anchors=7 is valid
k_pos <- 4
k_neg <- 2
n_iter_refine <- 2

# Run Core HATSA (requires Matrix and PRIMME packages)
hatsa_results <- NULL
if (requireNamespace("Matrix", quietly = TRUE) &&
    requireNamespace("PRIMME", quietly = TRUE)) {
  hatsa_results <- tryCatch(
    run_hatsa_core(
      subject_data_list = subject_data,
      anchor_indices = anchor_idx,
      spectral_rank_k = k_spectral,
      k_conn_pos = k_pos,
      k_conn_neg = k_neg,
      n_refine = n_iter_refine
    ),
    error = function(e) {
      message("Example run failed: ", e$message)
      NULL
    }
  )

  # Inspect the results object
  if (!is.null(hatsa_results)) {
    print(hatsa_results)
    summary_info <- summary(hatsa_results)
    print(summary_info)

    # Get coefficients (mean aligned sketch)
    group_template <- coef(hatsa_results)
    # print(dim(group_template)) # Should be V_p x k

    # Get stacked scores (aligned sketches for all subjects)
    all_scores <- multivarious::scores(hatsa_results)  # scores() generic from multivarious
    # print(dim(all_scores)) # Should be (N*V_p) x k

    # Get block indices to map scores to subjects
    indices <- multivarious::block_indices(hatsa_results)
    # subject1_scores <- all_scores[indices[[1]], ]
    # print(dim(subject1_scores)) # Should be V_p x k
  }
} else {
  if (interactive()) message("Matrix and PRIMME packages needed for this example.")
}
#> Proceeding to GPA with 3 subjects having valid anchor matrices (7 parcel + 0 task rows).
#> HATSA Projector Object
#> ----------------------
#> Method:  hatsa_core 
#> Number of Subjects (N):  3 
#> Number of Parcels (V_p):  25 
#> Number of Components (k):  5 
#> 
#> Key components:
#>   Mean Aligned Sketch (v): dimensions [ 25 x 5 ]
#>   Stacked Aligned Sketches (s): dimensions [ 75 x 5 ]
#>   Component Standard Deviations (sdev): length [ 5 ]
#>                          Length Class         Mode     
#> v                        125    -none-        numeric  
#> s                        375    -none-        numeric  
#> sdev                       5    -none-        numeric  
#> preproc                    4    pre_processor list     
#> block_indices              3    -none-        list     
#> R_final_list               3    -none-        list     
#> U_original_list            3    -none-        list     
#> Lambda_original_list       3    -none-        list     
#> T_anchor_final            35    -none-        numeric  
#> parameters                28    -none-        list     
#> method                     1    -none-        character
#> qc_metrics                 3    -none-        list     
#> anchor_augmentation_info   7    -none-        list     
#> gev_patch_data             0    -none-        NULL     
#> U_aligned_list             3    -none-        list     
#> W_task_list                1    -none-        list     
#> L_task_list                1    -none-        list     
#> W_hybrid_list              0    -none-        NULL     
#> L_hybrid_list              0    -none-        NULL     
#> U_task_list                0    -none-        NULL     
#> Lambda_task_list           0    -none-        NULL     
#> ._cache                    1    -none-        list     