March 25th, 2026

A targeted review to inform Cerebro’s Pediatric Autism ASDQ Program

Literature Review

Cerebro Research Unit (CRu) Open Modules

Abstract

The Autism Symptom Dimensions Questionnaire (ASDQ) was introduced as a free, open-source, caregiver-report measure intended to capture both broad and specific dimensions of autism symptoms across childhood and adolescence. The current evidence base supports ASDQ most strongly as a platform for screening, structured symptom characterization, cohort enrichment, and longitudinal monitoring, rather than as a stand-alone diagnostic substitute. The 2023 development study established a 39-item instrument with a general autism factor and nine specific symptom factors, good measurement invariance, and preliminary screening utility [1]. A 2025 replication and extension study in larger samples showed replicable factor structure, strong validity, good screening efficiency, good short- and long-term test-retest stability, and potential to detect reliable change, while also showing weak diagnostic efficiency in referred clinical contexts [2]. Early cross-cultural validation in China retained the 39-item, 9-factor structure and found strong reliability and screening accuracy [3]. Adjacent literature clarifies the product opportunity around Cerebro ASDQ. First, dimensional models of autism symptoms are better aligned with contemporary psychopathology structure than binary symptom summaries alone [4]. Second, caregiver-report measures can be clinically useful for repeated outcome tracking when they are purpose-built for sensitivity to change [5,6]. Third, digital deployment of autism screening instruments improves workflow fidelity and reduces operational loss, but questionnaire-only screening remains limited in real-world diagnostic performance and may show subgroup disparities [7,8]. 
Finally, multimodal digital phenotyping studies demonstrate that objective behavioral features obtained from mobile devices can achieve strong accuracy and clinically meaningful correspondence with standardized developmental and autism measures, especially when combined with caregiver-report information [9-11]. Taken together, the literature supports Cerebro ASDQ as a clinically credible digital phenotyping and monitoring layer with the strongest near-term claim set in screening, symptom profiling, and longitudinal measurement, and with the clearest AI roadmap in combining ASDQ with objective behavioral features rather than positioning ASDQ alone as an autonomous diagnostic system.

Keywords: autism spectrum disorder; ASDQ; psychometrics; digital phenotyping; caregiver-report measurement; machine learning; longitudinal monitoring

Literature selection note. This is a targeted narrative review, not a formal systematic review. The anchor ASDQ publication and 10 additional peer-reviewed papers were selected from PubMed and PubMed Central for direct relevance to (1) ASDQ psychometrics and transportability, (2) dimensional autism measurement and change-sensitive caregiver-report instruments, (3) digital implementation of autism screening, and (4) multimodal digital phenotyping and AI-enabled assessment.

Introduction

Autism assessment remains constrained by long wait times, cost, uneven access to specialists, and the inherent heterogeneity of the phenotype. For digital-health products in this area, the central scientific question is not whether a questionnaire can replace expert diagnosis, but whether a digital measure can reliably support triage, characterize symptom dimensions in a clinically interpretable way, and generate structured longitudinal data suitable for decision support and research. Within that framing, ASDQ is a notable development because it was designed as an open-source instrument with explicit dimensional coverage of autism symptomatology rather than as a proprietary black-box screener [1,2].


Image: Cerebro pediatric autism ASDQ web application - https://cerebroasdq.com/


The broader conceptual rationale for preserving dimensional outputs is strong. In a large latent-variable analysis of 14,744 siblings in the Interactive Autism Network, a hybrid structure consisting of an ASD versus non-ASD category plus two symptom dimensions—social communication/interaction and restricted/repetitive behavior—fit better than purely categorical or purely dimensional alternatives [4]. For Cerebro ASDQ, this argues for retaining subdomain-level representation rather than reducing the instrument to a single binary screen. For neuroscience stakeholders, that choice increases phenotype resolution for downstream association studies and stratification. For AI stakeholders, it creates a more informative feature space for multimodal modeling than a one-score architecture.

ASDQ as an open, dimensional measurement backbone

The anchor ASDQ study evaluated an expanded 39-item version in 1,467 children and adolescents aged 2 to 17 years, including 104 autistic participants. Factor analyses identified a general ASD factor and nine specific symptom factors with good measurement invariance across demographic groups, and the resulting scales showed good-to-excellent overall and conditional reliability [1]. Exploratory predictive analyses suggested useful screening performance in population and at-risk contexts, which supports the instrument's use as a structured intake and triage layer rather than only as a research questionnaire [1].

The 2025 follow-up materially strengthened the evidence base. Across two new samples totaling 3,366 youth aged 2 to 17 years, including 1,399 autistic participants, the ASDQ showed replicable factor structure, strong convergent and discriminant validity, and good screening efficiency [2]. Critically, this study added measurement properties that matter for clinical operations and research follow-up: approximately 4-month test-retest stability was good to excellent, approximately 18-month stability remained adequate to good, and reliable-change indices suggested that medium-to-large score shifts can be interpreted as real symptom change rather than noise [2]. At the same time, the authors explicitly reported weak diagnostic efficiency in referred clinical settings and concluded that optimal use is in screening and detailed characterization, with potential for monitoring response to intervention [2]. That conclusion is exactly the right evidentiary boundary for Cerebro ASDQ.
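The reliable-change indices mentioned above follow the standard Jacobson-Truax logic: a difference score is judged against the standard error of the difference between two administrations, which is derived from the scale's baseline standard deviation and test-retest reliability. A minimal sketch of that computation, using illustrative values rather than ASDQ-specific norms:

```python
import math

def reliable_change_index(score_t1: float, score_t2: float,
                          sd_baseline: float, reliability: float) -> float:
    """Jacobson-Truax RCI: difference score divided by the standard
    error of the difference between two administrations."""
    se_measurement = sd_baseline * math.sqrt(1.0 - reliability)
    se_difference = math.sqrt(2.0) * se_measurement
    return (score_t2 - score_t1) / se_difference

# Illustrative inputs only; not ASDQ norms or published reliabilities.
rci = reliable_change_index(score_t1=60.0, score_t2=48.0,
                            sd_baseline=10.0, reliability=0.90)
reliable = abs(rci) > 1.96  # 95% criterion: shift unlikely to be noise
```

Under this criterion, only score shifts large relative to measurement error are flagged as real change, which is why the 2025 finding that medium-to-large ASDQ shifts exceed the reliable-change threshold matters for monitoring use.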