Jie Chen, Lei Nie, Shiowjen Lee, Haitao Chu, Haijun Tian, Yan Wang, Weili He, Thomas Jemielita, Susan Gruber, Yang Song, Roy Tamura, Lu Tian, Yihua Zhao, Yong Chen, Mark van der Laan, Hana Lee Oct 10 2024
stat.AP arXiv:2410.06585v1
Developing drugs for rare diseases presents unique challenges from a statistical perspective. These challenges may include slowly progressive diseases with unmet medical needs, poorly understood natural history, small population sizes, diversified phenotypes and genotypes within a disorder, and a lack of appropriate surrogate endpoints to measure clinical benefit. The Real-World Evidence (RWE) Scientific Working Group of the American Statistical Association Biopharmaceutical Section has assembled a research team to assess the landscape, including the challenges, possible strategies to address them, and the role of real-world data (RWD) and RWE in rare disease drug development. This paper first reviews current regulations by regulatory agencies worldwide and then discusses in more detail the challenges, from a statistical perspective, in the design, conduct, and analysis of rare disease clinical trials. After outlining an overall development pathway for rare disease drugs, corresponding strategies to address the aforementioned challenges are presented. Other considerations for generating relevant evidence for regulatory decision-making on drugs for rare diseases are also discussed. The accompanying paper discusses how RWD and RWE can be used to improve the efficiency of rare disease drug development.
Jie Chen, Susan Gruber, Hana Lee, Haitao Chu, Shiowjen Lee, Haijun Tian, Yan Wang, Weili He, Thomas Jemielita, Yang Song, Roy Tamura, Lu Tian, Yihua Zhao, Yong Chen, Mark van der Laan, Lei Nie Oct 10 2024
stat.AP arXiv:2410.06586v1
Real-world data (RWD) and real-world evidence (RWE) have been increasingly used in medical product development and regulatory decision-making, especially for rare diseases. After outlining the challenges and possible strategies to address the challenges in rare disease drug development (see the accompanying paper), the Real-World Evidence (RWE) Scientific Working Group of the American Statistical Association Biopharmaceutical Section reviews the roles of RWD and RWE in clinical trials for drugs treating rare diseases. This paper summarizes relevant guidance documents and frameworks by selected regulatory agencies and the current practice on the use of RWD and RWE in natural history studies and the design, conduct, and analysis of rare disease clinical trials. A targeted learning roadmap for rare disease trials is described, followed by case studies on the use of RWD and RWE to support a natural history study and marketing applications in various settings.
Jie Chen, Junrui Di, Nadia Daizadeh, Ying Lu, Hongwei Wang, Yuan-Li Shen, Jennifer Kirk, Frank W. Rockhold, Herbert Pang, Jing Zhao, Weili He, Andrew Potter, Hana Lee Oct 10 2024
stat.AP arXiv:2410.06591v1
There has been a growing trend for activities relating to clinical trials to take place at locations other than traditional trial sites (hence decentralized clinical trials, or DCTs), some of them in settings of real-world clinical practice. Although DCTs offer numerous benefits, decentralization also has implications for a number of issues relating to the design, conduct, and analysis of such trials. The Real-World Evidence Scientific Working Group of the American Statistical Association Biopharmaceutical Section has been reviewing the field of DCTs and provides in this paper considerations for decentralized trials from a statistical perspective. The paper first discusses selected critical decentralized elements that may have statistical implications for a trial and then summarizes regulatory guidance, frameworks, and initiatives on DCTs. Further discussion focuses on the design (including construction of the estimand), implementation, the statistical analysis plan (including missing-data handling), and the reporting of safety events. Some additional considerations (e.g., ethical considerations, technology infrastructure, study oversight, data security and privacy, and regulatory compliance) are also briefly discussed. This paper is intended to provide statistical considerations for decentralized trials of medical products to support regulatory decision-making.
Aug 02 2024
stat.ME arXiv:2408.00177v1
Correlated survival data are prevalent in various clinical settings and have been extensively discussed in the literature. One of the most common types of correlated survival data is clustered survival data, where the survival times of individuals in a cluster are associated. Our study is motivated by invasive mechanical ventilation data from different intensive care units (ICUs) in Ontario, Canada, forming multiple clusters. The survival times of patients within the same ICU cluster are correlated. To address this association, we introduce a shared frailty log-logistic accelerated failure time model that accounts for intra-cluster correlation through a cluster-specific random intercept. We present a novel, fast variational Bayes (VB) algorithm for parameter inference and evaluate its performance using simulation studies varying the number of clusters and their sizes. We further compare the performance of our proposed VB algorithm with the h-likelihood method and a Markov chain Monte Carlo (MCMC) algorithm. The proposed algorithm delivers satisfactory results and demonstrates computational efficiency over the MCMC algorithm. We apply our method to the ICU ventilation data from Ontario to investigate the ICU site random effect on ventilation duration.
Apr 09 2024
stat.ME arXiv:2404.04696v1
Dynamic treatment regimes (DTRs) have received increasing interest in recent years. DTRs are sequences of treatment decision rules tailored to patient-level information. The main goal of a DTR study is to identify an optimal DTR, a sequence of treatment decision rules that yields the best expected clinical outcome. Q-learning is one of the most popular regression-based methods for estimating the optimal DTR. However, it has rarely been studied in error-prone settings, where the patient information is contaminated with measurement error. In this paper, we study the effect of covariate measurement error on Q-learning and propose a method to correct for measurement error in Q-learning. Simulation studies are conducted to assess the performance of the proposed method. We illustrate the use of the proposed method in an application to data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study.
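The backward-induction step of Q-learning described above can be sketched in a few lines. The generative model, variable names, and two-stage linear Q-functions below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Stage-1 covariate and randomized treatment (coded +1 / -1)
x1 = rng.normal(size=n)
a1 = rng.choice([-1, 1], size=n)
# Stage-2 covariate evolves from stage-1 history
x2 = x1 + rng.normal(scale=0.5, size=n)
a2 = rng.choice([-1, 1], size=n)

# Final outcome under a toy generative model: the optimal stage-2 rule
# is "a2 = +1 when x2 > 0.5, else a2 = -1"
y = x1 + a1 * x1 + a2 * (x2 - 0.5) + rng.normal(scale=0.2, size=n)

def design2(a):
    # Stage-2 history with treatment main effect and interaction
    return np.column_stack([np.ones(n), x1, a1, a1 * x1, x2, a, a * x2])

# Stage 2: least-squares fit of the Q-function
beta2, *_ = np.linalg.lstsq(design2(a2), y, rcond=None)

# Pseudo-outcome: predicted value under the best stage-2 decision
q2_pos = design2(np.ones(n)) @ beta2
q2_neg = design2(-np.ones(n)) @ beta2
v2 = np.maximum(q2_pos, q2_neg)

# Stage 1: regress the pseudo-outcome on stage-1 history
X1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
beta1, *_ = np.linalg.lstsq(X1, v2, rcond=None)

# Estimated optimal stage-2 rule
frac = (q2_pos >= q2_neg).mean()
print("fraction treated at stage 2:", round(frac, 2))  # roughly P(x2 > 0.5)
```

The fitted stage-2 interaction coefficients imply the rule "treat when x2 > 0.5", and the stage-1 fit uses the maximized stage-2 Q-function as its outcome; covariate measurement error would contaminate x1 and x2 in both regressions, which is the problem the paper's correction addresses.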
Apr 09 2024
stat.ME arXiv:2404.04697v1
The study of precision medicine involves dynamic treatment regimes (DTRs), which are sequences of treatment decision rules recommended by taking patient-level information as input. The primary goal of a DTR study is to identify an optimal DTR, a sequence of treatment decision rules that leads to the best expected clinical outcome. Statistical methods have been developed in recent years to estimate an optimal DTR, including Q-learning, a regression-based method in the DTR literature. Although there are many studies concerning Q-learning, little attention has been given to settings with noisy data, such as misclassified outcomes. In this paper, we investigate the effect of outcome misclassification on Q-learning and propose a correction method to accommodate the misclassification effect. Simulation studies are conducted to demonstrate the satisfactory performance of the proposed method. We illustrate the proposed method with two examples, from the National Health and Nutrition Examination Survey I Epidemiologic Follow-up Study and a smoking cessation program.
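As a toy illustration of why outcome misclassification matters, and of the flavor of correction involved, the classical matrix-method (Rogan-Gladen) adjustment below recovers a binary outcome's mean from noisy labels when sensitivity and specificity are assumed known; the paper's Q-learning correction is more involved than this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sens, spec = 0.90, 0.85        # assumed-known misclassification probabilities

# True binary outcomes with prevalence 0.3, then imperfectly observed labels
y = rng.random(n) < 0.3
observed_if_pos = rng.random(n) < sens         # a true 1 is recorded as 1
observed_if_neg = rng.random(n) < (1 - spec)   # a true 0 is recorded as 1
y_star = np.where(y, observed_if_pos, observed_if_neg)

# The naive mean targets E[Y*] = sens * p + (1 - spec) * (1 - p), not p;
# inverting this relation gives the corrected estimate.
naive = y_star.mean()
corrected = (naive + spec - 1) / (sens + spec - 1)
print(round(naive, 3), round(corrected, 3))    # naive ~ 0.375, corrected ~ 0.300
```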
We consider the basic statistical problem of detecting truncation of the uniform distribution on the Boolean hypercube by juntas. More concretely, we give upper and lower bounds on the problem of distinguishing between i.i.d. sample access to either (a) the uniform distribution over $\{0,1\}^n$, or (b) the uniform distribution over $\{0,1\}^n$ conditioned on the satisfying assignments of a $k$-junta $f: \{0,1\}^n\to\{0,1\}$. We show that (up to constant factors) $\min\{2^k + \log{n\choose k}, {2^{k/2}\log^{1/2}{n\choose k}}\}$ samples suffice for this task and also show that a $\log{n\choose k}$ dependence on sample complexity is unavoidable. Our results suggest that testing junta truncation requires learning the set of relevant variables of the junta.
Graphical models have demonstrated strong performance in numerous tasks ranging from biological analysis to recommender systems. However, graphical models with hub nodes are computationally difficult to fit, particularly when the dimension of the data is large. To efficiently estimate hub graphical models, we introduce a two-phase algorithm. The proposed algorithm first generates a good initial point via a dual alternating direction method of multipliers (ADMM) and then warm-starts a semismooth Newton (SSN) based augmented Lagrangian method (ALM) to compute a solution that is accurate enough for practical tasks. The sparsity structure of the generalized Jacobian ensures that the algorithm can obtain a good solution very efficiently. Comprehensive experiments on both synthetic and real data show that it clearly outperforms existing state-of-the-art algorithms. In particular, in some high-dimensional tasks, it saves more than 70% of the execution time while still achieving high-quality estimation.
Jul 04 2023
stat.AP arXiv:2307.00190v1
A Real-World Evidence (RWE) Scientific Working Group (SWG) of the American Statistical Association Biopharmaceutical Section (ASA BIOP) has been reviewing statistical considerations for the generation of RWE to support regulatory decision-making. As part of the effort, the working group is addressing estimands in RWE studies. Constructing the right estimand -- the target of estimation -- which reflects the research question and the study objective, is one of the key components in formulating a clinical study. ICH E9(R1) describes statistical principles for constructing estimands in clinical trials with a focus on five attributes -- population, treatment, endpoints, intercurrent events, and population-level summary. However, defining estimands for clinical studies using real-world data (RWD), i.e., RWE studies, requires additional considerations due to, for example, heterogeneity of study population, complexity of treatment regimes, different types and patterns of intercurrent events, and complexities in choosing study endpoints. This paper reviews the essential components of estimands and causal inference framework, discusses considerations in constructing estimands for RWE studies, highlights similarities and differences in traditional clinical trial and RWE study estimands, and provides a roadmap for choosing appropriate estimands for RWE studies.
The brain continually reorganizes its functional network to adapt to post-stroke functional impairments. Previous studies using static modularity analysis have presented global-level behavior patterns of this network reorganization. However, it is far from understood how the brain reconfigures its functional network dynamically following a stroke. This study collected resting-state functional MRI data from 15 stroke patients, divided into mild (n = 6) and severe (n = 9) subgroups based on their clinical symptoms. Additionally, 15 age-matched healthy subjects served as controls. By applying a multilayer network method, a dynamic modular structure was identified from a time-resolved functional network. Dynamic network measurements (recruitment, integration, and flexibility) were then calculated to characterize the dynamic reconfiguration of post-stroke brain functional networks and thereby reveal the neural functional rebuilding process. The investigation found that severe patients tended to have reduced recruitment and increased between-network integration, whereas mild patients exhibited low network flexibility and less network integration. Notably, this severity-dependent alteration in network interaction could not be revealed by previous studies using static methods. Clinically, knowledge of the diverse patterns of dynamic adjustment in brain functional networks could help in understanding the underlying mechanisms of the motor, speech, and cognitive functional impairments caused by stroke. The proposed method can be used to evaluate a patient's current brain status and also has the potential to provide insights for prognosis analysis and prediction.
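The flexibility measure mentioned above has a simple definition: the fraction of consecutive time windows in which a node changes its module assignment. A minimal sketch on made-up module labels:

```python
import numpy as np

# Toy module assignments: rows = brain regions (nodes), columns = time windows
labels = np.array([
    [1, 1, 1, 1, 1],   # never switches
    [1, 2, 1, 2, 1],   # switches at every transition
    [1, 1, 2, 2, 2],   # switches once in four transitions
])

# Flexibility: fraction of consecutive windows in which a node changes module
flexibility = (labels[:, 1:] != labels[:, :-1]).mean(axis=1)
print(flexibility)   # node-wise flexibility: 0, 1, and 0.25
```

Recruitment and integration are defined analogously from the module-allegiance matrix (how often pairs of nodes share a module), but flexibility is the simplest of the three to state.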
Mar 20 2023
stat.ME arXiv:2303.09598v2
The log-logistic regression model is one of the most commonly used accelerated failure time (AFT) models in survival analysis, for which statistical inference methods are mainly established under the frequentist framework. Recently, Bayesian inference for log-logistic AFT models using Markov chain Monte Carlo (MCMC) techniques has also been widely developed. In this work, we develop an alternative approach to MCMC methods and infer the parameters of the log-logistic AFT model via a mean-field variational Bayes (VB) algorithm. A piecewise approximation technique is embedded in deriving the VB algorithm to achieve conjugacy. The proposed VB algorithm is evaluated and compared with typical frequentist inferences and MCMC inference using simulated data under various scenarios. A publicly available dataset is employed for illustration. We demonstrate that the proposed VB algorithm can achieve good estimation accuracy and has a lower computational cost compared with MCMC methods.
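For reference, the log-logistic AFT model can also be fit by plain maximum likelihood, which makes explicit the likelihood that the VB and MCMC methods approximate. The sketch below (simulated data, generic scipy optimizer) is illustrative only and is not the paper's VB algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 2000

# log T = b0 + b1 * x + sigma * eps with standard-logistic eps is exactly
# a log-logistic AFT model; apply random right censoring.
x = rng.normal(size=n)
log_t = 1.0 + 0.5 * x + 0.4 * rng.logistic(size=n)
log_c = rng.normal(loc=2.0, scale=1.0, size=n)   # censoring times
obs = np.minimum(log_t, log_c)
event = log_t <= log_c                            # True if the event is observed

def neg_loglik(theta):
    b0, b1, log_sig = theta
    sig = np.exp(log_sig)                         # keep sigma positive
    z = (obs - b0 - b1 * x) / sig
    # Events contribute the logistic log-density, censored points the
    # logistic log-survival 1 / (1 + e^z)
    log_pdf = -z - 2 * np.logaddexp(0.0, -z) - np.log(sig)
    log_surv = -np.logaddexp(0.0, z)
    return -np.where(event, log_pdf, log_surv).sum()

fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat = fit.x[0], fit.x[1]
sigma_hat = np.exp(fit.x[2])
print(round(b0_hat, 2), round(b1_hat, 2), round(sigma_hat, 2))  # near (1.0, 0.5, 0.4)
```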
Deep neural networks (DNNs) have achieved tremendous success in making accurate predictions for computer vision, natural language processing, as well as science and engineering domains. However, it is also well recognized that DNNs sometimes make unexpected, incorrect, but overconfident predictions. This can cause serious consequences in high-stakes applications, such as autonomous driving, medical diagnosis, and disaster response. Uncertainty quantification (UQ) aims to estimate the confidence of DNN predictions beyond prediction accuracy. In recent years, many UQ methods have been developed for DNNs. It is of great practical value to systematically categorize these UQ methods and compare their advantages and disadvantages. However, existing surveys mostly categorize UQ methodologies from a neural network architecture perspective or a Bayesian perspective and ignore the source of uncertainty that each methodology can incorporate, making it difficult to select an appropriate UQ method in practice. To fill the gap, this paper presents a systematic taxonomy of UQ methods for DNNs based on the types of uncertainty sources (data uncertainty versus model uncertainty). We summarize the advantages and disadvantages of the methods in each category. We show how our taxonomy of UQ methodologies can help guide the choice of UQ method in different machine learning problems (e.g., active learning, robustness, and reinforcement learning). We also identify current research gaps and propose several future research directions.
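One concrete way to operationalize the data-versus-model uncertainty split discussed above is the ensemble-based variance decomposition (law of total variance): average the per-model predicted noise variances for aleatoric (data) uncertainty, and take the variance of the per-model means for epistemic (model) uncertainty. A sketch on fabricated ensemble outputs:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated outputs of an M-member ensemble for n inputs: each member
# predicts a mean and a noise variance, as in a deep ensemble regressor.
M, n = 10, 5
pred_means = rng.normal(loc=1.0, scale=0.3, size=(M, n))
pred_vars = np.full((M, n), 0.25)

# Law of total variance: total = data (aleatoric) + model (epistemic)
aleatoric = pred_vars.mean(axis=0)    # average predicted noise variance
epistemic = pred_means.var(axis=0)    # disagreement among ensemble members
total = aleatoric + epistemic
print(np.round(total, 3))
```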
The human severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), causing the COVID-19 disease, has continued to spread all over the world. It menacingly affects not only public health and global economics but also mental health and mood. While the impact of the COVID-19 pandemic has been widely studied, relatively little discussion of the population's sentimental reaction has been available. In this article, we scrape COVID-19-related tweets from the microblogging platform Twitter and examine tweets from Feb 24, 2020 to Oct 14, 2020 in four Canadian cities (Toronto, Montreal, Vancouver, and Calgary) and four U.S. cities (New York, Los Angeles, Chicago, and Seattle). Applying the VADER and NRC approaches, we evaluate sentiment intensity scores and visualize the information over different periods of the pandemic. Sentiment scores for tweets concerning three anti-epidemic measures -- masks, vaccines, and lockdowns -- are computed for comparison. The results for the four Canadian cities are compared with those for the four U.S. cities. We study the causal relationships between the infected cases, the tweet activities, and the sentiment scores of COVID-19-related tweets by integrating the echo state network method with convergent cross-mapping. Our analysis shows that public sentiments regarding COVID-19 vary across time periods and locations. In general, people have a positive mood about COVID-19 and masks but a negative one about vaccines and lockdowns. The causal inference shows that sentiment influences people's activities on Twitter, which are also correlated with the daily number of infections.
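Lexicon-based scorers such as VADER assign each word a valence and aggregate over a tweet. The toy scorer below conveys only the idea: the lexicon values are invented, and the real VADER adds heuristics for negation, intensifiers, punctuation, and capitalization:

```python
# Invented mini-lexicon; the real VADER lexicon is far larger and its
# valence scores are empirically derived.
LEXICON = {"good": 2.0, "safe": 1.5, "positive": 1.0,
           "bad": -2.5, "lockdown": -1.0, "afraid": -2.0}

def sentiment_score(text: str) -> float:
    """Average lexicon valence over a tweet's words; 0.0 if no word matches."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("masks are good and safe"))   # (2.0 + 1.5) / 2 = 1.75
print(sentiment_score("afraid of the lockdown"))    # (-2.0 - 1.0) / 2 = -1.5
```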
Semi-supervised learning aims to learn prediction models from both labeled and unlabeled samples. There has been extensive research in this area. Among existing work, generative mixture models fit with Expectation-Maximization (EM) are a popular method owing to their clear statistical properties. However, the existing literature on EM-based semi-supervised learning largely focuses on unstructured prediction, assuming that samples are independent and identically distributed. Studies of EM-based semi-supervised approaches to structured prediction are limited. This paper aims to fill the gap through a comparative study of unstructured and structured methods in EM-based semi-supervised learning. Specifically, we compare their theoretical properties and find that both methods can be viewed as a generalization of self-training with soft class assignment of unlabeled samples, but the structured method additionally incorporates structural constraints into the soft class assignment. We conducted a case study on real-world flood mapping datasets to compare the two methods. Results show that structured EM is more robust to class confusion caused by noise and obstacles in the features in the context of the flood mapping application.
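The self-training view of EM mentioned above can be made concrete for the unstructured case: the E-step softly labels the unlabeled points, and the M-step refits class parameters using the labeled points (hard weights) plus those soft labels. A 1-D two-class sketch under illustrative assumptions (known unit variances, equal priors):

```python
import numpy as np

rng = np.random.default_rng(2)

# Labeled and unlabeled 1-D samples from two unit-variance Gaussians
x_lab = np.concatenate([rng.normal(-2, 1, 20), rng.normal(2, 1, 20)])
y_lab = np.array([0] * 20 + [1] * 20)
x_unl = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

# Initialize class means from the labeled data only
mu = np.array([x_lab[y_lab == 0].mean(), x_lab[y_lab == 1].mean()])

for _ in range(20):
    # E-step: soft class assignment (responsibilities) for unlabeled samples
    logp = -0.5 * (x_unl[:, None] - mu[None, :]) ** 2
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)

    # M-step: update means using hard labels plus soft responsibilities
    for k in (0, 1):
        w_lab = (y_lab == k).astype(float)
        num = (w_lab * x_lab).sum() + (r[:, k] * x_unl).sum()
        den = w_lab.sum() + r[:, k].sum()
        mu[k] = num / den

print(mu.round(2))   # close to the true means (-2, 2)
```

The structured variant the paper studies would additionally couple the responsibilities of neighboring samples (e.g., spatially adjacent pixels in flood mapping) rather than computing each one independently.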
Apr 06 2020
stat.AP arXiv:2004.01252v1
Since the beginning of 2020, the coronavirus disease 2019 (COVID-19) has spread rapidly in the city of Wuhan, P.R. China, and subsequently across the world. The swift spread of the virus is largely attributed to its stealth transmissions, in which infected patients may be asymptomatic. Undetected transmissions present a remarkable challenge for the containment of the virus and pose an appalling threat to the public. An urgent question that has been asked by the public is "Should I be tested for COVID-19 if I am sick?". While different regions established their own criteria for screening infected cases, the screening criteria have been modified based on new evidence and understanding of the virus as well as the availability of resources. The shortage of test kits and medical personnel has considerably limited the capacity for testing. Public health officials and clinicians face a dilemma of balancing limited resources against unlimited demands. On one hand, they strive to achieve the best outcome by optimizing the use of scant resources. On the other hand, they are challenged by patients' frustrations and anxieties, stemming from concerns about not being tested for COVID-19 for not meeting the definition of PUI (person under investigation). In this paper, we evaluate the situation from a statistical viewpoint by factoring in the uncertainty and inaccuracy of the test, an issue often overlooked by the general public. We aim to shed light on the difficult situation by providing evidence-based reasoning from the statistical angle, and we expect this examination to help the general public understand and assess the situation rationally. Most importantly, the development offers recommendations for physicians to make sensible evaluations and optimally use the limited resources for the best medical outcomes.
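A core statistical point here is that a test's usefulness depends on the pre-test probability of infection, not only on its accuracy. Bayes' rule gives the positive predictive value (PPV); the sensitivity, specificity, and prevalence values below are hypothetical:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(infected | test positive) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test characteristics; prevalence reflects the screening pool
print(round(ppv(0.95, 0.95, 0.01), 3))   # broad screening: PPV = 0.161
print(round(ppv(0.95, 0.95, 0.30), 3))   # restricted to PUIs: PPV = 0.891
```

With 1% prevalence, fewer than one in five positives is a true case even for a 95%-sensitive, 95%-specific test, which is one statistical argument for screening criteria that concentrate testing on higher-risk individuals when kits are scarce.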
Learning over massive data stored in different locations is essential in many real-world applications. However, sharing data is full of challenges due to increasing demands for privacy and security with the growing use of smart mobile devices and IoT devices. Federated learning provides a potential solution for privacy-preserving and secure machine learning by jointly training a global model without uploading the data distributed across multiple devices to a central server. However, most existing work on federated learning adopts machine learning models with full-precision weights, and almost all these models contain a large number of redundant parameters that do not need to be transmitted to the server, incurring excessive communication costs. To address this issue, we propose a federated trained ternary quantization (FTTQ) algorithm, which optimizes the quantized networks on the clients through a self-learning quantization factor. Theoretical proofs of the convergence of the quantization factors, the unbiasedness of FTTQ, and a reduced weight divergence are given. On the basis of FTTQ, we propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems. Empirical experiments are conducted to train widely used deep learning models on publicly available datasets, and our results demonstrate that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data compared with canonical federated learning algorithms.
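Ternary quantization compresses each weight tensor to a scale factor times values in {-1, 0, +1}, so only the scale and a two-bit code per weight need to be communicated. The sketch below uses the classical threshold-based closed form for the scale; FTTQ instead learns its quantization factor on each client:

```python
import numpy as np

def ternarize(w, delta_ratio=0.05):
    """Threshold-based ternary quantization: w -> alpha * {-1, 0, +1}.

    The closed-form scale (mean magnitude above a threshold) is the
    classical choice, used here only for illustration.
    """
    delta = delta_ratio * np.abs(w).max()       # zero-out threshold
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

rng = np.random.default_rng(3)
w = rng.normal(scale=0.1, size=1000)
wq = ternarize(w)
print("levels:", np.unique(wq))   # {-alpha, 0, +alpha}: 2 bits per weight
```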
Aug 05 2019
stat.AP arXiv:1908.00654v1
In oncology clinical trials, characterizing the long-term overall survival (OS) benefit of an experimental drug or treatment regimen (experimental group) is often unobservable if some patients in the control group switch to drugs in the experimental group and/or other cancer treatments after disease progression. A key question often raised by payers and reimbursement agencies is how to estimate the true benefit of the experimental drug on overall survival that would have been estimated had there been no treatment switching. Several commonly used statistical methods are available to estimate the overall survival benefit while adjusting for treatment switching, ranging from naive exclusion and censoring approaches to more advanced methods including inverse probability of censoring weighting (IPCW), the iterative parameter estimation (IPE) algorithm, and rank-preserving structural failure time models (RPSFTM). However, many clinical trials now have patients switching to treatment regimens other than the test drugs, and the existing methods cannot handle such more complicated scenarios. To address this challenge, we propose two additional methods: stratified RPSFTM and random-forest-based prediction. A simulation study is conducted to assess the properties of the existing methods along with the two newly proposed approaches.
Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Jinfeng Yi, Zijiang Yang, Mingyan Liu, Bo Li, Dawn Song Recent studies show that Deep Reinforcement Learning (DRL) models are vulnerable to adversarial attacks, which attack DRL models by adding small perturbations to the observations. However, some attacks assume full availability of the victim model, and some require a huge amount of computation, making them less feasible for real-world applications. In this work, we further explore the vulnerabilities of DRL by studying other aspects of attacks on DRL using realistic and efficient attacks. First, we adapt and propose efficient black-box attacks for when we do not have access to the DRL model parameters. Second, to address the high computational demands of existing attacks, we introduce efficient online sequential attacks that exploit temporal consistency across consecutive steps. Third, we explore the possibility of an attacker perturbing other aspects of the DRL setting, such as the environment dynamics. Finally, to account for imperfections in how an attacker would inject perturbations in the physical world, we devise a method for generating robust physical perturbations that can be printed. The attack is evaluated on a real-world robot under various conditions. We conduct extensive experiments both in simulation, including Atari games, robotics, and autonomous driving, and on real-world robotics, to compare the effectiveness of the proposed attacks with baseline approaches. To the best of our knowledge, we are the first to apply adversarial attacks on DRL systems to physical robots.
Oct 16 2018
stat.AP arXiv:1810.05967v2
Paleoclimate reconstructions of the Common Era (1-2000 AD) provide critical context for recent warming trends. This work leverages integrated nested Laplace approximations (INLA) to conduct inference under a Bayesian hierarchical model using data from three sources: a state-of-the-art proxy database (PAGES 2k), surface temperature observations (HadCRUT4), and the latest estimates of external forcings. INLA's computational efficiency allows us to explore several model formulations (with or without forcings, explicitly modeling internal variability or not), as well as five data reduction techniques. Two different validation exercises find a small impact of data reduction choices but a large impact of model choice, with the best results for the two models that incorporate external forcings. These models confirm that man-made greenhouse gas emissions are the largest contributor to temperature variability over the Common Era, followed by volcanic forcing; solar effects are indistinguishable from zero. INLA provides an efficient way to estimate the posterior mean, comparable with the much costlier Markov chain Monte Carlo (MCMC) procedure, but with wider uncertainty bounds. We recommend using it to explore model designs, but full MCMC solutions should be used for proper uncertainty quantification.
Xianyan Jia, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie, Zhenyu Guo, Yuanzhou Yang, Liwei Yu, Tiegang Chen, Guangxiao Hu, Shaohuai Shi, Xiaowen Chu Synchronized stochastic gradient descent (SGD) optimizers with data parallelism are widely used in training large-scale deep neural networks. Although using larger mini-batch sizes can improve system scalability by reducing the communication-to-computation ratio, it may hurt the generalization ability of the models. To this end, we build a highly scalable deep learning training system for dense GPU clusters with three main contributions: (1) We propose a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy. (2) We propose an optimization approach for extremely large mini-batch sizes (up to 64k) that can train CNN models on the ImageNet dataset without losing accuracy. (3) We propose highly optimized all-reduce algorithms that achieve up to 3x and 11x speedups on AlexNet and ResNet-50, respectively, compared with NCCL-based training on a cluster with 1024 Tesla P40 GPUs. On training ResNet-50 for 90 epochs, the state-of-the-art GPU-based system with 1024 Tesla P100 GPUs took 15 minutes and achieved 74.9% top-1 test accuracy, and a KNL-based system with 2048 Intel KNLs took 20 minutes and achieved 75.4% accuracy. Our training system achieves 75.8% top-1 test accuracy in only 6.6 minutes using 2048 Tesla P40 GPUs. When training AlexNet for 95 epochs, our system achieves 58.7% top-1 test accuracy within 4 minutes, which also outperforms all other existing systems.
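The all-reduce primitive being optimized here aggregates gradients across workers. A ring-based all-reduce (the textbook baseline that highly optimized variants improve on) runs a reduce-scatter phase followed by an all-gather phase, each taking n - 1 steps; the small simulation below is illustrative, not the paper's algorithm:

```python
import numpy as np

def ring_allreduce(bufs):
    """Ring all-reduce: n workers, each holding its gradient split into n
    chunks; after 2(n - 1) communication steps every worker holds the
    elementwise sum of all workers' gradients."""
    n = len(bufs)
    # Phase 1: reduce-scatter -- circulate partial sums around the ring
    # (a real system runs each step's sends in parallel; the inner loop
    #  order here is sequential only for simulation)
    for step in range(n - 1):
        for w in range(n):
            c = (w - step) % n              # chunk worker w forwards this step
            bufs[(w + 1) % n][c] = bufs[(w + 1) % n][c] + bufs[w][c]
    # Phase 2: all-gather -- circulate the completed chunks
    for step in range(n - 1):
        for w in range(n):
            c = (w + 1 - step) % n
            bufs[(w + 1) % n][c] = bufs[w][c]
    return bufs

# Three workers, each with a gradient split into three one-element chunks
workers = [[np.array([float(10 * i + c)]) for c in range(3)] for i in range(3)]
out = ring_allreduce(workers)
print([float(chunk[0]) for chunk in out[0]])   # [30.0, 33.0, 36.0] on every worker
```

Each worker sends only 2(n - 1)/n of the gradient size in total, independent of n, which is why ring-style algorithms scale to large clusters; the paper's contribution is further optimizing this primitive for its specific topology.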
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the $\mathcal{L}_p$ distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of $\mathcal{L}_p$ distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research effort. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. With AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.