1. Performance analysis of EM-MPM and K-means clustering in 3D ultrasound breast image segmentation (Yang, Huanyi, 05 1900)
Indiana University-Purdue University Indianapolis (IUPUI) / Mammographic density is an important risk factor for breast cancer; detecting and screening it at an early stage could help save lives. Analyzing breast density distribution requires a good segmentation algorithm. In this thesis, we compared two widely used segmentation algorithms, EM-MPM and K-means clustering, applying them to twenty cases of synthetic phantom ultrasound tomography (UST) and nine cases of clinical mammogram and UST images. The synthetic phantom comparison showed that EM-MPM outperforms K-means clustering in segmentation accuracy: its results fit the ground-truth data closely, with a superior Tanimoto coefficient and parenchyma percentage. EM-MPM uses a Bayesian prior that takes advantage of the 3D structure and finds a better-localized segmentation. It performs significantly better on highly dense tissue scattered within low-density tissue and on volumes with low contrast between high- and low-density tissues. The clinical mammogram comparison again shows that EM-MPM outperforms K-means clustering, identifying dense tissue more clearly and accurately. The superior EM-MPM results in this study point to promising applications in density-proportion measurement and cancer-risk evaluation.
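The Tanimoto coefficient cited in the abstract is the intersection-over-union of a segmentation mask and the ground truth. Below is a minimal sketch of that computation on binary 3D masks; the random volumes are stand-ins for illustration, not the thesis data:

```python
import numpy as np

def tanimoto(seg: np.ndarray, truth: np.ndarray) -> float:
    """Tanimoto (Jaccard) coefficient of two binary masks:
    |A & B| / |A | B|. Returns 1.0 when both masks are empty."""
    seg = seg.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(seg, truth).sum()
    if union == 0:
        return 1.0
    return np.logical_and(seg, truth).sum() / union

# Hypothetical 3D volumes: 1 = dense tissue, 0 = background.
seg = np.random.rand(64, 64, 64) > 0.5
truth = np.random.rand(64, 64, 64) > 0.5
print(f"Tanimoto coefficient: {tanimoto(seg, truth):.3f}")
```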
2. Mining Biomedical Literature to Extract Pharmacokinetic Drug-Drug Interactions (Karnik, Shreyas, 03 February 2014)
Indiana University-Purdue University Indianapolis (IUPUI) / Polypharmacy is common in clinical practice, so there is a high chance that co-administered drugs will interfere with each other; this phenomenon is called a drug-drug interaction (DDI). A DDI occurs when one administered drug changes another's pharmacokinetic (PK) or pharmacodynamic (PD) response. DDIs can reduce the overall effectiveness of a drug or, at times, pose a risk of serious side effects to patients, complicating both drug development and clinical patient care. The biomedical literature is a rich source of in-vitro and in-vivo DDI reports, and there is a growing need for automated methods to extract DDI-related information from unstructured text. In this work we present an ontology (the PK ontology), which defines guidelines for annotating PK DDI studies. Using the ontology, we assembled a corpus of PK DDI studies, which serves as an excellent resource for training machine-learning-based DDI extraction algorithms. Finally, we demonstrate the use of the PK ontology and corpus for extracting PK DDIs from the biomedical literature using machine learning algorithms.
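The abstract does not spell out the extraction pipeline, but a common baseline for corpus-trained DDI extraction is a sentence-level classifier over n-gram features. A minimal sketch with scikit-learn follows; the example sentences and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical annotated sentences: 1 = reports a PK DDI, 0 = does not.
sentences = [
    "Ketoconazole increased midazolam AUC 16-fold.",
    "Rifampin reduced the Cmax of verapamil by 60 percent.",
    "The tablet formulation was stable at room temperature.",
    "No change in clearance was observed with coadministration.",
]
labels = [1, 1, 0, 0]

# Bag-of-ngrams features feeding a linear SVM classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["Fluconazole raised the AUC of phenytoin."]))
```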
3. Characterizing software components using evolutionary testing and path-guided analysis (McNeany, Scott Edward, 16 December 2013)
Indiana University-Purdue University Indianapolis (IUPUI) / Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing, but only as far as finding best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore makes two contributions to performance testing of software systems. First, it demonstrates how ET and genetic algorithms (GAs), which are search heuristics for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Second, it demonstrates how this approach can identify local minimum and maximum execution times, providing a more detailed characterization of software performance. Results from applying our approach to example software applications show that it can characterize different execution paths in relatively short amounts of time. The thesis also examines a modified exhaustive approach that can be plugged in when the constraint solver cannot provide the information needed to target specific paths.
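As a rough illustration of the ET loop described above (selection, crossover, and mutation driving inputs toward extreme execution times), here is a minimal GA sketch. The target function and all parameters are invented stand-ins for real code under test, and the constraint-solver path targeting is not reproduced:

```python
import random
import time

def target(x):
    # Toy function with branch-dependent cost, standing in for code under test.
    if x % 3 == 0:
        return sum(i * i for i in range(x % 10000))
    return x * x

def fitness(x):
    # Fitness = measured execution time of the target on input x.
    start = time.perf_counter()
    target(x)
    return time.perf_counter() - start

def evolve(pop_size=20, generations=15):
    pop = [random.randrange(1_000_000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)           # natural selection
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                      # crossover
            if random.random() < 0.2:
                child ^= 1 << random.randrange(20)    # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print("Input with longest observed runtime:", evolve())
```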
4. Modeling, monitoring and optimization of discrete event systems using Petri nets (Yan, Jiaxiang, 29 January 2014)
Indiana University-Purdue University Indianapolis (IUPUI) / M.S.E.C.E., Purdue University, May 2013. Major Professor: Lingxi Li. In recent decades, research on discrete event systems (DESs) has attracted more and more attention because of the fast development of intelligent control strategies, which combine conventional control with discrete decision-making processes that simulate human decision making. Because of the scale and complexity of common DESs, dedicated models, monitoring methods, and optimal control strategies are necessary. Among DES models, Petri nets are noted for their advantages in handling asynchronous processes, and they have been widely applied in intelligent transportation systems (ITS) and communication technology in recent years. By encoding the Petri net state, we can also enable fault detection and identification in DESs and mitigate potential human errors. This thesis studies several problems in the context of DESs modeled by Petri nets, focusing on systematic modeling, asynchronous monitoring, and the design of optimal control strategies.

The thesis starts with the systematic modeling of ITS. A microscopic model of a signalized intersection and its two-layer timed Petri net representation are proposed, where the first layer represents the intersection and the second layer represents the traffic light system. The representation involves both deterministic and stochastic transitions. Its detailed operation is described, and its improvements over previous models are demonstrated by comparison.

We then study the asynchronous monitoring of sensor networks. We propose an event sequence reconstruction algorithm for a given sensor network based on asynchronous observations of its state changes. We assume the sensor network is modeled as a Petri net and the asynchronous observations take the form of state (token) changes at different places in the net. More specifically, the observed sequences of state changes are provided by local sensors and are asynchronous, i.e., they contain only partial information about the ordering of the state changes that occur. We propose an approach that partitions the given net into several subnets and reconstructs the event sequence for each subnet. We then develop an algorithm that reconstructs event sequences for the entire net consistent with: 1) the asynchronous observations of state changes; 2) the event sequences of each subnet; and 3) the structure of the given Petri net. We also discuss the algorithmic complexity.

The final problem studied in this thesis is the optimal design of Petri net controllers with fault tolerance. In particular, we consider the detection and identification of multiple faults in Petri nets with state machine structures (i.e., every transition in the net has exactly one input place and one output place). We develop approximation algorithms to design a fault-tolerant Petri net controller that achieves the minimal number of connections with the original controller. A design example for an automated guided vehicle (AGV) system illustrates our approaches.
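For readers unfamiliar with Petri net semantics, the core firing rule is simple: a transition is enabled when every input place holds a token, and firing moves tokens from input places to output places. A minimal sketch follows, with a hypothetical traffic-light fragment loosely echoing the ITS model above (not the thesis's actual two-layer net):

```python
from dataclasses import dataclass

@dataclass
class PetriNet:
    marking: dict       # place name -> token count
    transitions: dict   # transition name -> (input places, output places)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1   # consume tokens from input places
        for p in outputs:
            self.marking[p] += 1   # produce tokens in output places

# Hypothetical traffic-light cycle: green -> yellow -> red -> green.
net = PetriNet(
    marking={"green": 1, "yellow": 0, "red": 0},
    transitions={
        "g_to_y": (["green"], ["yellow"]),
        "y_to_r": (["yellow"], ["red"]),
        "r_to_g": (["red"], ["green"]),
    },
)
net.fire("g_to_y")
print(net.marking)   # {'green': 0, 'yellow': 1, 'red': 0}
```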
5. Parallel acceleration of deadlock detection and avoidance algorithms on GPUs (Abell, Stephen W., 08 1900)
Indiana University-Purdue University Indianapolis (IUPUI) / Mainstream computing systems have become increasingly complex; most have central processing units (CPUs) that invoke multiple threads for their computing tasks. A growing issue in these systems is resource contention, and with resource contention comes the risk of deadlock. Various software and hardware approaches implement deadlock detection and avoidance techniques; however, they lack either the speed or the problem-size capability needed for real-time systems. The research conducted for this thesis aims to resolve the issues of past approaches by converging the two platforms (software and hardware) by means of the graphics processing unit (GPU). Presented in this thesis are two GPU-based deadlock detection algorithms and one GPU-based deadlock avoidance algorithm: (i) GPU-OSDDA, a GPU-based single-unit resource deadlock detection algorithm; (ii) GPU-LMDDA, a GPU-based multi-unit resource deadlock detection algorithm; and (iii) GPU-PBA, a GPU-based deadlock avoidance algorithm. Both GPU-OSDDA and GPU-LMDDA use the resource allocation graph (RAG) to represent resource allocation status in the system, with the RAG stored as integer-length bit-vectors. The advantages of this approach are many: (i) less memory is required for the algorithm matrices, (ii) 32 computations are performed per instruction (in most cases), and (iii) the algorithms can handle large numbers of processes and resources. The detection algorithms also require minimal interaction with the CPU by keeping matrix storage and computation on the GPU, providing an interactive-service style of behavior. As a result, both algorithms achieve speedups of more than two orders of magnitude over their serial CPU implementations (3.17-317.42x for GPU-OSDDA and 37.17-812.50x for GPU-LMDDA). Lastly, GPU-PBA is the first parallel deadlock avoidance algorithm implemented on a GPU. While it does not achieve a two-orders-of-magnitude speedup over its CPU implementation, it provides a platform for future GPU deadlock avoidance research.
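The bit-vector idea is independent of the GPU: each adjacency row of the RAG is packed into machine words so that one bitwise instruction tests or updates many edges at once. Here is a CPU-side Python sketch of that representation, with deadlock checked as a cycle in a single-unit RAG; the graph is invented, and the thesis's CUDA kernels are not reproduced:

```python
# Each row of the resource-allocation graph is packed into one integer used
# as a bit-vector: bit j of adj[i] set means an edge from node i to node j.
def has_cycle(adj, n):
    """DFS cycle check over bit-vector adjacency rows. In a single-unit
    RAG, a deadlock is exactly a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n

    def visit(u):
        color[u] = GRAY
        row = adj[u]
        while row:
            v = (row & -row).bit_length() - 1   # index of lowest set bit
            row &= row - 1                      # clear that bit
            if color[v] == GRAY or (color[v] == WHITE and visit(v)):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and visit(u) for u in range(n))

# Hypothetical 3-node wait-for relation forming a cycle 0 -> 1 -> 2 -> 0.
adj = [0b010, 0b100, 0b001]
print(has_cycle(adj, 3))   # True: the system is deadlocked
```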
6. Identification and mechanistic investigation of clinically important myopathic drug-drug interactions (Han, Xu, January 2014)
Indiana University-Purdue University Indianapolis (IUPUI) / Drug-drug interactions (DDIs) refer to situations where one drug affects the pharmacokinetics or pharmacodynamics of another. DDIs represent a major cause of morbidity and mortality. A common adverse drug reaction (ADR) that can result from, or be exacerbated by, DDIs is drug-induced myopathy. Identifying DDIs and understanding their underlying mechanisms are key to preventing their undesirable effects and to optimizing therapeutic outcomes. This dissertation is dedicated to identifying clinically important myopathic DDIs and elucidating their underlying mechanisms. Using data mined from the published cytochrome P450 (CYP) drug interaction literature, 13,197 drug pairs were predicted to potentially interact by pairing a substrate and an inhibitor of a major human CYP isoform. Prescribing data for these drug pairs and their associations with myopathy were then examined in a large electronic medical record database. The analyses identified fifteen drug pairs as DDIs significantly associated with an increased risk of myopathy, involving clinically important drugs including alprazolam, chloroquine, duloxetine, hydroxychloroquine, loratadine, omeprazole, promethazine, quetiapine, risperidone, ropinirole, trazodone, and simvastatin. In vitro data indicated that the interaction between quetiapine and chloroquine (risk ratio, RR, 2.17; p-value 5.29E-05) may result from the inhibitory effects of quetiapine on chloroquine metabolism by CYPs. The in vitro data also suggested that the interaction between simvastatin and loratadine (RR 1.6; p-value 4.75E-07) may result from synergistic toxicity to muscle cells of simvastatin and desloratadine, the major metabolite of loratadine, and from the inhibitory effect of simvastatin acid, the active metabolite of simvastatin, on the hepatic uptake of desloratadine via OATP1B1/1B3. Our data not only identified previously unknown myopathic DDIs of clinical consequence but also shed light on their underlying pharmacokinetic and pharmacodynamic mechanisms. More importantly, our approach exemplifies a new strategy for identifying and investigating DDIs, one that combines literature mining using bioinformatic algorithms, ADR detection using a pharmacoepidemiologic design, and mechanistic studies employing in vitro experimental models.
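The risk ratios quoted above come from comparing myopathy incidence between exposure groups. The sketch below shows the basic computation with a standard normal-approximation confidence interval on the log scale; all counts are illustrative, not the study's data:

```python
import math

# Hypothetical counts: myopathy events among patients exposed to the drug
# pair vs. patients exposed to a single drug (illustrative numbers only).
events_pair, n_pair = 30, 1_000
events_single, n_single = 150, 10_000

risk_pair = events_pair / n_pair
risk_single = events_single / n_single
rr = risk_pair / risk_single

# 95% CI for log(RR) via the usual standard-error formula.
se = math.sqrt(1 / events_pair - 1 / n_pair + 1 / events_single - 1 / n_single)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```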
7. Variable selection and structural discovery in joint models of longitudinal and survival data (He, Zangdong, January 2014)
Indiana University-Purdue University Indianapolis (IUPUI) / Joint models of longitudinal and survival outcomes have been used with increasing frequency in clinical investigations. Correct specification of fixed and random effects, as well as their functional forms, is essential for practical data analysis. However, no existing methods have been developed to meet this need in a joint model setting. In this dissertation, I describe a penalized likelihood-based method with adaptive least absolute shrinkage and selection operator (ALASSO) penalty functions for model selection. By reparameterizing variance components through a Cholesky decomposition, I introduce a group-shrinkage penalty function; the penalized likelihood is approximated by Gaussian quadrature and optimized by an EM algorithm. The functional forms of the independent effects are determined through a procedure for structural discovery. Specifically, I first construct the model using penalized cubic B-splines and then decompose the B-spline into linear and nonlinear elements by spectral decomposition. The decomposition casts the model in a mixed-effects format, and I then use the mixed-effects variable selection method to perform structural discovery. Simulation studies show excellent performance. A clinical application illustrates the use of the proposed methods, and the analytical results demonstrate their usefulness.
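For reference, the ALASSO penalty named above has a standard form. The following is a sketch of the penalized log-likelihood in its usual formulation, not necessarily the dissertation's exact objective:

```latex
\ell_p(\beta,\theta) \;=\; \ell(\beta,\theta) \;-\; \lambda \sum_{j=1}^{p} w_j\,\lvert\beta_j\rvert,
\qquad w_j = \lvert\tilde{\beta}_j\rvert^{-\gamma},\quad \gamma > 0,
```

where \(\ell\) is the joint-model log-likelihood (approximated here by Gaussian quadrature), \(\tilde{\beta}_j\) is a consistent initial estimate, and coefficients shrunk exactly to zero are dropped from the model.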
8. A scalable approach to processing adaptive optics optical coherence tomography data from multiple sensors using multiple graphics processing units (Kriske, Jeffery Edward, Jr., 12 1900)
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive optics optical coherence tomography (AO-OCT) is a non-invasive method of imaging the human retina in vivo. It can visualize microscopic structures, making it incredibly useful for the early detection and diagnosis of retinal disease. The research group at Indiana University has a novel multi-camera AO-OCT system capable of 1 MHz acquisition rates. Until now, no method existed to process data from such a system quickly and accurately enough on a CPU or a GPU, or to scale efficiently and automatically to multiple GPUs; this is a barrier to using an MHz-rate AO-OCT system in a clinical environment. A novel approach to processing AO-OCT data from the unique multi-camera optics system is tested on multiple graphics processing units (GPUs) in parallel with one-, two-, and four-camera combinations. The design and results demonstrate a scalable, reusable, extensible method of computing AO-OCT output. The approach can either achieve real-time results with an AO-OCT system capable of 1 MHz acquisition rates or be scaled to a higher-accuracy mode with a fast Fourier transform of 16,384 complex values.
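In Fourier-domain OCT, each depth profile (A-scan) is recovered by a Fourier transform of an acquired spectral interferogram, which is where the 16,384-point FFT above enters. Below is a minimal single-node sketch with NumPy on synthetic data; a multi-GPU version like the one described would instead use a CUDA FFT library such as cuFFT, and the batch size and windowing are assumptions:

```python
import numpy as np

fft_len = 16_384   # matches the higher-accuracy mode mentioned above
n_ascans = 512     # assumed batch of spectra

# Synthetic spectral interferograms standing in for camera data.
spectra = np.random.randn(n_ascans, fft_len).astype(np.complex64)
window = np.hanning(fft_len)                          # suppress spectral leakage

ascans = np.fft.fft(spectra * window, n=fft_len, axis=1)
intensity_db = 20 * np.log10(np.abs(ascans) + 1e-12)  # log-scale magnitude
print(intensity_db.shape)                             # (512, 16384)
```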
9. Multivariate semiparametric regression models for longitudinal data (Li, Zhuokai, January 2014)
Multiple-outcome longitudinal data are abundant in clinical investigations. For example, infections with different pathogenic organisms are often tested concurrently, and assessments are usually taken repeatedly over time. It is therefore natural to consider a multivariate modeling approach to accommodate the underlying interrelationship among the multiple longitudinally measured outcomes. This dissertation proposes a multivariate semiparametric modeling framework for such data. Relevant estimation and inference procedures as well as model selection tools are discussed within this modeling framework. The first part of this research focuses on the analytical issues concerning binary data. The second part extends the binary model to a more general situation for data from the exponential family of distributions. The proposed model accounts for the correlations across the outcomes as well as the temporal dependency among the repeated measures of each outcome within an individual. An important feature of the proposed model is the addition of a bivariate smooth function for the depiction of concurrent nonlinear and possibly interacting influences of two independent variables on each outcome. For model implementation, a general approach for parameter estimation is developed by using the maximum penalized likelihood method. For statistical inference, a likelihood-based resampling procedure is proposed to compare the bivariate nonlinear effect surfaces across the outcomes. The final part of the dissertation presents a variable selection tool to facilitate model development in practical data analysis. Using the adaptive least absolute shrinkage and selection operator (LASSO) penalty, the variable selection tool simultaneously identifies important fixed effects and random effects, determines the correlation structure of the outcomes, and selects the interaction effects in the bivariate smooth functions. Model selection and estimation are performed through a two-stage procedure based on an expectation-maximization (EM) algorithm. Simulation studies are conducted to evaluate the performance of the proposed methods. The utility of the methods is demonstrated through several clinical applications.
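One common way to fit the adaptive LASSO used in the final part with off-the-shelf tools is to rescale the design columns by the adaptive weights, run a plain LASSO, and unscale the coefficients. The sketch below uses simulated data and a plain linear model as a stand-in, not the joint semiparametric model itself; the tuning parameter is arbitrary:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
y = X @ beta + rng.standard_normal(n)

# Adaptive weights from an initial consistent fit (here: OLS).
init = LinearRegression().fit(X, y).coef_
w = 1.0 / (np.abs(init) + 1e-8)

# ALASSO via rescaling: solve LASSO on X / w, then map back.
lasso = Lasso(alpha=0.1).fit(X / w, y)
coef = lasso.coef_ / w
print(np.round(coef, 2))   # zero entries correspond to dropped variables
```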
10. Silent speech recognition in EEG-based brain computer interface (Ghane, Parisa, January 2015)
Indiana University-Purdue University Indianapolis (IUPUI) / A Brain Computer Interface (BCI) is a hardware and software system that establishes direct communication between the human brain and the environment. In a BCI system, brain messages pass through wires and external computers instead of the normal pathway of nerves and muscles. The general workflow in all BCIs is to measure brain activity, process it, and convert it into an output readable by a computer.

The measurement of electrical activity in different parts of the brain is called electroencephalography (EEG). Many sensor technologies with different numbers of electrodes exist for recording brain activity along the scalp. Each electrode captures a weighted sum of the activities of all neurons in the area around it.

To establish a BCI system, a set of electrodes must be placed on the scalp, along with a tool to send the signals to a computer for training a system that can find the important information, extract it from the raw signal, and use it to recognize the user's intention. Finally, a control signal is generated based on the application.

This thesis describes the step-by-step training and testing of a BCI system that can be used by a person who has lost the ability to speak through an accident or surgery but still has healthy brain tissue. The goal is to establish an algorithm that recognizes different vowels from EEG signals. It uses a bandpass filter to remove noise and artifacts from the signals, the periodogram for feature extraction, and a Support Vector Machine (SVM) for classification.
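A minimal sketch of the described pipeline (bandpass filter, periodogram features, SVM) using SciPy and scikit-learn on synthetic single-channel trials; the sampling rate, band edges, and labels are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram
from sklearn.svm import SVC

fs = 256  # Hz, a typical EEG sampling rate (assumed)

def features(trial):
    """Bandpass one EEG channel, then use periodogram power as features."""
    b, a = butter(4, [1.0, 40.0], btype="band", fs=fs)
    clean = filtfilt(b, a, trial)
    _, pxx = periodogram(clean, fs=fs)
    return pxx

# Hypothetical trials: rows are 1-second single-channel recordings,
# labels are the imagined vowels ("a" vs. "u").
rng = np.random.default_rng(1)
trials = rng.standard_normal((40, fs))
labels = np.repeat(["a", "u"], 20)

X = np.array([features(t) for t in trials])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```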