201

Optimal mobility patterns in epidemic networks

Nirkhiwale, Supriya January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Caterina M. Scoglio / Disruption-tolerant networks, or opportunistic networks, are a class of networks in which there is no contemporaneous path from source to destination; in other words, they are networks with intermittent connections. These networks are generally sparse or highly mobile wireless networks. Each node has a limited radio range, and connections between nodes may be disrupted by node movement, hostile environments, power sleep schedules, and so on. A common example of such networks is a sensor network monitoring nature, a military field, or a herd of animals under study. Epidemic routing is a widely proposed routing mechanism for data propagation in these types of networks. Under this mechanism, the source copies its packets to all the nodes it meets within its radio range. These nodes in turn copy the received packets to the other nodes they meet, and so on. The data to be transmitted thus travels in a way analogous to the spread of an infection in a biological network. The destination finally receives the packet, and measures are taken to eradicate the packet from the network. Routing in epidemic networks faces the difficulty of minimizing delivery delay while reducing resource consumption: every node has severe power constraints, and the network is also susceptible to temporary but random node failures. In previous work, the mobility parameter has been considered constant for a given setting. In our setting, we consider a varying mobility parameter. In this framework, we determine the optimal mobility pattern and a forwarding policy that a network should follow in order to meet the trade-off between delivery delay and power consumption. In addition, the mobility pattern should be one that can be practically implemented.
In our work, we formulate an optimization problem that we solve using the principles of dynamic programming. We have tested the optimal algorithm through extensive simulations, which show that this optimization problem has a global solution.
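The epidemic copying mechanism described above can be illustrated with a toy simulation (a generic sketch under a simplifying assumption of a uniform pairwise contact probability, not the thesis's model or its dynamic-programming solution; all names are mine):

```python
import random

def epidemic_spread(num_nodes, contact_prob, max_steps, seed=0):
    """Simulate epidemic routing: node 0 is the source, node num_nodes-1
    the destination. At each step, every carrier independently meets each
    non-carrier with probability contact_prob and copies the packet to it.
    Returns the step at which the destination first holds the packet,
    or None if max_steps is reached first."""
    rng = random.Random(seed)
    carriers = {0}  # only the source holds the packet initially
    for step in range(1, max_steps + 1):
        new_carriers = set(carriers)
        for i in carriers:
            for j in range(num_nodes):
                if j not in carriers and rng.random() < contact_prob:
                    new_carriers.add(j)  # copy the packet on contact
        carriers = new_carriers
        if num_nodes - 1 in carriers:
            return step
    return None
```

Raising `contact_prob` (a proxy for mobility) lowers the delivery delay but increases the number of copies, and hence power consumption, which is exactly the trade-off the thesis optimizes.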
202

Generalized and multiple-trait extensions to Quantitative-Trait Locus mapping

Joehanes, Roby January 1900 (has links)
Doctor of Philosophy / Genetics Interdepartmental Program / James C. Nelson / QTL (quantitative-trait locus) analysis aims to locate and estimate the effects of genes that are responsible for quantitative traits, by means of statistical methods that evaluate the association of genetic variation with trait (phenotypic) variation. Quantitative traits are typically controlled by multiple genes with varying degrees of influence on the phenotype. I describe a new QTL analysis method based on shrinkage and a unifying framework based on the generalized linear model for non-normal data. I develop their extensions to multiple-trait QTL analysis. Expression QTL, or eQTL, analysis is QTL analysis applied to gene expression data to reveal the eQTLs controlling transcript-abundance variation, with the goal of elucidating gene regulatory networks. For exploiting eQTL data, I develop a novel extension of the graphical Gaussian model that produces an undirected graph of a gene regulatory network. To reduce the dimensionality, the extension constructs networks one cluster at a time. However, because Fuzzy-K, the clustering method of choice, relies on subjective visual cutoffs for cluster membership, I develop a bootstrap method to overcome this disadvantage. Finally, I describe QGene, an extensible QTL- and eQTL-analysis software platform written in Java and used for implementation of all analyses.
203

Vehicle highway automation

Challa, Dinesh Kumar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / Vehicle highway automation has been studied for several years, but a practical system has not been possible because of technology limitations. New advances in sensing and communication technology have now brought a realistic system within reach. This paper proposes a Co-Operative Vehicle-Highway Automation System for automating traffic-information gathering and decision making in a vehicle on a highway, one that is cost effective and close to real-life implementation. A Co-Operative Vehicle-Highway Automation System combines technology on board a vehicle with intelligent infrastructure technology along a highway. Vehicle automation, collision prevention and avoidance, route guidance, highway information systems, vehicle tracking, and traffic surveillance are some of the applications that can be implemented within such a system. Implementing a vehicle highway automation system would provide an improved level of road transportation. The possible benefits to society and individuals are many in terms of time, safety, comfort, and overall travel quality.
204

Data aggregation in sensor networks

Kallumadi, Surya Teja January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / Severe energy constraints and the limited computing abilities of the nodes in a network present a major challenge in the design and deployment of a wireless sensor network. This thesis presents energy-efficient algorithms for data fusion and information aggregation in a sensor network. The various data fusion methodologies presented in this thesis aim to reduce the data traffic within a network by mapping the sensor network application task graph onto a sensor network topology. Partitioning an application into sub-tasks that can be mapped onto the nodes of a sensor network offers opportunities to reduce the overall energy consumption of the network. The first approach proposes grid-based coordinated incremental data fusion and routing with heterogeneous nodes of varied computational abilities. In this approach, high-performance nodes arranged in a mesh-like structure spanning the network topology are placed amongst the resource-constrained nodes. The sensor network protocol performance, measured in terms of hop count, is analysed for various grid sizes of the high-performance nodes. To reduce network traffic and increase energy efficiency in a randomly deployed sensor network, distributed clustering strategies that consider network density and structure similarity are applied to the network topology. The clustering methods aim to improve the energy efficiency of the sensor network by dividing the network into logical clusters and mapping the fusion points onto the clusters. Routing of network information is performed by inter-cluster and intra-cluster routing.
205

Exploring transcription patterns and regulatory motifs in Arabidopsis thaliana

Bahirwani, Vishal January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Recent work has shown that bidirectional genes (genes located on opposite strands of DNA, whose transcription start sites are not more than 1000 basepairs apart) are often co-expressed and have similar biological functions. Identification of such genes can be useful in the process of constructing gene regulatory networks. Furthermore, analysis of the intergenic regions corresponding to bidirectional genes can help to identify regulatory elements, such as transcription factor binding sites. Approximately 2500 bidirectional gene pairs have been identified in Arabidopsis thaliana and the corresponding intergenic regions have been shown to be rich in regulatory elements that are essential for the initiation of transcription. Identifying such elements is especially important, as simply searching for known transcription factor binding sites in the promoter of a gene can result in many hits that are not always important for transcription initiation. Encouraged by the findings about the presence of essential regulatory elements in the intergenic regions corresponding to bidirectional genes, in this thesis, we explore a motif-based machine learning approach to identify intergenic regulatory elements. More precisely, we consider the problem of predicting the transcription pattern for pairs of consecutive genes in Arabidopsis thaliana using motifs from AthaMap and PLACE. We use machine learning algorithms to learn models that can predict the direction of transcription for pairs of consecutive genes. To identify the most predictive motifs and, therefore, the most significant regulatory elements, we perform feature selection based on mutual information and feature abstraction based on family or sequence similarity. Preliminary results demonstrate the feasibility of our approach.
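The mutual-information-based feature selection mentioned above can be sketched as follows (a toy implementation on discrete motif features, not the thesis's code; the function names and the scoring details are mine):

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Empirical mutual information (in bits) between a discrete
    feature column (e.g. motif presence/absence) and class labels
    (e.g. transcription pattern)."""
    n = len(feature)
    joint = Counter(zip(feature, labels))
    fx = Counter(feature)
    fy = Counter(labels)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy / (p_x * p_y) simplifies to c * n / (fx[x] * fy[y])
        mi += p_xy * math.log2(p_xy * n * n / (fx[x] * fy[y]))
    return mi

def top_k_features(X, y, k):
    """Rank the feature columns of X (list of rows) by their mutual
    information with y and keep the indices of the top k."""
    cols = list(zip(*X))
    scored = sorted(range(len(cols)),
                    key=lambda j: mutual_information(cols[j], y),
                    reverse=True)
    return scored[:k]
```

A perfectly predictive motif column scores 1 bit against binary labels, while an independent column scores 0, so ranking by this score surfaces the most predictive regulatory elements first.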
206

Neighborhood-oriented feature selection and classification of Duke's stages of colorectal cancer using high-density genomic data

Peng, Liang January 1900 (has links)
Master of Science / Department of Statistics / Haiyan Wang / The selection of relevant genes for classification of disease phenotypes from gene expression data has been extensively studied. Previously, most relevant gene selection was conducted on individual genes with limited sample sizes. Modern technology makes it possible to obtain microarray data with higher resolution of the chromosomes. Considering gene sets on an entire block of a chromosome, rather than individual genes, could help to reveal important connections between relevant genes and the disease phenotypes. In this report, we consider feature selection and classification that take into account the spatial location of probe sets in classifying Duke's stages B and C using DNA copy number data or gene expression data from colorectal cancers. A novel method for feature selection is presented in this report. A chromosome is first partitioned into blocks after the probe sets are aligned along their chromosome locations. A test of interaction between Duke's stage and probe sets is then conducted on each block of probe sets to select significant blocks. For each significant block, a new multiple comparison procedure is carried out to identify truly relevant probe sets while preserving the neighborhood location information of the probe sets. Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classification using the selected final probe sets is conducted for all samples. The Leave-One-Out Cross Validation (LOOCV) estimate of accuracy is reported as an evaluation of the selected features. We applied the method to two large data sets, each containing more than 50,000 features. Excellent classification accuracy was achieved by the proposed procedure along with SVM or KNN for both data sets, even though classification of prognosis stages (Duke's stages B and C) is much more difficult than that of normal versus tumor types.
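The KNN-with-LOOCV evaluation described above can be sketched in a few lines (a minimal stand-alone illustration of the evaluation protocol, not the thesis's pipeline; the feature selection step is omitted and all names are mine):

```python
def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y))
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def loocv_accuracy(X, y, k=3):
    """Leave-one-out cross-validation: hold out each sample in turn,
    train on the rest, and report the fraction classified correctly."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        if knn_predict(train_X, train_y, X[i], k) == y[i]:
            correct += 1
    return correct / len(X)
```

LOOCV fits one model per sample, so with more than 50,000 features per sample the per-fold cost of feature selection and classification dominates; the block-wise pre-selection in the report keeps that tractable.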
207

MASSPEC: multiagent system specification through policy exploration and checking

Harmon, Scott J. January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach / Multiagent systems have been proposed as a way to create reliable, adaptable, and efficient systems. As these systems grow in complexity, configuration, tuning, and design of these systems can become as complex as the problems they claim to solve. As researchers in multiagent systems engineering, we must create the next generation of theories and tools to help tame this growing complexity and take some of the burden off the systems engineer. In this thesis, I propose guidance policies as a way to do just that. I also give a framework for multiagent system design, using the concept of guidance policies to automatically generate a set of constraints based on a set of multiagent system models as well as provide an implementation for generating code that will conform to these constraints. Presenting a formal definition for guidance policies, I show how they can be used in a machine learning context to improve performance of a system and avoid failures. I also give a practical demonstration of converting abstract requirements to concrete system requirements (with respect to a given set of design models).
208

Rapid development of mobile apps using App Inventor and AGCO API

Kepley, Spencer January 1900 (has links)
Master of Science / Department of Biological & Agricultural Engineering / Naiqian Zhang / Mobile apps are useful tools for many different purposes. In agriculture, apps can be used to check the weather and markets, control irrigation, and monitor machine activity, among other uses. This research project is a collaboration between Kansas State University and AGCO and includes the development of two apps using MIT App Inventor and Google App Engine. Kansas State University was responsible for developing the apps' user interfaces and functionality, while AGCO provided the data needs for the apps through Google App Engine. The first app, the Crop Maturity App, measures Growing Degree Days from a crop's planting date. The second app, the Combine Efficiency App, determines the performance of a combine during harvesting based on its speed. AGCO provided the server support for these apps from a weather service and from its own connected combines. This project demonstrates the possibility of an open-source development environment with AGCO machine data.
209

Handling uncertainty in intrusion analysis

Zomlot, Loai M. M. January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Xinming Ou / Intrusion analysis, i.e., the process of combing through Intrusion Detection System (IDS) alerts and audit logs to identify true successful and attempted attacks, remains a difficult problem in practical network security defense. The primary cause of this problem is the high false positive rate of the IDS sensors used to detect malicious activity, which is attributed to an inability to differentiate nearly certain attacks from those that are merely possible. This inefficacy has created high uncertainty in intrusion analysis and consequently causes an overwhelming amount of work for security analysts. As a solution, practitioners typically resort to a specific IDS rule set that precisely captures specific attacks. However, this results in failure to discern other forms of the targeted attack, because an attack's polymorphism reflects human intelligence. Alternatively, adding generic rules, so that any activity with even a remote indication of an attack triggers an alert, requires the security analyst to discern true alerts from a multitude of false ones, thus perpetuating the original problem. This trade-off is a dilemma that has puzzled the cyber-security community for years. One way out of the dilemma is to reduce uncertainty in intrusion analysis by making nearly certain alerts prominently discernible. I therefore propose alert prioritization, attained by integrating multiple methods. I use IDS alert correlation, building attack scenarios in a ground-up manner. In addition, I use Dempster-Shafer Theory (DST), a non-traditional theory for quantifying uncertainty, and I propose a new method for fusing non-independent alerts in an attack scenario. Finally, I propose the use of semi-supervised learning to capture an organization's contextual knowledge, consequently improving prioritization.
Evaluation of these approaches was conducted using multiple datasets. Evaluation results strongly indicate that the ranking provided by the approaches gives good prioritization of IDS alerts based on their likelihood of indicating true attacks.
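The Dempster-Shafer fusion mentioned above can be illustrated with the classic rule of combination (a generic textbook sketch for independent sources, not the thesis's extension for non-independent alerts; the hypothesis names are invented):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments,
    given as dicts mapping frozenset hypotheses to masses. Mass assigned
    to conflicting (empty-intersection) pairs is discarded and the
    remainder renormalized."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    norm = 1.0 - conflict
    return {h: w / norm for h, w in combined.items()}
```

Two sensors that each lean toward "attack" but hedge with an "attack or benign" mass reinforce each other: the combined mass on "attack" exceeds either individual mass, which is the effect that makes nearly certain alerts stand out for prioritization.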
210

Semi-supervised and transductive learning algorithms for predicting alternative splicing events in genes

Tangirala, Karthik January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / As genomes are sequenced, a major challenge is their annotation -- the identification of genes and regulatory elements, their locations and their functions. For years, it was believed that one gene corresponds to one protein, but the discovery of alternative splicing provided a mechanism for generating different gene transcripts (isoforms) from the same genomic sequence. In recent years, it has become obvious that a large fraction of genes undergoes alternative splicing. Thus, understanding alternative splicing is a problem of great interest to biologists. Supervised machine learning approaches can be used to predict alternative splicing events at the genome level. However, supervised approaches require large amounts of labeled data to produce accurate classifiers. While large amounts of genomic data are produced by the new sequencing technologies, labeling these data can be costly and time consuming. Therefore, semi-supervised learning approaches that can make use of large amounts of unlabeled data, in addition to small amounts of labeled data, are highly desirable. In this work, we study the usefulness of a semi-supervised learning approach, co-training, for classifying exons as alternatively spliced or constitutive. The co-training algorithm makes use of two views of the data to iteratively learn two classifiers that can inform each other, at each step, with their best predictions on the unlabeled data. We consider three sets of features for constructing views for the problem of predicting alternatively spliced exons: lengths of the exon of interest and its flanking introns, exonic splicing enhancers (a.k.a. ESE motifs), and intronic regulatory sequences (a.k.a. IRS motifs). Naive Bayes and Support Vector Machine (SVM) algorithms are used as base classifiers in our study.
Experimental results show that the use of unlabeled data can result in better classifiers than those obtained from the small amount of labeled data alone. In addition to semi-supervised approaches, we also study the usefulness of graph-based transductive learning approaches for predicting alternatively spliced exons. Like semi-supervised learning algorithms, transductive learning algorithms can make use of unlabeled data, together with labeled data, to produce labels for the unlabeled data. However, in this case no classification model that could be used to classify new unlabeled data is learned. Experimental results show that graph-based transductive approaches can make effective use of the unlabeled data.
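The co-training loop described above can be sketched as follows (a toy version with a nearest-centroid learner standing in for the Naive Bayes and SVM base classifiers; all names, and the confidence heuristic, are mine):

```python
def centroid_fit(X, y):
    """Per-class mean of the labeled rows (a stand-in for the Naive
    Bayes / SVM base learners used in the thesis)."""
    groups = {}
    for row, lab in zip(X, y):
        groups.setdefault(lab, []).append(row)
    return {lab: [sum(col) / len(rows) for col in zip(*rows)]
            for lab, rows in groups.items()}

def centroid_predict(cents, x):
    """Return (label, confidence): nearest centroid wins; confidence is
    the distance margin to the runner-up centroid."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(c, x)), lab)
               for lab, c in cents.items())
    conf = d[1][0] - d[0][0] if len(d) > 1 else float('inf')
    return d[0][1], conf

def co_train(X1, X2, y, rounds=3, per_round=1):
    """Co-training: y holds None for unlabeled rows. Each round, each
    view's classifier labels its most confident unlabeled samples, and
    those labels join the shared labeled pool for the other view."""
    y = list(y)
    for _ in range(rounds):
        for X in (X1, X2):
            lab_idx = [i for i, t in enumerate(y) if t is not None]
            unl_idx = [i for i, t in enumerate(y) if t is None]
            if not unl_idx:
                return y
            cents = centroid_fit([X[i] for i in lab_idx],
                                 [y[i] for i in lab_idx])
            preds = [(centroid_predict(cents, X[i]), i) for i in unl_idx]
            preds.sort(key=lambda p: -p[0][1])  # most confident first
            for (lab, _), i in preds[:per_round]:
                y[i] = lab  # promote the best prediction to labeled
    return y
```

In the thesis's setting, the two views would be built from the different feature sets (exon/intron lengths vs. ESE or IRS motifs), which is what lets the two classifiers inform each other.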
