21

Dynamic Neural Network-based Adaptive Inverse Optimal Control Design

Alhejji, Ayman Khalid 01 August 2014 (has links)
This dissertation introduces a Dynamical Neural Network (DNN) model-based adaptive inverse optimal control design for a class of nonlinear systems. A DNN structure is developed and stabilized based on a control Lyapunov function (CLF). The CLF must satisfy the Hamilton-Jacobi-Bellman (HJB) partial differential equation associated with the cost function in order to establish optimality. In other words, the control design is derived from the CLF and achieves optimality inversely, with the cost function determined a posteriori. Stability of the closed-loop system is ensured using Lyapunov-based analysis. Beyond structural stability, uncertainty and disturbances pose a problem for a DNN in that they can degrade system performance, so the DNN needs a control that is robust against uncertainty. Sliding mode control (SMC) is therefore added to the CLF-based nominal control design in order to counteract the effects of disturbances from the uncertain DNN and to achieve global asymptotic stability. Next, a DNN observer is considered for estimating the states of a class of controllable and observable nonlinear systems, and a DNN observer-based adaptive inverse optimal control (AIOC) is developed. With weight adaptation, an adaptive technique is introduced into the observer design and its stabilizing control. The AIOC controls the DNN observer and the nonlinear system simultaneously while the weight parameters are updated online. This control scheme guarantees the quality of the DNN's state estimates and minimizes the cost function. In addition, a tracking problem is investigated: an inverse optimal adaptive tracking control based on a DNN observer is proposed for unknown nonlinear systems. Within this framework, a time-varying desired trajectory generated from external inputs is considered. The tracking control forces the system states to follow the desired trajectory, while the DNN observer estimates the states and identifies the unknown system dynamics. Lyapunov-based analysis guarantees global asymptotic stability. Numerical examples and simulation studies are presented for each part to validate the effectiveness of the proposed methods.
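As an illustration of the control Lyapunov function idea underlying this design (not the dissertation's own DNN-based construction), the sketch below applies a Sontag-type inverse optimal feedback to a simple scalar system; the plant dynamics, the quadratic CLF, and the simulation settings are illustrative assumptions.

```python
import numpy as np

def sontag_control(grad_V, f, g, x):
    """Sontag-type stabilizing feedback derived from a control Lyapunov function.

    For x_dot = f(x) + g(x) * u with CLF V, let a = dV/dx * f(x) and
    b = dV/dx * g(x); the feedback u = -(a + sqrt(a^2 + b^4)) / b (u = 0 when
    b == 0) makes V_dot negative definite and is inverse optimal with respect
    to a meaningful cost functional.
    """
    a = grad_V(x) * f(x)
    b = grad_V(x) * g(x)
    if abs(b) < 1e-9:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Illustrative scalar plant x_dot = x**3 + u with quadratic CLF V(x) = 0.5 * x**2.
f = lambda x: x**3
g = lambda x: 1.0
grad_V = lambda x: x

x, dt = 1.5, 1e-3
for _ in range(5000):                      # forward-Euler simulation of 5 s
    u = sontag_control(grad_V, f, g, x)
    x += dt * (f(x) + g(x) * u)
print(f"state after 5 s: {x:.4f}")         # approaches the origin
```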
22

Providing Adaptability in Survivable Systems through Situation Awareness

Öster, Daniel January 2006 (has links)
System integration, interoperability, just-in-time delivery, windows of opportunity, and dust-to-dust optimization are all keywords of our computerized future. Survivability is an important concept that, together with dependability and quality of service, is a key issue in the systems of the future, i.e. infrastructural systems, business applications, and everyday desktop applications. The importance and widespread use of dependable systems, together with the complexity of those systems, make middleware and frameworks for survivability imperative for the system builder of the future. This thesis presents a simulation approach to investigate the effect on data survival when the defending system uses knowledge of the current situation to protect the data. The results show the importance of situation awareness in avoiding wasted resources. A number of characteristics of the provided situational information are identified, along with how this information may be used to optimize the system.
23

Model Reference Learning Control Using ANFIS

Guruprasad, K R 12 1900 (has links) (PDF)
No description available.
24

Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models

Wu, Sichao 29 August 2017 (has links)
When capturing a real-world, networked system in a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V&V) of such models is an inherent and fundamental challenge. Central to V&V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODEs and PDEs), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging, since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all of this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions that help ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models, with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and are also studied theoretically in this dissertation, including a broad range of stability and sensitivity analyses offering insight into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDSs, provides modelers, experimentalists, and research groups with access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring them to have detailed expertise in statistics, data management and computing. Even for research teams that have all of these skills, GENEUS can significantly increase research productivity. / Ph. D.
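Since graph dynamical systems provide the theoretical backbone here, a minimal sketch of a synchronous GDS update may help fix ideas; the toy cycle graph, Boolean states, and spread-style local rule below are illustrative assumptions rather than models from the dissertation or the GENEUS framework.

```python
import networkx as nx

def synchronous_gds_step(graph, state, local_rule):
    """One synchronous update of a graph dynamical system: every vertex
    applies its local rule to its own state and its neighbors' states."""
    return {
        v: local_rule(state[v], [state[u] for u in graph.neighbors(v)])
        for v in graph.nodes
    }

# Illustrative spread rule on Boolean states: a vertex turns on once any
# neighbor is on, and stays on afterwards.
def spread_rule(own_state, neighbor_states):
    return 1 if own_state == 1 or any(neighbor_states) else 0

g = nx.cycle_graph(6)               # toy contact network
state = {v: 0 for v in g.nodes}
state[0] = 1                        # seed a single vertex

for t in range(4):
    state = synchronous_gds_step(g, state, spread_rule)
    print(f"t={t + 1}: {state}")    # the 'on' state spreads around the cycle
```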
25

DNS Traffic Analysis for Network-based Malware Detection

Vu Hong, Linh January 2012 (has links)
Botnets are generally recognized as one of the most challenging threats on the Internet today. Botnets have been involved in many attacks targeting multinational organizations and even nationwide Internet services. As more effective detection and mitigation approaches are proposed by security researchers, botnet developers employ new techniques for evasion. It is not surprising that the Domain Name System (DNS) is abused by botnets for the purposes of evasion, because of the important role of DNS in the operation of the Internet. DNS provides a flexible mapping between domain names and IP addresses, and botnets can exploit this dynamic mapping to mask the location of botnet controllers. Domain-flux and fast-flux (also known as IP-flux) are two emerging techniques which aim at exhausting the tracking and blacklisting effort of botnet defenders by rapidly changing the domain names or the associated IP addresses used by the botnet. In this thesis, we employ passive DNS analysis to develop an anomaly-based technique for detecting the presence of a domain-flux or fast-flux botnet in a network. To do this, we construct a lookup graph and a failure graph from captured DNS traffic and decompose these graphs into clusters which have a strong correlation between their domains, hosts, and IP addresses. DNS-related features are extracted for each cluster and used as input to a classification module to identify the presence of a domain-flux or fast-flux botnet in the network. The experimental evaluation on captured traffic traces verified that the proposed technique successfully detected domain-flux botnets in the traces. The proposed technique complements other techniques for detecting botnets through traffic analysis.
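To make the graph construction step concrete, the sketch below builds a lookup graph from (client host, queried domain) pairs and treats its connected components as candidate clusters to be summarized by features and classified; the sample records and the component-based decomposition are simplified stand-ins for the thesis's correlation-based clustering, not its actual implementation.

```python
import networkx as nx

# Illustrative (client host, queried domain) pairs from a passive DNS capture;
# real input would come from parsed DNS query/response records.
lookups = [
    ("10.0.0.5", "xk3f9a.example.net"),
    ("10.0.0.5", "q8w2zr.example.net"),
    ("10.0.0.7", "xk3f9a.example.net"),
    ("10.0.0.9", "www.example.org"),
]

# Bipartite lookup graph: hosts on one side, queried domains on the other.
lookup_graph = nx.Graph()
for host, domain in lookups:
    lookup_graph.add_node(host, kind="host")
    lookup_graph.add_node(domain, kind="domain")
    lookup_graph.add_edge(host, domain)

# Decompose into clusters of correlated hosts and domains; each cluster would
# then be summarized by DNS-related features and passed to a classifier.
for cluster in nx.connected_components(lookup_graph):
    hosts = [n for n in cluster if lookup_graph.nodes[n]["kind"] == "host"]
    domains = [n for n in cluster if lookup_graph.nodes[n]["kind"] == "domain"]
    print(f"cluster: {len(hosts)} host(s), {len(domains)} domain(s)")
```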
26

High-Dimensional Statistical Inference from Coarse and Nonlinear Data: Algorithms and Guarantees

Fu, Haoyu January 2019 (has links)
No description available.
27

Intelligent Data Mining on Large-scale Heterogeneous Datasets and its Application in Computational Biology

Wu, Chao 10 October 2014 (has links)
No description available.
28

Development of a neural network based software package for the automatic recognition of license plate characters

Chen, Songqing January 1992 (has links)
No description available.
29

From Correlation to Causality: Does Network Information improve Cancer Outcome Prediction?

Roy, Janine 10 July 2014 (has links) (PDF)
Motivation: Disease progression in cancer can vary substantially between patients, yet patients often receive the same treatment. Recently, there has been much work on predicting disease progression and patient outcome from gene expression in order to personalize treatment. A widely used approach relies on high-throughput experiments to identify predictive signature genes from which the clinical outcome of a disease can be predicted. Microarray data analysis helps to reveal the underlying biological mechanisms of tumor progression, metastasis, and drug resistance in cancer studies. Although the first diagnostic kits are on the market, open problems remain, such as random gene signatures and noisy expression data. Experimental and computational noise in the data and the limited number of tissue samples collected from patients can further reduce the predictive power and biological interpretability of such signature genes. Moreover, signature genes predicted by different studies generally show poor similarity, even for the same type of cancer. Integrating network information with gene expression data could provide more effective signatures for outcome prediction in cancer studies. One approach to these problems employs gene-gene relationships and ranks genes using the random surfer model of Google's PageRank algorithm. Unfortunately, the majority of published network-based approaches have tested their methods on only a small number of datasets, questioning the general applicability of network-based methods for outcome prediction. Methods: In this thesis, I provide a comprehensive and systematic evaluation of a network-based outcome prediction approach, NetRank, a PageRank derivative, applied to several types of gene expression cancer data and four different types of networks. The algorithm identifies a signature gene set for a specific cancer type by incorporating gene network information with the given expression data. To assess the performance of NetRank, I created a benchmark collection comprising 25 cancer outcome prediction datasets from the literature and one in-house dataset. Results: NetRank performs significantly better than classical methods such as fold change or the t-test, improving prediction performance by 7% on average. In addition, it approaches the accuracy of the authors' own signatures while using a relatively unbiased but fully automated process for biomarker discovery. Despite an order of magnitude difference in network size, a regulatory network, a protein-protein interaction network, and two predicted networks perform equally well. Signatures published by the original authors and signatures generated with classical methods do not overlap, not even for the same cancer type, whereas the network-based signatures overlap strongly. I analyze and discuss these overlapping genes in terms of the hallmarks of cancer, singling out six transcription factors and seven proteins and discussing their specific roles in cancer progression. Furthermore, several tests are conducted towards the identification of a Universal Cancer Signature. No Universal Cancer Signature could be identified so far, but a cancer-specific combination of general master regulators with specific cancer genes was discovered that achieves the best results for all cancer types. As NetRank offers great value for cancer outcome prediction, first steps towards secure usage of NetRank in a public cloud are also described.
Conclusion: The experimental evaluation of network-based methods on a gene expression benchmark collection suggests that these methods are especially suited for outcome prediction, as they overcome the problems of random gene signatures and noisy expression data. By combining network information with gene expression data, network-based methods identify highly similar signatures across all cancer types, in contrast to classical methods, which fail to identify highly common gene sets even for the same cancer type. In general, integrating additional information into gene expression analysis allows the identification of more reliable, accurate and reproducible biomarkers and provides a deeper understanding of the processes occurring in cancer development and progression.
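To illustrate the PageRank-derivative idea behind NetRank, the sketch below iterates a ranking in which each gene's score blends its own correlation with the clinical outcome and the degree-normalized scores of its network neighbors, weighted by a damping factor d; the toy adjacency matrix, correlation values, and d = 0.5 are illustrative assumptions, not parameters taken from the thesis.

```python
import numpy as np

def netrank_style(adjacency, correlation, d=0.5, iterations=100):
    """PageRank-style gene ranking: each score mixes the gene's own outcome
    correlation (weight 1 - d) with the degree-normalized scores of its
    network neighbors (weight d)."""
    A = np.asarray(adjacency, dtype=float)
    c = np.abs(np.asarray(correlation, dtype=float))
    c = c / c.sum()                              # normalize the prior
    degree = np.maximum(A.sum(axis=0), 1.0)      # avoid division by zero
    r = c.copy()
    for _ in range(iterations):
        r = (1.0 - d) * c + d * (A @ (r / degree))
    return r

# Toy 4-gene network (symmetric adjacency) and expression-outcome correlations.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
corr = np.array([0.10, 0.15, 0.60, 0.05])

scores = netrank_style(A, corr)
print(np.argsort(scores)[::-1])   # genes ranked by network-adjusted relevance
```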
30

TENA in a Telemetry Network System

Saylor, Kase J., Malatesta, William A., Abbott, Ben A. 10 1900 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / The integrated Network Enhanced Telemetry (iNET) and Test and Training Enabling Architecture (TENA) projects are working to understand how TENA will perform in a Telemetry Network System. This paper discusses a demonstration prototype that is being used to investigate the use of TENA across a constrained test environment simulating iNET capabilities. Some of the key elements being evaluated are throughput, latency, memory utilization, memory footprint, and bandwidth. The results of these evaluations will be presented. Additionally, the paper briefly discusses modeling and metadata requirements for TENA and iNET.
