About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Multi-layer syntactical model transformation for model based systems engineering

Kwon, Ky-Sang 03 November 2011 (has links)
This dissertation develops a new model transformation approach that supports engineering model integration, which is essential to contemporary interdisciplinary system design processes. We extend traditional model transformation, which has primarily been used in software engineering, to model-based systems engineering (MBSE) so that it can handle more general engineering models. We identify two issues that arise when applying traditional model transformation to general engineering modeling domains. The first is instance data integration: traditional model transformation theory does not deal with instance data, which is essential for executing engineering models in engineering tools. The second is syntactical inconsistency: various engineering tools represent engineering models in proprietary syntaxes that traditional model transformation cannot handle. To address these two issues, we propose a new multi-layer syntactical model transformation approach. For the instance integration issue, the approach generates transformation rules for instance data from the result of a model transformation developed for user model integration, which is the normal purpose of traditional model transformation. For the syntactical inconsistency issue, we introduce the concept of the complete meta-model, which defines how a model is represented syntactically as well as semantically; our approach generates the necessary complete meta-models using a special type of model transformation.
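The gap between user-model transformation and instance-data transformation can be illustrated with a toy sketch (all names here, such as `Block` and `Component`, are hypothetical and not taken from the dissertation):

```python
# A toy rule set mapping user-model types to simulation-tool types.
# Hypothetical names, for illustration only.
TYPE_RULES = {"Block": "Component", "Port": "Interface"}

def transform(elements, rules):
    """Apply the type-mapping rules to every element of a model."""
    return [{**e, "type": rules[e["type"]]} for e in elements
            if e["type"] in rules]

# The same rules defined for the user model can also carry instance data
# across -- the layer that traditional model transformation leaves out.
user_model = [{"name": "Pump", "type": "Block"}]
instances = [{"name": "pump_1", "type": "Block", "of": "Pump"}]

transformed_model = transform(user_model, TYPE_RULES)
transformed_instances = transform(instances, TYPE_RULES)
```

This is only a dictionary-level caricature of rule reuse across layers, not the dissertation's formalism.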
472

Using Observers for Model Based Data Collection in Distributed Tactical Operations

Thorstensson, Mirko January 2008 (has links)
Modern information technology increases the use of computers in training systems as well as in command-and-control systems in military services and public-safety organizations. This computerization, combined with new threats, presents a challenging complexity. Situational awareness in evolving distributed operations and follow-up in training systems depend on humans in the field reporting observations of events. The use of this observer-reported information can be largely improved by implementing models that support both reporting and computer representation of objects and phenomena in operations.

This thesis characterises and describes observer model-based data collection in distributed tactical operations, where multiple dispersed units work to achieve common goals. Reconstruction and exploration of multimedia representations of operations is becoming an established means of supporting taskforce training. We explore how modelling of operational processes and entities can support observer data collection and increase the information content of mission histories. We use realistic exercises to test the developed models, methods and tools for observer data collection, and transfer the results to live operations.

The main contribution of this thesis is the systematic description of the model-based approach to using observers for data collection. Methodological aspects of using humans to collect data for information systems, as well as modelling aspects of phenomena occurring in emergency response and communication, contribute to the body of research. We describe a general methodology for using human observers to collect adequate data for use in information systems. In addition, we describe methods and tools for collecting data on the chain of medical attendance in emergency-response exercises, and on command-and-control processes in several domains.
473

動態模型演算法在100K SNP資料之模擬研究 / Dynamic Model Based Algorithm on 100K SNP Data: A Simulation Study

黃慧珍, Hui-Chen Huang Unknown Date (has links)
It is known that only 0.1% of the DNA sequence differs between human individuals; the rest is identical. These differences are called single nucleotide polymorphisms (SNPs). Affymetrix, Inc. developed a DNA chip technology, the Affymetrix GeneChip Mapping 100K SNP set, used to determine genotype calls from SNP data. The default algorithm Affymetrix applies to decide genotype calls is the Dynamic Model-based (DM) algorithm. This study investigates and demonstrates four ways to modify the basic component of the DM algorithm, the S value: (1) the standardized L value, (2) the median-polished L value, (3) the median-centered L value, and (4) the median-standardized L value. To compare the S value with the four modified L values, a simulation study was conducted. A hierarchical version of Bolstad's model (2004) was adopted to simulate the SNP data, with the simulation parameters estimated from the 100K SNP data of 95 Taiwanese individuals in the Taiwan Han Chinese Cell and Genome Bank. Based on the accuracy of genotype calls on data simulated under the AA and AB models, the standardized L value proved to be the best method. On the other hand, genotype calling on data simulated under the Null model is problematic, since the DM approach is not designed to determine the Null model. We give some brief discussion and remarks on genotype calls for the Null model; further research is still needed.
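As a generic illustration of what "standardizing" or "median-centering" a decision statistic means (a sketch of the general idea only, not of Affymetrix's actual DM implementation or the thesis's exact formulas):

```python
import statistics

def standardized(values):
    """Center by the mean and scale by the standard deviation."""
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def median_centered(values):
    """Subtract the median, a more outlier-robust centering."""
    med = statistics.median(values)
    return [v - med for v in values]

# Hypothetical per-SNP L values across samples:
l_values = [2.0, 4.0, 6.0, 8.0]
z = standardized(l_values)     # mean 0, unit sample variance
c = median_centered(l_values)  # median 0
```

The appeal of median-based variants is robustness: a single aberrant probe intensity shifts the mean and standard deviation but barely moves the median.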
474

MobIS 2010 - Modellierung betrieblicher Informationssysteme, Modellgestütztes Management

10 December 2010 (has links) (PDF)
This volume contains contributions from the refereed "MobIS 2010" main program and selected papers from its tracks. The conference on information systems modeling was held in Dresden, September 15-17, 2010. The guiding theme of MobIS 2010 focused on modeling topics between model-based management and component and service engineering.
475

Modeling and model based fault diagnosis of dry vacuum pumps in the semiconductor industry

Choi, Jae-Won, active 2013 11 February 2014 (has links)
Vacuum technology is ubiquitous in the high tech industries and scientific endeavors. Since vacuum pumps are critical to operation, semiconductor manufacturers desire reliable operations, ability to schedule downtime, and less costly maintenance services. To better cope with difficult maintenance issues, interests in novel fault diagnosis techniques are growing. This study concerns model based fault diagnosis and isolation (MB-FDI) of dry vacuum pumps in the semiconductor industry. Faults alter normal operation of a vacuum pump resulting in performance deviations, discovered by measurements. Simulations using an appropriate mathematical model with suitably chosen parameters can mimic faulty behavior. This research focuses on the construction of a detailed multi-stage dry vacuum pump model for MB-FDI, and the development of a simple and efficient FDI method to analyze common incipient faults such as particulate deposition and gas leak inside the pump. The pump model features 0-D thermo-fluid dynamics, scalable geometric representations of Roots blower, claw pumps and inter-stage port interfaces, a unified pipe model seamlessly connecting from free molecular to turbulent regimes, sophisticated internal leakage model considering true pump geometry and tribological aspects, and systematic assembly of a multi-stage configuration using single stage pump models. Design of a simple FDI technique for the dry vacuum pump includes staged fault simulations using faulty pump models, parametric study of faulty pump behaviors, and design of a health indicator based on classification. The main research contributions include the developments of an accurate multi-stage dry pump model with many features not found in existing pump models, and the design of a simple MB-FDI technique to detect and isolate the common faults found in dry vacuum pumps. 
The proposed dry pump model can pave the way for future development of advanced MB-FDI methods, as well as performance improvement of existing dry vacuum pumps. The proposed fault classification charts can serve as a quick guideline for vacuum pump manufacturers to isolate root causes from fault symptoms.
476

Fault detection and model-based diagnostics in nonlinear dynamic systems

Nakhaeinejad, Mohsen 09 February 2011 (has links)
Modeling, fault assessment, and diagnostics of rolling element bearings and induction motors were studied. A dynamic model of rolling element bearings with faults was developed using vector bond graphs. The model incorporates gyroscopic and centrifugal effects, contact deflections and forces, contact slip and separations, and localized faults. Dents and pits on the inner race, outer race and balls were modeled through surface profile changes. Experiments with healthy and faulty bearings validated the model. Bearing load zones under various radial loads and clearances were simulated, and the model was used to study the dynamics of faulty bearings. The effects of the type, size and shape of faults on the vibration response, and on the dynamics of contacts in the presence of localized faults, were studied. A signal processing algorithm, called the feature plot, based on variable window averaging and time-feature extraction, was proposed for diagnostics of rolling element bearings. In experiments, faults such as dents, pits, and rough surfaces on the inner race, balls, and outer race were detected and isolated using the feature plot technique. Time features such as shape factor, skewness, kurtosis, peak value, crest factor, impulse factor and mean absolute deviation were used in feature plots. The performance of feature plots in bearing fault detection when finite numbers of samples are available was shown. Results suggest that the feature plot technique can detect and isolate localized faults and rough-surface defects in rolling element bearings, and that the proposed diagnostic algorithm has potential for other applications such as gearboxes. A model-based diagnostic framework consisting of modeling, nonlinear observability analysis, and parameter tuning was developed for three-phase induction motors. A bond graph model was developed and verified with experiments. Nonlinear observability analysis based on Lie derivatives identified the most observable configuration of sensors and parameters.

A continuous-discrete extended Kalman filter (EKF) was used for parameter tuning to detect stator and rotor faults, bearing friction, and mechanical loads from current and speed signals. A dynamic process-noise technique based on the validation index was implemented for the EKF, and a complex-step Jacobian technique improved the computational performance of both the EKF and the observability analysis. Results suggest that motor faults, bearing rotational friction, and the mechanical load of induction motors can be detected using model-based diagnostics as long as the configuration of sensors and parameters is observable.
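The complex-step Jacobian technique mentioned in the abstract is a standard numerical trick for analytic functions; a minimal one-dimensional sketch (the cubic test function is ours, not the thesis's):

```python
def complex_step(f, x, h=1e-20):
    """Derivative of an analytic f at x via Im(f(x + ih)) / h.

    Unlike a finite difference, there is no subtractive cancellation,
    so h can be made tiny and the result stays accurate.
    """
    return f(complex(x, h)).imag / h

# d/dx x**3 at x = 2 is 3 * 2**2 = 12
d = complex_step(lambda x: x**3, 2.0)
```

A Jacobian is obtained column by column by perturbing one input at a time along the imaginary axis, which is why the technique speeds up both EKF linearization and observability analysis compared with carefully tuned finite differences.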
477

Test Modeling of Dynamic Variable Systems using Feature Petri Nets

Püschel, Georg, Seidl, Christoph, Neufert, Mathias, Gorzel, André, Aßmann, Uwe 08 November 2013 (has links) (PDF)
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions, resulting in static variability. Furthermore, due to their dependence on sensors and connectivity, mobile applications have to adapt their behavior accordingly at runtime, resulting in dynamic variability. However, software engineers need to assure the quality of a mobile application even with this large amount of variability; in our approach, this is done by model-based testing, i.e., the generation of test cases from models. Recent concepts of test metamodels cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach with an example translator application.
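A minimal sketch of the underlying idea: a Petri-net transition that is additionally guarded by a feature expression, so that reachable behavior depends on the active configuration (the "wifi"/syncing scenario and all names are invented for illustration, not taken from the paper):

```python
def enabled(marking, transition, features):
    """A transition may fire only if its input places hold enough
    tokens AND its feature guard holds in the current configuration."""
    pre, _post, guard = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items()) and guard(features)

def fire(marking, transition):
    """Consume tokens from input places, produce tokens in output places."""
    pre, post, _guard = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical mobile-app behavior: the "syncing" state is reachable
# only when the "wifi" feature is active in the current configuration.
sync = ({"idle": 1}, {"syncing": 1}, lambda feats: "wifi" in feats)
m0 = {"idle": 1}
m1 = fire(m0, sync) if enabled(m0, sync, {"wifi"}) else m0
```

Dynamic reconfiguration then corresponds to changing the feature set between firings, which is what makes test-case generation over such nets configuration-aware.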
478

Adaptive radar detection in the presence of textured and discrete interference

Bang, Jeong Hwan 20 September 2013 (has links)
Under a number of practical operating scenarios, traditional moving target indicator (MTI) systems inadequately suppress ground clutter in airborne radar systems. Due to the moving platform, the clutter gains a nonzero relative velocity and spreads its power across Doppler frequencies. This obfuscates slow-moving targets of interest near the "direct current" component of the spectrum. In response, space-time adaptive processing (STAP) techniques have been developed that operate simultaneously in the space and time dimensions for effective clutter cancellation. STAP algorithms commonly operate under the assumption of homogeneous clutter, where the returns are described by complex, white Gaussian distributions. Empirical evidence shows that this assumption is invalid for many radar systems of interest, including high-resolution radar and radars operating at low grazing angles. We are interested in these heterogeneous cases, i.e., cases when the Gaussian model no longer suffices. Hence, the development of reliable STAP algorithms for real systems depends on the accuracy of the heterogeneous clutter models. The clutter of interest in this work includes heterogeneous texture clutter and point clutter. We have developed a cell-based clutter model (CCM) that provides a simple yet faithful means to simulate clutter scenarios for algorithm testing. The scene generated by the CCM can be tuned with two parameters, essentially describing the spikiness of the clutter scene. At one extreme, the texture resembles point clutter, generating strong returns from localized range-azimuth bins; at the other, the model can simulate a flat, homogeneous environment. We demonstrate the importance of model-based STAP techniques, namely knowledge-aided parametric covariance estimation (KAPE), in filtering a gamut of heterogeneous texture scenes, and show that the efficacy of KAPE does not diminish in the presence of typical spiky clutter.

Computational complexity and susceptibility to modeling errors prohibit the use of KAPE in real systems. The computational complexity is a major concern, as the standard KAPE algorithm requires the inversion of an MN×MN matrix for each range bin, where M and N are the number of array elements and the number of pulses of the radar system, respectively. We developed a Gram-Schmidt (GS) KAPE method that circumvents the need for a direct inversion and reduces the number of required power estimates. Another unavoidable concern is the performance degradation arising from uncalibrated array errors. This problem is exacerbated in KAPE, as it is a model-based technique; mismatched element amplitudes and phase errors amount to a modeling mismatch. We have developed the power-ridge aligning (PRA) calibration technique, a novel iterative gradient-descent algorithm that outperforms current methods. We demonstrate the vast improvements attained using a combination of GS KAPE and PRA over the standard KAPE algorithm under various clutter scenarios in the presence of array errors.
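The Gram-Schmidt step invoked above is classical orthogonalization; a plain-Python sketch of the procedure itself (of Gram-Schmidt in general, not of the GS KAPE algorithm):

```python
def gram_schmidt(vectors, eps=1e-12):
    """Return an orthonormal basis for the span of the input vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            # Subtract the projection of w onto each basis vector so far.
            dot = sum(a * b for a, b in zip(w, q))
            w = [a - dot * b for a, b in zip(w, q)]
        norm = sum(a * a for a in w) ** 0.5
        if norm > eps:  # skip (near-)linearly-dependent inputs
            basis.append([a / norm for a in w])
    return basis

q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

Working in an orthonormal basis is what lets such methods avoid forming and inverting the full MN×MN covariance matrix directly.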
479

Statinė CIL kodo analizė, remiantis simboliniu vykdymu / Static CIL code analysis using symbolic execution

Neverdauskas, Tomas 26 August 2010 (has links)
Software testing and quality assurance are important tasks in software engineering, aimed at delivering a product fit for use. Many different methodologies exist for testing software under development, but there is no single universal approach, and studies in the software testing field yield varying results. Testing is equally important in practice: no organization involved in software development can do without it. Well-known techniques for checking software are model checking, static analysis and testing. This work builds on the model-based testing paradigm and the symbolic execution technique. Symbolic execution is a static code analysis technique used to improve security, find bugs, and help in debugging; a symbolic execution engine is essentially an interpreter that figures out how to follow all paths in a program. This thesis surveys the theoretical possibilities of symbolic execution, its implementation on the .Net framework and platform, and the additional infrastructure such a system requires. It also briefly presents the master's project, the bug tracking software "Crunchbug", and describes the key aspects of the engineered product. Based on the theoretical material, a symbolic execution engine, Symex, was built for the .Net platform.

Symex is a white-box, model-based automatic unit test generator. Its practical application in generating unit tests from source code is examined experimentally by evaluating it against two other tools: Microsoft Pex and a framework that generates unit test inputs at random. Detailed experiments cover the possibilities of symbolic execution using proprietary benchmarks and real code from the master's project.
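The core idea of a symbolic execution engine such as Symex, following every path and collecting its branch conditions, can be sketched on a toy AST (the program shape and constraint strings here are invented; a real engine such as Symex or Pex hands the constraints to a solver to produce concrete test inputs):

```python
def explore(node, constraints=()):
    """Enumerate all paths through a tiny AST, recording each path's
    branch constraints as strings (a real engine would solve them)."""
    if node[0] == "return":                  # ("return", value)
        return [(list(constraints), node[1])]
    _tag, cond, then_b, else_b = node        # ("if", cond, then, else)
    return (explore(then_b, constraints + (cond,)) +
            explore(else_b, constraints + (f"not ({cond})",)))

prog = ("if", "x > 10",
        ("if", "y == 0", ("return", "A"), ("return", "B")),
        ("return", "C"))
paths = explore(prog)  # one (constraints, outcome) pair per leaf
```

Solving each path's constraint set yields one concrete unit-test input per feasible path, which is exactly the coverage advantage symbolic execution has over random input generation.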
480

Test case generation using symbolic grammars and quasirandom sequences

Felix Reyes, Alejandro Unknown Date
No description available.
