1 |
Metoda převažování (kalibrace) ve výběrových šetřeních / The method of re-weighting (calibration) in survey sampling. Michálková, Anna, January 2019.
In this thesis, we study re-weighting when estimating totals in survey sampling. The purpose of re-weighting is to adjust the structure of the sample so that it complies with the structure of the population (with respect to given auxiliary variables). We summarize some known results for methods of the traditional design-based approach; more attention is given to the model-based approach. We generalize known asymptotic results in the model-based theory to a wider class of weighted estimators. Further, we propose a consistent estimator of the asymptotic variance which takes into account the weights used in the estimator of the total. This is in contrast to the usually recommended variance estimators derived from the design-based approach. Moreover, the estimator is robust against particular model misspecifications. In a simulation study, we investigate how the proposed estimator behaves in comparison with variance estimators that are usually recommended in the literature or used in practice.
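To make the setting concrete, a calibration (re-weighting) estimator of a population total is commonly written as follows; this is the generic textbook form of calibration weighting, shown only as an illustration of the setting, not the specific estimator class studied in the thesis.

```latex
% Weighted estimator of the population total based on sample s:
\hat{Y}_w = \sum_{i \in s} w_i y_i ,
% where calibration chooses weights w_i close to the design weights d_i = 1/\pi_i
% while reproducing known population totals t_x of the auxiliary variables x_i:
\min_{w}\ \sum_{i \in s} \frac{(w_i - d_i)^2}{d_i q_i}
\quad \text{subject to} \quad \sum_{i \in s} w_i x_i = t_x .
```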
2 |
Model-based Approach To The Federation Object Model Independence Problem. Uluat, Mehmet Fatih, 01 August 2007.
One of the promises of High Level Architecture (HLA) is the reusability of simulation components. Although HLA supports reusability to some extent through the mechanisms provided by the Object Model Template (OMT), a problem arises when the developer wants to use an existing federate application within another federation with a different Federation Object Model (FOM): she usually has to modify the federate code and rebuild it. There have been some attempts to solve this problem, and they do accomplish this to some extent, but they usually fall short of providing a flexible yet complete mapping mechanism. In this work, a model-based approach that mainly focuses on the Declaration, Object and Federation Management services is explored. The proposed approach makes use of Model Integrated Computing (MIC) and .NET 2.0 technologies by grouping federate transitioning activities into three well-defined phases, namely modeling, automatic code generation, and component generation. As a side product, a .NET 2.0 wrapper to the Runtime Infrastructure (RTI) has been developed to help developers create IEEE 1516 compatible .NET 2.0 federates in a programming-language-independent way.
3 |
Reverse Engineering End-user Developed Web Applications into a Model-based Framework. Bhardwaj, Yogita, 16 June 2005.
The main goal of this research is to facilitate end-user and expert developer collaboration in the creation of a web application. This research created a reverse engineering toolset and integrated it with Click (Component-based Lightweight Internet-application Construction Kit), an end-user web development tool. The toolset generates artifacts to facilitate collaboration between end-users and expert web developers when the end-users need to go beyond the limited capabilities of Click. By supporting a smooth transition of the workflow to expert web developers, we can help them implement advanced functionality in end-user developed web applications. The four artifacts generated are a sitemap, text documentation, a task model, and a canonical representation of the user interface. The sitemap is automatically generated to support the workflow of web developers. The text documentation of a web application is generated to document data representation and business logic. A task model, expressed using ConcurTaskTrees notation, covers the whole interaction specified by the end-user. A presentation and dialog model, represented in the User Interface Markup Language (UIML), describes the user interface in a declarative language. The task model and UIML representation are created to support the development of multi-platform user interfaces from an end-user web application. A formative evaluation of the usability of these models and representations with experienced web developers revealed that they were useful and easy to understand. / Master of Science
4 |
Comparing Model-based and Design-based Structural Equation Modeling Approaches in Analyzing Complex Survey Data. Wu, Jiun-Yu, August 2010.
Conventional statistical methods that assume data sampled under simple random sampling are inadequate for complex survey data with a multilevel structure and non-independent observations. In the structural equation modeling (SEM) framework, a researcher analyzing dependent data can either use ad hoc robust sandwich standard error estimators to correct the standard error estimates (the design-based approach) or perform a multilevel analysis to model the multilevel data structure (the model-based approach).
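As a rough illustration of the design-based correction mentioned above, the cluster-robust sandwich variance estimator has the following generic form (stated here in its general M-estimation form, not as the exact estimator used in this study):

```latex
% Cluster-robust ("sandwich") variance estimator for an estimator \hat{\theta}
% solving \sum_i s_i(\hat{\theta}) = 0, with clusters c and score contributions s_i:
\widehat{\operatorname{Var}}(\hat{\theta})
  = A^{-1} \Bigl[\, \sum_{c} \Bigl( \sum_{i \in c} s_i(\hat{\theta}) \Bigr)
                            \Bigl( \sum_{i \in c} s_i(\hat{\theta}) \Bigr)^{\!\top} \Bigr] A^{-\top},
\qquad
A = \sum_{i} \left. \frac{\partial s_i(\theta)}{\partial \theta^{\top}} \right|_{\theta = \hat{\theta}} .
```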
In a cross-sectional setting, the first study aims to examine the differences between design-based single-level confirmatory factor analysis (CFA) and model-based multilevel CFA in terms of model fit test statistics/fit indices and estimates of the fixed and random effects with the corresponding statistical inference when analyzing multilevel data. Several design factors were considered, including the number of clusters, cluster size, intra-class correlation, and the structural equality of the between- and within-level models. The performance of a maximum modeling strategy, with a saturated higher-level model and the true lower-level model, was also examined. The simulation study showed that the design-based approach provided adequate results only under equal between/within structures. In the unequal between/within structure scenarios, however, the design-based approach produced biased fixed and random effect estimates. Maximum modeling generated consistent and unbiased within-level model parameter estimates across the three scenarios.
Multilevel latent growth curve modeling (MLGCM) is a versatile tool for analyzing repeated measures collected through multi-stage sampling. However, researchers often adopt latent growth curve models (LGCM) without considering the multilevel structure. The second study examined the influence of different model specifications on the model fit test statistics/fit indices, the between-/within-level regression coefficient and random effect estimates, and the mean structures. The simulation suggested that the design-based MLGCM incorporating the higher-level covariates produces consistent parameter estimates and statistical inferences comparable to those from the model-based MLGCM, and maintains adequate statistical power even with a small number of clusters.
5 |
Approaches For Automatic Urban Building Extraction And Updating From High Resolution Satellite Imagery. Koc San, Dilek, 01 March 2009.
Approaches were developed for building extraction and updating from high resolution satellite imagery. The developed approaches include two main stages: (i) detecting the building patches and (ii) delineating the building boundaries. The building patches are detected from high resolution satellite imagery using Support Vector Machines (SVM) classification, which is performed for both the building extraction and updating approaches. In the building extraction part of the study, the previously detected building patches are delineated using Hough transform and boundary tracing based techniques. In the Hough transform based technique, the boundary delineation is carried out using the processing operations of edge detection, Hough transformation, and perceptual grouping. In the boundary tracing based technique, the detected edges are vectorized using a boundary tracing algorithm. The results are then refined through line simplification and vector filters. In the building updating part of the study, the destroyed buildings are determined by analyzing the existing building boundaries and the previously detected building patches. The new buildings are delineated using the developed model-based approach, in which the building models are selected from an existing building database by utilizing shape parameters.
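As a sketch of what the Hough transform based delineation step (edge detection, Hough transformation, perceptual grouping) can look like in code, the OpenCV snippet below is one possible illustration; the function name, parameter values, and the simple length-based grouping are illustrative assumptions, not the implementation developed in this study.

```python
import cv2
import numpy as np

def delineate_building_edges(patch_mask: np.ndarray) -> np.ndarray:
    """Detect straight boundary segments in a binary (0/255, uint8) building-patch mask.

    A minimal edge-detection + Hough-transform sketch; thresholds are
    illustrative values, not tuned parameters from the thesis.
    """
    # Edge detection on the building patch mask
    edges = cv2.Canny(patch_mask, 50, 150)

    # Probabilistic Hough transform: returns line segments (x1, y1, x2, y2)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=15, maxLineGap=5)
    if segments is None:
        return np.empty((0, 4), dtype=int)
    segments = segments.reshape(-1, 4)

    # Crude stand-in for perceptual grouping: keep only sufficiently long segments
    lengths = np.hypot(segments[:, 2] - segments[:, 0], segments[:, 3] - segments[:, 1])
    return segments[lengths >= 15]
```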
The developed approaches were tested in the Batikent district of Ankara, Turkey, using IKONOS panchromatic and pan-sharpened stereo images (2002) and an existing vector database (1999). The results indicate that the proposed approaches are quite satisfactory, with accuracies ranging from 68.60% to 98.26% for building extraction and from 82.44% to 88.95% for building updating.
6 |
Ridge Orientation Modeling and Feature Analysis for Fingerprint Identification. Wang, Yi (alice.yi.wang@gmail.com), January 2009.
This thesis systematically derives an innovative approach, called FOMFE, for fingerprint ridge orientation modeling based on 2D Fourier expansions, and explores possible applications of FOMFE to various aspects of a fingerprint identification system. Compared with existing proposals, FOMFE does not require prior knowledge of the landmark singular points (SP) at any stage of the modeling process. This salient feature makes it immune to false SP detections and robust in modeling ridge topology patterns from different typological classes. The thesis provides the motivation for this work, thoroughly reviews the relevant literature, and carefully lays out the theoretical basis of the proposed modeling approach. This is followed by a detailed exposition of how FOMFE can benefit fingerprint feature analysis, including ridge orientation estimation, singularity analysis, global feature characterization for a wide variety of fingerprint categories, and partial fingerprint identification. The proposed methods are based on the insightful use of theory from areas such as Fourier analysis of nonlinear dynamic systems, analytical operators from differential calculus in vector fields, and fluid dynamics. The thesis conducts an extensive experimental evaluation of the proposed methods on benchmark data sets and draws conclusions about the strengths and limitations of these new techniques in comparison with state-of-the-art approaches. FOMFE and the resulting model-based methods can significantly improve the computational efficiency and reliability of fingerprint identification systems, which is important for indexing and matching fingerprints at a large scale.
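To illustrate the core idea behind fitting a ridge orientation field with a 2D Fourier expansion (the general principle FOMFE builds on, sketched here from the description above rather than taken from the thesis), the doubled-angle components of the orientation field can be fitted by least squares over a trigonometric basis:

```python
import numpy as np

def fit_orientation_field(theta: np.ndarray, order: int = 4):
    """Least-squares fit of a 2D Fourier expansion to a ridge orientation field.

    theta : local ridge orientations in radians, shape (H, W).
    Works on the doubled angle (cos 2θ, sin 2θ) to handle the π-periodicity of
    orientations. A simplified sketch of the idea, not the FOMFE algorithm itself.
    """
    h, w = theta.shape
    # Normalized coordinates in [0, 2π)
    y, x = np.meshgrid(np.linspace(0, 2 * np.pi, h, endpoint=False),
                       np.linspace(0, 2 * np.pi, w, endpoint=False), indexing="ij")

    # 2D trigonometric design matrix: products of cosine/sine terms up to `order`
    basis = []
    for kx in range(order + 1):
        for ky in range(order + 1):
            basis.append(np.cos(kx * x) * np.cos(ky * y))
            basis.append(np.cos(kx * x) * np.sin(ky * y))
            basis.append(np.sin(kx * x) * np.cos(ky * y))
            basis.append(np.sin(kx * x) * np.sin(ky * y))
    A = np.stack([b.ravel() for b in basis], axis=1)
    A = A[:, np.any(A != 0.0, axis=0)]  # drop degenerate all-zero columns (k = 0 sine terms)

    # Fit cos(2θ) and sin(2θ) separately, then recombine into a smooth orientation field
    c_coef, *_ = np.linalg.lstsq(A, np.cos(2 * theta).ravel(), rcond=None)
    s_coef, *_ = np.linalg.lstsq(A, np.sin(2 * theta).ravel(), rcond=None)
    theta_fit = 0.5 * np.arctan2((A @ s_coef).reshape(h, w),
                                 (A @ c_coef).reshape(h, w))
    return theta_fit, (c_coef, s_coef)
```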
7 |
Towards patient selection for cranial proton beam therapy – Assessment of current patient-individual treatment decision strategies. Dutz, Almut, 27 November 2020.
Proton beam therapy shows dosimetric advantages in terms of sparing healthy tissue compared to conventional photon radiotherapy. Patients who are expected to experience the greatest reduction in side effects should preferably be treated with proton beam therapy. One option for this patient selection is the model-based approach. Its feasibility in patients with intracranial tumours is investigated in this thesis. First, normal tissue complication probability (NTCP) models for early and late side effects were developed and validated in external cohorts based on data of patients treated with proton beam therapy. Acute erythema as well as acute and late alopecia were associated with high-dose parameters of the skin. Late mild hearing loss was related to the mean dose of the ipsilateral cochlea. Second, neurocognitive function, as a relevant side effect for brain tumour patients, was investigated in detail using subjective and objective measures. It remained largely stable during recurrence-free follow-up until two years after proton beam therapy. Finally, potential toxicity differences were evaluated based on an individual proton and photon treatment plan comparison as well as on models predicting various side effects. Although proton beam therapy was able to achieve a high relative reduction of dose exposure in contralateral organs at risk, the associated reduction of side-effect probabilities was less pronounced. Using a model-based selection procedure, the majority of the examined patients would have been eligible for proton beam therapy, mainly due to the predictions of a model on neurocognitive function.

1. Introduction
2. Theoretical background
2.1 Treatment strategies for tumours in the brain and skull base
2.1.1 Gliomas
2.1.2 Meningiomas
2.1.3 Pituitary adenomas
2.1.4 Tumours of the skull base
2.1.5 Role of proton beam therapy
2.2 Radiotherapy with photons and protons
2.2.1 Biological effect of radiation
2.2.2 Basic physical principles of radiotherapy
2.2.3 Field formation in radiotherapy
2.2.4 Target definition and delineation of organs at risk
2.2.5 Treatment plan assessment
2.3 Patient outcome
2.3.1 Scoring of side effects
2.3.2 Patient-reported outcome measures – Quality of life
2.3.3 Measures of neurocognitive function
2.4 Normal tissue complication probability models
2.4.1 Types of NTCP models
2.4.2 Endpoint definition and parameter fitting
2.4.3 Assessment of model performance
2.4.4 Model validation
2.5 Model-based approach for patient selection for proton beam therapy
2.5.1 Limits of randomised controlled trials
2.5.2 Principles of the model-based approach
3. Investigated patient cohorts
4. Modelling of side effects following cranial proton beam therapy
4.1 Experimental design for modelling early and late side effects
4.2 Modelling of early side effects
4.2.1 Results
4.2.2 Discussion
4.3 Modelling of late side effects
4.3.1 Results
4.3.2 Discussion
4.4 Interobserver variability of alopecia and erythema assessment
4.4.1 Patient cohort and experimental design
4.4.2 Results
4.4.3 Discussion
4.5 Summary
5. Assessing the neurocognitive function following cranial proton beam therapy
5.1 Patient cohort and experimental design
5.2 Results
5.2.1 Performance at baseline
5.2.2 Correlation between subjective and objective measures
5.2.3 Time-dependent score analyses
5.3 Discussion and conclusion
5.4 Summary
6. Treatment plan and NTCP comparison for patients with intracranial tumours
6.1 Motivation
6.2 Treatment plan comparison of cranial proton and photon radiotherapy
6.2.1 Patient cohort and experimental design
6.2.2 Results
6.2.3 Discussion
6.3 Application of NTCP models
6.3.1 Patient cohort and experimental design
6.3.2 Results
6.3.3 Discussion
6.4 Summary
7. Conclusion and further perspectives
8. Zusammenfassung
9. Summary
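As background to the model-based selection procedure summarised in the abstract of this entry, NTCP models are typically logistic functions of dose metrics, and plan comparison is based on the predicted difference in complication probability; the following is a generic sketch of that logic with an assumed benefit threshold δ, not the specific models developed in this thesis.

```latex
% Generic logistic NTCP model with dose metric D (e.g. mean organ dose)
% and possibly further clinical covariates x:
\mathrm{NTCP}(D, x) = \frac{1}{1 + \exp\!\bigl(-(\beta_0 + \beta_1 D + \beta_2^{\top} x)\bigr)}

% Model-based plan comparison: a patient qualifies for proton therapy for a given
% side effect when the predicted benefit exceeds a chosen threshold \delta:
\Delta\mathrm{NTCP} = \mathrm{NTCP}(D_{\text{photon}}, x) - \mathrm{NTCP}(D_{\text{proton}}, x) \;\ge\; \delta .
```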
8 |
Architecture logicielle générique et approche à base de modèles pour la sûreté de fonctionnement des systèmes interactifs critiques / Generic software architecture and model-based approach for the dependability of interactive critical systems. Fayollas, Camille, 21 July 2015.
Since the introduction of the ARINC 661 standard (which defines graphical interfaces in cockpits) in the early 2000s, modern aircraft such as the A380, the A350 or the B787 have featured interactive systems. The crew interacts, through physical devices similar to a keyboard and mouse, with interactive applications displayed on screens. For dependability reasons, only non-critical avionics systems are currently managed using such interactive systems. However, their use brings several advantages (such as better upgradability), leading aircraft manufacturers to seek to generalize the use of such interactive systems to the management of critical avionics functions. To reach this goal, we propose a dual and homogeneous fault-prevention and fault-tolerance approach. First, we propose a model-based approach to describe interactive software components in a complete and unambiguous way, in order to prevent software development faults as far as possible. Second, we propose a fault-tolerance approach to deal with natural faults in operation and with some residual software faults. This is achieved through the implementation of a fault-tolerant architecture based on the principle of self-checking components. Our approach is illustrated on an industrial-scale case study: an interactive application based on the command and control system of the A380 autopilot.
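The fault-tolerance part of the approach relies on self-checking (autotestable) components, i.e. a functional channel paired with an independent monitor that accepts or rejects its outputs. The sketch below is a generic illustration of that pattern with hypothetical names and values; it is not the architecture implemented in the thesis.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SelfCheckingComponent:
    """Pairs a functional channel with an independent monitor (command/monitor pattern).

    Generic illustration: if the monitor rejects the functional output, an error
    is signalled instead of propagating a possibly erroneous value.
    """
    function: Callable[[float], float]       # functional channel (e.g. compute a command)
    monitor: Callable[[float, float], bool]  # independent acceptance check on (input, output)

    def execute(self, value: float) -> Optional[float]:
        output = self.function(value)
        if self.monitor(value, output):
            return output   # output accepted by the monitor
        return None         # error detected: signal failure, do not propagate the value

# Hypothetical usage: an autopilot-like speed setpoint with a range-plausibility monitor
component = SelfCheckingComponent(
    function=lambda target: max(min(target, 350.0), 100.0),  # clamp the setpoint
    monitor=lambda target, out: 100.0 <= out <= 350.0,       # plausibility check
)
print(component.execute(280.0))  # -> 280.0 (accepted)
print(component.execute(1e6))    # -> 350.0 (clamped; a faulty channel output outside the range would yield None)
```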
9 |
Évaluation de programmes de prétraitement de signal d'activité électrodermale (EDA) / Evaluation of preprocessing programs for electrodermal activity (EDA) signals. DeRoy, Claudéric, 08 1900.
Link to the GitHub repository containing all the tools programmed as part of this thesis: https://github.com/neurok8050/eda-optimisation-processing-tool

Electrodermal activity (EDA), particularly the skin conductance response (SCR), is a psychophysiological signal frequently used in research in psychology and cognitive neuroscience. Nevertheless, using EDA comes with some challenges, notably with regard to its preprocessing. Indeed, very few research teams adequately preprocess their data. Our objective is to promote the implementation of SCR preprocessing and to offer recommendations to researchers by providing data on the effect of preprocessing on the ability of the SCR to discriminate between two experimental conditions. Based on similar work, we tested the effect of preprocessing combinations using different filtering methods, different rescaling methods, and the inclusion of an automatic motion-artefact detection step, while using different operationalist metrics (peak scoring (PS) and area under the curve (AUC)) as well as model-based metrics. Finally, we tested whether a single combination could be used across different datasets or whether the preprocessing should be optimized individually for each dataset. Our results show that 1) the inclusion of the automatic motion detection step did not significantly impact the ability to discriminate between two experimental conditions, 2) the model-based approach seems to be slightly better at discriminating between two experimental conditions, and 3) the best combination of preprocessing seems to vary between datasets. The data and tools presented in this master's thesis should promote and facilitate SCR signal preprocessing.
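As a rough illustration of the kind of preprocessing combination evaluated here (a filtering method followed by a rescaling method), the snippet below shows one possible pipeline; the filter type, order, cutoff, and min-max rescaling are illustrative assumptions, not the combinations recommended by the thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_scr(raw: np.ndarray, fs: float, cutoff_hz: float = 1.0) -> np.ndarray:
    """Low-pass filter and rescale a raw skin conductance signal.

    raw       : raw EDA/SCR samples (microsiemens)
    fs        : sampling rate in Hz
    cutoff_hz : low-pass cutoff; 1 Hz is an illustrative value, not a recommendation
    """
    # Zero-phase Butterworth low-pass filter to attenuate high-frequency noise
    b, a = butter(N=4, Wn=cutoff_hz, btype="low", fs=fs)
    filtered = filtfilt(b, a, raw)

    # Min-max rescaling to [0, 1], one of several possible rescaling methods
    span = filtered.max() - filtered.min()
    return (filtered - filtered.min()) / span if span > 0 else np.zeros_like(filtered)

# Example with synthetic data: slow drift plus noise, sampled at 32 Hz
fs = 32.0
t = np.arange(0, 60, 1 / fs)
raw = 2.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * np.random.randn(t.size)
clean = preprocess_scr(raw, fs)
```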
10 |
Some aspects of human performance in a Human Adaptive Mechatronics (HAM) system. Parthornratt, Tussanai, January 2011.
Interest in developing intelligent machine systems that work in conjunction with humans has been growing rapidly in recent years. A number of studies have been conducted to shed light on how to design interactive, adaptive and assistive machine systems that serve a wide range of purposes, including common ones such as training, manufacturing and rehabilitation. In 2003, Human Adaptive Mechatronics (HAM) was proposed to address these issues. In past research, the focus has predominantly been on the evaluation of human skill rather than human performance, which is why intensive training and the selection of suitable human subjects were required for those experiments. As a result, the pattern and state of the control motion are of critical concern in those works. In this research, the focus is shifted from human skill to human performance, as performance is prone to being neglected and actual work quality is otherwise left unreflected. Human performance, or the Human Performance Index (HPI), is defined to consist of speed and accuracy characteristics, in line with the well-known speed-accuracy trade-off, or Fitts' law. The speed and accuracy characteristics are collectively referred to as the speed and accuracy criteria, with the corresponding contributors referred to as the speed and accuracy variables, respectively. This research aims to establish the validity of the HPI concept for systems with different architectures, i.e. with and without hardware elements. Direct use of the system output logged from the operating field is the main method of HPI computation, referred to in this thesis as the non-model approach. To ensure the validity of these results, they are compared against a model-based approach grounded in System Identification theory; its name reflects the fact that it involves deriving a mathematical model of the human operator and extracting the performance variables from that model. Certain steps are required to match the processing outlined for the non-model approach. The outputs of some human operators with complicated patterns are not accurately captured or explained by the ARX models.
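For reference, the two modelling ingredients named above can be written in their standard textbook forms (given here for illustration, not as the thesis's exact formulations): Fitts' law relating movement time to task difficulty, and the ARX structure commonly used in system identification of the operator.

```latex
% Fitts' law: movement time MT as a function of target distance D and target width W
MT = a + b \,\log_2\!\left(\frac{2D}{W}\right),
\qquad \mathrm{ID} = \log_2\!\left(\frac{2D}{W}\right) \ \text{(index of difficulty, in bits)}

% ARX(n_a, n_b) model structure for an identified operator with input u, output y, noise e:
y(t) + a_1 y(t-1) + \dots + a_{n_a} y(t-n_a)
  = b_1 u(t-1) + \dots + b_{n_b} u(t-n_b) + e(t)
```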