341

Analysis and Visualization of Validation Results

Forss, Carl-Philip January 2015 (has links)
Usage of simulation models is an essential part of many modern engineering disciplines. Computer models of complex physical systems can be used to expedite the design of control systems and reduce the number of physical tests. Model validation tries to answer the question of whether the model is a good enough representation of the physical system. This thesis describes techniques to visualize multi-dimensional validation results and the search for an automated validation process. The work focuses on a simulation model of the Primary Environmental Control System of Gripen E, but can be applied to validation results from other simulation models. The results of the thesis can be divided into three major components: static validation, dynamic validation and model coverage. To present the results from the static validation, different multi-dimensional visualization techniques are investigated and evaluated. The visualizations are compared to each other, and to properly depict the static validation status of the model a combination of visualizations is required. Two methods for validation of the dynamic performance of the model are examined. The first method uses the singular values of an error model estimated from the residual. We show that the singular values of the error model relay important information about the model's quality, but interpreting the result is a considerable challenge. The second method aims to automate a visual inspection procedure where interesting quantities are automatically computed. Coverage describes how much of the applicable operating conditions has been validated. Two coverage metrics, volumetric coverage and nearest neighbour coverage, are examined, and the strengths and weaknesses of these metrics are presented. The nearest neighbour coverage metric is further developed to account for validation performance, resulting in a total static validation quantity.
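As a rough illustration of the nearest neighbour coverage idea mentioned in this abstract, the sketch below scores coverage of a two-dimensional operating envelope by how close each grid point lies to a validated operating point. The grid, the distance radius, and the use of SciPy's k-d tree are illustrative assumptions, not the thesis's exact formulation.

```python
# Minimal sketch of a nearest-neighbour coverage metric: for each point in a grid
# spanning the operating envelope, coverage is scored by its distance to the closest
# validated operating point. Radius and grid resolution are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbour_coverage(validated_points, envelope_grid, radius):
    """Fraction of envelope grid points within `radius` of a validated point."""
    tree = cKDTree(validated_points)
    distances, _ = tree.query(envelope_grid, k=1)
    return np.mean(distances <= radius)

# Example: a normalized two-dimensional operating envelope with 25 validated points.
rng = np.random.default_rng(0)
validated = rng.uniform(0.0, 1.0, size=(25, 2))
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
print(f"coverage: {nearest_neighbour_coverage(validated, grid, radius=0.15):.2f}")
```

Weighting each grid point by local validation performance, as the abstract describes, would turn this plain coverage fraction into the kind of combined quantity the thesis calls total static validation.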
342

Mass Spectrometry and Affinity Based Methods for Analysis of Proteins and Proteomes

Sundberg, Mårten January 2015 (has links)
Proteomics is a fast-growing field, and knowledge has increased tremendously over the last two decades. Mass spectrometry is the most used method for analysis of complex protein samples. It can be used both in large-scale discovery studies and in targeted quantitative studies. In parallel with the fast improvements of mass spectrometry-based proteomics there has been a fast growth of affinity-based methods. A common challenge is the large dynamic range of protein concentrations in biological samples; no method can today cover the whole dynamic range. If affinity-based and mass spectrometry-based proteomics could be used in better combination, this would be partly solved. The challenge for affinity-based proteomics is the poor specificity that has been seen for many of the commercially available antibodies. In mass spectrometry, the challenges are sensitivity and sample throughput. In this thesis, large-scale approaches for validation of antibodies and other binders are presented. Protein microarrays were used in four validation studies and one was based on mass spectrometry. It is shown that protein microarrays can be valuable tools to check the specificity of antibodies produced on a large scale. Mass spectrometry was shown to give results similar to Western blot and immunohistochemistry regarding specificity, but also provided useful information about which other proteins were bound to the antibody. Mass spectrometry has many applications, and this thesis presents two methods contributing new knowledge in animal proteomics. A combination of high-affinity depletion, SDS-PAGE and mass spectrometry revealed 983 proteins in dog cerebrospinal fluid, of which 801 were marked as uncharacterized in UniProt. A targeted quantitative study of cat serum based on parallel reaction monitoring showed that mass spectrometry can be an applicable alternative to ELISA in animal proteomic studies. Mass spectrometry is a generic method and has the advantage of shorter and less expensive development of specific assays that are not hampered by cross-reactivity. Mass spectrometry supported by affinity-based applications will be an attractive tool for further improvements in the proteomic field.
343

Validating wireless network simulations using direct execution

Mandke, Ketan Jayant, 1980- 11 July 2012 (has links)
Simulation is a powerful and efficient tool for studying wireless networks. Despite the widespread use of simulation, particularly in the study of IEEE 802.11-style networks (e.g., WLAN, mesh, and ad hoc networks), doubts about the credibility of simulation results still persist in the research community. These concerns stem, in part, from a lack of trust in some of the models used in simulation, as they do not always accurately reflect reality. Models of the physical layer (PHY), in particular, are a key source of concern. The behavior of the physical layer varies greatly depending on the specifics of the wireless environment, making it difficult to characterize. Validation is the primary means of establishing trust in such models. We present an approach to validating physical layer models using the direct execution of a real PHY implementation inside the wireless simulation environment. This approach leverages the credibility inherent to testbeds, while maintaining the scalability and repeatability associated with simulation. Specifically, we use the PHY implementation from Hydra, a software-defined radio testbed, to validate the sophisticated physical layer model of a new wireless network simulator, called WiNS. This PHY model is also employed in other state-of-the-art network simulators, including ns-3. As such, this validation study also provides insight into the fidelity of other wireless network simulators using this model. This physical layer model is especially important because it is used to represent the physical layer for systems in 802.11-style networks, and network simulation is a particularly popular method for studying these kinds of wireless networks. We use direct execution to evaluate the accuracy of our PHY model from the perspectives of different protocol layers. First, we characterize the link-level behavior of the physical layer under different wireless channels and impairments. We identify operating regimes where the model is accurate and show accountable differences where it is not. We then use direct execution to evaluate the accuracy of the PHY model in the presence of interference. We develop "error-maps" that provide guidance to model users in evaluating the potential impact of model inaccuracy in terms of the interference in their own simulation scenarios. This part of our study helps to develop a better understanding of the fidelity of our model from a physical layer perspective. We also demonstrate the efficacy of direct execution in evaluating the accuracy of our PHY model from the perspectives of the MAC and network layers. Specifically, we use direct execution to investigate a rate-adaptive MAC protocol and an ad hoc routing protocol. This part of our study demonstrates how the semantics and policies of such protocols can influence the impact that a PHY model has on network simulations. We also show that direct execution helps us to identify when a model that is inaccurate from the perspective of the PHY can still be used to generate trustworthy simulation results. The results of this study show that the leading physical layer model employed by WiNS and other state-of-the-art network simulators, including ns-3, is accurate under a limited set of wireless conditions. Moreover, our validation study demonstrates that direct execution is an effective means of evaluating the accuracy of a PHY model and allows us to identify the operating conditions and protocol configurations where the model can be used to generate trustworthy simulation results.
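The link-level comparison this abstract describes can be pictured with a hedged sketch like the one below: packet error rates predicted by a simulator's PHY model are contrasted with rates obtained by directly executing a real PHY over the same conditions. All numbers and the accuracy tolerance are invented for illustration and are not results from the thesis.

```python
# Hedged sketch of a link-level accuracy check: PER from the simulator's PHY model
# versus PER from direct execution of a real PHY, flagged against an illustrative
# 0.05 tolerance. Data values are made up for demonstration.
import numpy as np

snr_db          = np.arange(0, 21, 2)                               # SNR sweep (dB)
per_model       = np.array([0.95, 0.90, 0.75, 0.55, 0.30, 0.12,
                            0.04, 0.01, 0.003, 0.001, 0.0003])      # simulated PHY model
per_direct_exec = np.array([0.97, 0.93, 0.80, 0.62, 0.38, 0.18,
                            0.07, 0.02, 0.005, 0.001, 0.0003])      # directly executed PHY

abs_error = np.abs(per_model - per_direct_exec)
for snr, err in zip(snr_db, abs_error):
    status = "within tolerance" if err <= 0.05 else "model inaccurate"
    print(f"SNR {snr:2d} dB: |PER error| = {err:.3f} -> {status}")
```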
344

Characterization of [18F]flutemetamol binding properties : A β-amyloid PET imaging ligand

Heurling, Kerstin January 2015 (has links)
The criteria for diagnosing Alzheimer’s disease (AD) have recently been revised to include the use of biomarkers for the in vivo presence of β-amyloid, one of the neuropathological hallmarks of AD. Examples of such biomarkers are positron emission tomography (PET) β-amyloid specific ligands, including [18F]flutemetamol. The aim of this thesis was to characterize the binding properties of [18F]flutemetamol from a tracer kinetic perspective as well as by validating binding measures through comparison with tissue pathology assessments. The applicability of previously developed kinetic models of tracer binding for voxel-based analysis was examined and compared to arterial input compartment modelling, the “gold standard” for PET quantification. Several voxel-based methods were found to exhibit high correlations with compartment modelling, including the semi-quantitative standardized uptake value ratio (SUVR). The kinetic components of [18F]flutemetamol uptake were also investigated without model assumptions using the data driven method spectral analysis, with binding to β-amyloid shown to relate to a slow kinetic component. The same component was also found to predominate in the uptake of white matter, known to be free of β-amyloid accumulation. White matter uptake was however possible to separate from β-amyloid binding based on the relative contribution of the slow component to the total volume of distribution. Uptake of [18F]flutemetamol as quantified using SUVR or assessed visually was found to correlate well with tissue pathology assessments. Classifying the brains of 68 deceased subjects who had undergone [18F]flutemetamol PET scanning ante mortem, based on the spatial distribution of β-amyloid according to pre-defined phases, revealed that abnormal uptake patterns of [18F]flutemetamol were only certain to be found in the last phase of β-amyloid accumulation. In the same cohort however, [18F]flutemetamol was also shown to accurately distinguish between subjects with AD and non-AD dementia. While this supports the use of [18F]flutemetamol in clinical settings for ruling out AD, the association of abnormal [18F]flutemetamol uptake to late phases of β-amyloid accumulation may limit the detection of early accumulation and pre-clinical stages of AD. It remains to be investigated whether application of voxel-based methods and slow component filtering may increase sensitivity, particularly in the context of clinical trials.
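For readers unfamiliar with the semi-quantitative SUVR mentioned in this abstract, it is simply mean tracer uptake in a target region divided by mean uptake in a reference region. The sketch below shows that arithmetic; the voxel values and the choice of cerebellum as reference are illustrative assumptions, not data or settings from the thesis.

```python
# Minimal sketch of a standardized uptake value ratio (SUVR): target-region mean
# uptake divided by reference-region mean uptake. All values are illustrative.
import numpy as np

def suvr(target_voxels, reference_voxels):
    """Standardized uptake value ratio between a target and a reference region."""
    return np.mean(target_voxels) / np.mean(reference_voxels)

cortex     = np.array([1.8, 2.1, 1.9, 2.3, 2.0])   # cortical target region (arbitrary units)
cerebellum = np.array([1.1, 1.0, 1.2, 0.9, 1.0])   # reference region (assumed here)
print(f"SUVR = {suvr(cortex, cerebellum):.2f}")     # abnormality threshold is cohort-dependent
```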
345

Using C-Alpha Geometry to Describe Protein Secondary Structure and Motifs

Williams, Christopher Joseph January 2015 (has links)
X-ray crystallography 3D atomic models are used in a variety of research areas to understand and manipulate protein structure. Research and application are dependent on the quality of the models. Low-resolution experimental data is a common problem in crystallography, making it difficult to solve structures and produce the reliable models that many scientists depend on. In this work, I develop new, automated tools for validation and correction of low-resolution structures. These tools are gathered under the name CaBLAM, for C-alpha Based Low-resolution Annotation Method. CaBLAM uses a unique, C-alpha-geometry-based parameter space to identify outliers in protein backbone geometry, and to identify secondary structure that may be masked by modeling errors. CaBLAM was developed in the Python programming language as part of the Phenix crystallography suite and the open CCTBX Project. It makes use of architecture and methods available in the CCTBX toolbox. Quality-filtered databases of high-resolution protein structures, especially the Top8000, were used to construct contours of expected protein behavior for CaBLAM. CaBLAM has also been integrated into the codebase for the Richardson Lab's online MolProbity validation service. CaBLAM succeeds in providing useful validation feedback for protein structures in the 2.5-4.0 Å resolution range. This success demonstrates the relative reliability of the C-alpha trace of a protein in this resolution range. Full mainchain information can be extrapolated from the C-alpha trace, especially for regular secondary structure elements. CaBLAM has also informed our approach to validation for low-resolution structures. Moderation of feedback, to reduce validation overload and to focus user attention on modeling errors that are both significant and correctable, is one of our goals. CaBLAM and the related methods that have grown around it demonstrate the progress towards this goal.
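The C-alpha-only geometry that a method like CaBLAM builds on can be illustrated with a virtual dihedral computed from four consecutive C-alpha positions. This is only a sketch of that arithmetic: CaBLAM's actual parameter space combines several such pseudo-dihedrals and angles per residue, and the coordinates below are made up for illustration.

```python
# Minimal sketch of C-alpha-only geometry: a signed virtual dihedral from four
# consecutive C-alpha atoms, the kind of quantity a CA-based parameter space uses.
import numpy as np

def virtual_dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) defined by four points."""
    b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
    b1 /= np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1          # component of b0 perpendicular to b1
    w = b2 - np.dot(b2, b1) * b1          # component of b2 perpendicular to b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

# Illustrative coordinates (Å) for four consecutive C-alpha atoms.
ca = np.array([[0.0, 0.0, 0.0],
               [2.3, 2.3, 1.5],
               [0.9, 4.9, 3.0],
               [-1.7, 3.8, 4.5]])
print(f"CA virtual dihedral: {virtual_dihedral(*ca):.1f} degrees")
```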
346

Mind over matter : Non-cognitive assessments for the selection of the Swedish voluntary soldier of peace

Bäccman, Charlotte January 2015 (has links)
The purpose of this thesis was, firstly, to investigate whether the current selection system mirrors the task of international deployment and voluntariness; secondly, to investigate if and how non-cognitive assessments of personality and resilience, individual aspects that seem underrepresented in the current selection system, may add incremental validity to it. Since 2012 the Swedish Armed Forces has been an all-volunteer force in which young men and women can voluntarily apply for military service. In contrast to conscription, military service today includes compulsory international deployments, with different demands on the personnel's range of possible abilities and skills as well as on the selection process; yet the current selection system may not sufficiently correspond to these changes. The thesis comprises four studies (Study I-IV) with relevant military samples; aside from Study I, a validation of a short-version personality questionnaire (PQ) used in two of the subsequent studies, Studies II-IV had a longitudinal design. Study II shows that the former selection system lacked prognostic value for soldiers' performance during international deployment and their ability to readjust at homecoming. Additionally, Study II shows that non-cognitive assessments can be used as predictors of readjustment. Study III indicates that international deployment need not be harmful to psychological well-being and that good health seems to be a stable factor across time and situations; thus, selection for "good health" and resilience may prove fruitful. Study IV suggests that high motivation to serve may have serious consequences for selection decisions and, in the long run, the recruits' psychological well-being. In sum, this thesis suggests that the current selection system needs adaptation to the task of repeated international deployments and to the voluntary applicant pool, and that non-cognitive assessments may add incremental validity. / Since the end of the Cold War the Swedish Armed Forces has undergone several changes regarding both task and personnel system. The task of national security entails not only territorial defense but also international operations worldwide. In addition, the soldiers are no longer conscripts but young men and women who have volunteered to secure and uphold peace and democratic values. The purpose of this thesis was twofold: firstly, to investigate whether the current selection system mirrors the recent refocus on international operations and voluntariness; secondly, to see if and how non-cognitive assessments of personality, health, and resilience add incremental validity to the current selection system in identifying individuals suitable for repeated international deployments. This work was guided by a series of tentative questions regarding the selection system in particular, but also international deployments in general. The four papers in this thesis suggest that the current selection system needs to be adapted to better correspond to repeated international deployments as well as to a voluntary applicant pool, and that non-cognitive assessments of personality, health, and resilience add incremental validity to the selection system.
347

Fuzzy land cover change detection and validation : a comparison of fuzzy and Boolean analyses in Tripoli City, Libya

Khmag, Abdulhakim Emhemad January 2013 (has links)
This research extends fuzzy methods to consider the fuzzy validation of fuzzy land cover data at the sub-pixel level. The study analyses the relationships between fuzzy memberships generated by field survey and those generated from the classification of remotely sensed data. In so doing it examines the variations in the relationship between observed and predicted fuzzy land cover classes. This research applies three land cover classification techniques: fuzzy sets, fuzzy c-means and Boolean classification, and develops three models to determine fuzzy land cover change. The first model depends on fuzzy object change. The second model depends on sub-pixel change through a fuzzy change matrix, for both fuzzy sets and fuzzy c-means, to compute the fuzzy change, fuzzy loss and fuzzy gain. The third model is a Boolean change model which evaluates change on a pixel-by-pixel basis. The results show that using a fuzzy change analysis presents a subtle way of mapping a heterogeneous area with common mixed pixels. Furthermore, the results show that the fuzzy change matrix gives more detail and information about land cover change and is more appropriate than fuzzy object change because it deals with sub-pixel change. Finally, the research has found that a fuzzy error matrix is more suitable than a conventional error matrix for soft classification validation because it can compare the membership from the field with the classified image. From this research arise some important points:
• Fuzzy methodologies have the ability to define the uncertainties associated with describing the phenomenon itself and the ability to take into consideration the effect of mixed pixels.
• This research compared fuzzy sets and fuzzy c-means, and found that fuzzy sets are more suitable than fuzzy c-means, because the latter suffers from some disadvantages, chiefly that the sum of membership values of a data point in all the clusters must be one, so the algorithm has difficulty in handling outlying points.
• This research validates fuzzy classifications by determining the fuzzy memberships in the field and comparing them with the memberships derived from the classified image.
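The fuzzy change matrix mentioned in this abstract can be sketched for a single pixel as follows. The use of the minimum operator as the fuzzy intersection, the class names, and the loss/gain definitions are illustrative assumptions and may differ from the exact operators used in the thesis.

```python
# Hedged sketch of a per-pixel fuzzy change matrix: entry (i, j) is the fuzzy
# intersection (minimum operator, assumed here) of membership in class i at time 1
# and class j at time 2. Diagonal mass is persistence; the remainder is fuzzy change.
import numpy as np

def fuzzy_change_matrix(m_t1, m_t2):
    """Per-pixel fuzzy change matrix from membership vectors at two dates."""
    return np.minimum.outer(m_t1, m_t2)

classes = ["urban", "vegetation", "bare soil"]
m_t1 = np.array([0.2, 0.7, 0.1])          # memberships at time 1 (sum to 1)
m_t2 = np.array([0.6, 0.3, 0.1])          # memberships at time 2

C = fuzzy_change_matrix(m_t1, m_t2)
fuzzy_loss = m_t1 - C.diagonal()           # membership lost per class (simplified)
fuzzy_gain = m_t2 - C.diagonal()           # membership gained per class (simplified)
print(C)
print(dict(zip(classes, np.round(fuzzy_loss, 2))),
      dict(zip(classes, np.round(fuzzy_gain, 2))))
```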
348

Parallel Computing in Statistical-Validation of Clustering Algorithm for the Analysis of High throughput Data

Atlas, Mourad 12 May 2005 (has links)
Currently, clustering applications use classical methods to partition a set of data (or objects) into a set of meaningful sub-classes, called clusters. A cluster is therefore a collection of objects which are "similar" among themselves, and can thus be treated collectively as one group, while being "dissimilar" to the objects belonging to other clusters. However, there are a number of problems with clustering. Among them, as mentioned in [Datta03], dealing with large numbers of dimensions and large numbers of data items can be problematic because of computational time. In this thesis, we investigate all clustering algorithms used in [Datta03] and present a parallel solution to minimize the computational time. We apply parallel programming techniques to the statistical algorithms as a natural extension of the sequential programming technique using R. The proposed parallel model has been tested on a high-throughput dataset: microarray data on the transcriptional profile during sporulation in budding yeast, containing more than 6,000 genes. Our evaluation includes clustering algorithm scalability pertaining to datasets with varying dimensions, the speedup factor, and the efficiency of the parallel model over the sequential implementation. Our experiments show that the gene expression data follow the pattern predicted in [Datta03]; that is, Diana appears to be a solid performer, and the group means for each cluster coincide with those in [Datta03]. We show that our parallel model is applicable to the clustering algorithms and more useful in applications that deal with high-throughput data, such as gene expression data.
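The thesis implements its parallel model in R; the sketch below only illustrates, in Python, the embarrassingly parallel structure it describes: each worker independently fits one (algorithm, number-of-clusters) configuration and returns a validation score. The algorithm choices, the silhouette score, and the synthetic data are illustrative assumptions, not the thesis's procedure.

```python
# Hedged sketch of parallel evaluation of clustering configurations: one process per
# (algorithm, k) pair, each returning a validation score for later comparison.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

def evaluate(args):
    name, k, data = args
    model = KMeans(n_clusters=k, n_init=10) if name == "kmeans" \
        else AgglomerativeClustering(n_clusters=k)
    labels = model.fit_predict(data)
    return name, k, silhouette_score(data, labels)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    expression = rng.normal(size=(600, 7))       # stand-in for gene expression profiles
    configs = [(name, k, expression)
               for name, k in product(["kmeans", "agglomerative"], range(2, 7))]
    with ProcessPoolExecutor() as pool:
        for name, k, score in pool.map(evaluate, configs):
            print(f"{name:14s} k={k}: silhouette={score:.3f}")
```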
349

A Review of Cross Validation and Adaptive Model Selection

Syed, Ali R 27 April 2011 (has links)
We perform a review of model selection procedures, in particular various cross validation procedures and adaptive model selection. We cover important results for these procedures and explore the connections between different procedures and information criteria.
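As a concrete companion to the procedures reviewed in this abstract, the sketch below shows plain K-fold cross validation used to compare candidate models: the data are split into K folds, each fold serves once as the validation set, and the average held-out error guides the selection. The polynomial-degree example and all numbers are illustrative.

```python
# Minimal sketch of K-fold cross validation for model selection.
import numpy as np

def k_fold_cv_error(x, y, fit, predict, k=5, seed=0):
    """Average squared validation error over K folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])
        errors.append(np.mean((predict(model, x[val]) - y[val]) ** 2))
    return float(np.mean(errors))

# Toy adaptive selection: choose a polynomial degree by cross-validated error.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 80)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.2, size=x.size)
for degree in (1, 2, 5):
    err = k_fold_cv_error(
        x, y,
        fit=lambda xs, ys, d=degree: np.polyfit(xs, ys, d),
        predict=lambda coeffs, xs: np.polyval(coeffs, xs),
    )
    print(f"degree {degree}: CV error = {err:.4f}")
```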
350

Accurate Surveillance of Diabetes Mellitus in Nova Scotia within the General Population and the Five First Nations of Cape Breton

Clark, Roderick 03 October 2011 (has links)
Administrative data are one of the most commonly used data sources for diagnosed diabetes surveillance within Canada. Despite their widespread use, administrative case definitions have not been validated in many minority populations on which they are commonly used. Additionally, previous validation work has not evaluated the effect of conditional covariance between data sources, which has been widely shown to significantly bias parameter (sensitivity, specificity, and prevalence) estimation. Using administrative data and data sources which contained gold standard cases of diabetes, this thesis examined (1) the validity of commonly used administrative case definitions for identifying cases of diagnosed diabetes within an Aboriginal population at the sub-provincial level, and (2) the effect of conditional covariance on parameter estimates of an administrative case definition used to identify cases of diagnosed diabetes within the general population of Nova Scotia. We found significant differences in the sensitivity and specificity of a commonly used administrative case definition when applied to an Aboriginal population at the sub-provincial level. For the general population of Nova Scotia, we found that including a parameter to estimate conditional covariance between data sources resulted in significant variation in sensitivity, specificity, and prevalence estimates as compared to a study which did not consider this parameter. We conclude that work must continue to validate administrative case definitions both within minority populations and for the general population to enhance diabetes surveillance systems in Canada. / Validation study for administrative case definitions to identify cases of diagnosed diabetes in Canada
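The core validation arithmetic this abstract refers to is the comparison of cases flagged by an administrative definition against a gold standard. The sketch below shows that computation; the counts are invented for illustration, and conditional dependence between data sources, which the thesis explicitly models, is ignored here.

```python
# Minimal sketch of diagnostic-accuracy arithmetic for an administrative case
# definition versus a gold standard. Counts are illustrative, not study data.
def diagnostic_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # flagged among true diabetes cases
    specificity = tn / (tn + fp)          # not flagged among true non-cases
    prevalence  = (tp + fn) / (tp + fp + fn + tn)
    return sensitivity, specificity, prevalence

sens, spec, prev = diagnostic_accuracy(tp=430, fp=55, fn=70, tn=9445)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, prevalence={prev:.3f}")
```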
