About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Real versus Simulated Data for Image Reconstruction: A comparison between training with sparse simulated data and sparse real data

Maiga, Aïssata; Löv, Johanna January 2021
Our study investigates how training with sparse simulated data versus sparse real data from an event camera affects image reconstruction. We trained two models, one on simulated data and one on real data, and compared them on several criteria, such as number of events, speed, and high dynamic range (HDR). The results indicate that the difference between simulated and real data is not large. The model trained on real data performed better in most cases, but the average difference between the results is only 2%. The findings confirm what earlier studies have shown: training with simulated data generalises well, even when training on sparse datasets, as this study shows.
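The abstract describes comparing two trained models on shared criteria. As a minimal sketch of such an evaluation harness, the code below scores two hypothetical reconstruction models on the same synthetic test set with mean squared error; the model functions and data are placeholders, not the authors' event-camera pipeline.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between a reconstruction and its ground truth."""
    return float(np.mean((a - b) ** 2))

def evaluate(model, event_batches, ground_truths):
    """Average reconstruction error of one model over a shared test set."""
    errors = [mse(model(ev), gt) for ev, gt in zip(event_batches, ground_truths)]
    return float(np.mean(errors))

# Placeholder "models" standing in for networks trained on simulated vs. real data.
model_sim = lambda ev: ev.mean(axis=0)          # hypothetical
model_real = lambda ev: np.median(ev, axis=0)   # hypothetical

rng = np.random.default_rng(0)
events = [rng.random((16, 32, 32)) for _ in range(8)]  # sparse event stacks
truths = [e.mean(axis=0) for e in events]              # synthetic ground truth

print("simulated-trained:", evaluate(model_sim, events, truths))
print("real-trained:     ", evaluate(model_real, events, truths))
```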
2

Efficiency measurement: a methodological comparison of parametric and non-parametric approaches

Zheng, Wanyu January 2013
The thesis examines technical efficiency using frontier efficiency estimation techniques from parametric and non-parametric approaches. Five frontier efficiency estimation techniques are considered: stochastic frontier analysis (SFA), the distribution-free approach (DFA), and the data envelopment analysis models DEA-CCR, DEA-BCC, and DEA-RAM. These techniques are applied to an artificially generated panel dataset built on a two-input, two-output production function framework based on characteristics of German life insurers. The key contributions of the thesis are, first, a study that uses a simulated panel dataset to estimate frontier efficiency techniques and, second, a research framework that compares multiple frontier efficiency techniques across parametric and non-parametric approaches in the context of simulated panel data. The findings suggest that, in contrast to previous studies, parametric and non-parametric approaches can both generate comparable technical efficiency scores with simulated data. Moreover, the parametric techniques (SFA and DFA) are consistent with each other, and the same applies to the non-parametric DEA models. The study also discusses some important theoretical and methodological implications of the findings and suggests ways in which future research can overcome some of the restrictions associated with current approaches.
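The abstract names the techniques without detail. For readers unfamiliar with data envelopment analysis, the sketch below solves the standard input-oriented DEA-CCR linear program for each decision-making unit (DMU) with scipy; the toy two-input, two-output data are invented, and this is the generic textbook formulation, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_scores(X, Y):
    """Input-oriented CCR efficiency: for each DMU o, minimise theta subject to
    X'lambda <= theta * x_o, Y'lambda >= y_o, lambda >= 0. Decision vector is
    z = [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape          # n DMUs, m inputs
    _, s = Y.shape          # s outputs
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # objective: minimise theta
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # X'lambda - theta*x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # -Y'lambda <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        bounds = [(0, None)] * (n + 1)                 # theta, lambda >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Invented two-input, two-output data for five DMUs.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 2.0], [1.0, 1.0], [1.0, 2.0]])
print(dea_ccr_scores(X, Y).round(3))   # efficiency score of 1.0 = on the frontier
```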
3

Examining the Confirmatory Tetrad Analysis (CTA) as a Solution to the Inadequacy of Traditional Structural Equation Modeling (SEM) Fit Indices

Liu, Hangcheng 01 January 2018
Structural Equation Modeling (SEM) is a framework of statistical methods that allows us to represent complex relationships between variables. SEM is widely used in economics, genetics, and the behavioral sciences (e.g. psychology, psychobiology, sociology, and medicine). Model complexity is defined as a model's ability to fit different data patterns, and it plays an important role in model selection when applying SEM. As in linear regression, the number of free model parameters is typically used in traditional SEM fit indices as a measure of model complexity. However, using only the number of free parameters to indicate SEM model complexity is crude, since other contributing factors, such as the type of constraint or the functional form, are ignored. To address this problem, a special technique, Confirmatory Tetrad Analysis (CTA), is examined. A tetrad refers to the difference in the products of certain covariances (or correlations) among four random variables. A structural equation model often implies that some tetrads should be zero. These model-implied zero tetrads are called vanishing tetrads. In CTA, goodness of fit can be determined by testing the null hypothesis that the model-implied vanishing tetrads are equal to zero. CTA can help improve model selection because different functional forms may affect the number of model-implied vanishing tetrads (t), and models that are not nested according to the traditional likelihood ratio test may be nested in terms of tetrads. In this dissertation, an R package was created to perform CTA, a two-step method was developed to determine SEM model complexity using simulated data, and it is demonstrated how the number of vanishing tetrads can help indicate SEM model complexity in some situations.
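To make the tetrad definition concrete, here is a minimal sketch (not the dissertation's R package) that computes the three tetrads of a quadruple of variables from a covariance matrix, using the standard definition tau_ghij = s_gh*s_ij - s_gi*s_hj, and checks that they vanish for data generated by a one-factor model:

```python
import numpy as np

def tetrads(s: np.ndarray, g: int, h: int, i: int, j: int):
    """The three tetrads of one quadruple of variables, from covariance matrix s.
    A vanishing tetrad is one of these quantities that equals zero under the model;
    CTA tests that hypothesis against the sample covariances."""
    t1 = s[g, h] * s[i, j] - s[g, i] * s[h, j]   # tau_ghij
    t2 = s[g, i] * s[j, h] - s[g, j] * s[i, h]   # tau_gijh
    t3 = s[g, j] * s[h, i] - s[g, h] * s[j, i]   # tau_gjhi
    return t1, t2, t3  # t1 + t2 + t3 == 0, so only two are independent

# A one-factor model x_k = l_k * f + e_k implies all tetrads vanish in the
# population: cov(x_g, x_h) = l_g*l_h, so l_g*l_h * l_i*l_j - l_g*l_i * l_h*l_j = 0.
rng = np.random.default_rng(1)
loadings = np.array([0.8, 0.7, 0.6, 0.9])
f = rng.standard_normal(100_000)
X = np.outer(f, loadings) + 0.5 * rng.standard_normal((100_000, 4))
print(np.round(tetrads(np.cov(X, rowvar=False), 0, 1, 2, 3), 4))  # all near zero
```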
4

Investigating Marine Resources in the Gulf of Mexico at Multiple Spatial and Temporal Scales of Inquiry

Kilborn, Joshua Paul 17 November 2017 (has links)
The work in this dissertation investigates multiple temporal and spatial scales of inquiry relating to the variability of marine resources throughout the Gulf of Mexico large marine ecosystem (Gulf LME). This effort was undertaken over two spatial extents within the greater Gulf LME using two different time-series of fisheries monitoring data. Case studies demonstrating simple frameworks and best practices are presented with the aim of aiding researchers seeking to reduce errors and biases in scientific decision making. Two of the studies focused on three years of groundfish survey data collected across the West Florida Shelf (WFS), an ecosystem that occupies the eastern portion of the Gulf LME and spans the entire latitudinal extent of the state of Florida. A third study covered the entire area of the Gulf LME, exploring a 30-year dataset containing over 100 long-term monitoring time-series of indicators representing (1) fisheries resource status and structure, (2) human use patterns and resource extractions, and (3) large- and small-scale environmental and climatological characteristics. Finally, a fourth project tested the reliability of a popular new clustering algorithm in ecology using data simulation techniques. The work in Chapter Two, focused on the WFS, describes a quantitatively defensible technique to define daytime and nighttime groundfish assemblages based on the nautical twilight starting and ending times at a sampling station. It also describes the differences between these two unique diel communities, the indicator species that comprise them, and the environmental drivers that organize them at daily and inter-annual time scales. Finally, the differential responses of the diel and inter-annual communities were used to provide evidence for a large-scale event that began to show an environmental signal in 2010 and subsided in 2011 and beyond. The event was manifested in the organization of the benthic fishes beginning weakly in 2010, peaking in 2011, and fully dissipating by 2012. Its biotic effects appeared to disproportionately affect the nighttime assemblage of fishes sampled on the WFS. Chapter Three explores the same WFS ecosystem, using the same fisheries-independent dataset, but also explicitly models the spatial variability captured by the annual monitoring program. The results again provided evidence of a disturbance that largely affected the nighttime fish community and operated at spatial scales of variability larger than the extent of the shelf system itself. As in the previous study, the timing of this event coincides with the 2010 Deepwater Horizon oil spill, the subsequent submarine dispersal of pollutants, and the cessation of spillage. Furthermore, the spatial models uncovered the influence of known spatial-abiotic gradients within the Gulf LME related to (1) depth, (2) temperature, and (3) salinity on the organization of daytime groundfish communities. Finally, the models also described which non-spatially structured abiotic variables were important to the observed beta-diversity. The ultimate result was the decomposition of the biotic response, within years and divided by diel classification, into (1) pure-spatial, (2) pure-abiotic, (3) shared spatial-abiotic, and (4) unexplained fractions of variation; a toy sketch of this variation partitioning follows the abstract.
This study, along with that in Chapter Two, also highlighted the relative importance of the nighttime fish community to the assessment of the structure and function of the WFS, and the challenges associated with adequately sampling it in both space and time. Because one focus of this dissertation was to develop low-decision frameworks and mathematically defensible alternatives to some common methods in fisheries ecology, Chapter Five employs a clustering technique to identify regime states that relies on hypothesis testing and the use of resemblance profiles as decision criteria. This clustering method avoids some of the arbitrariness of common clustering solutions seen in ecology; however, it had never been rigorously subjected to numerical data simulation studies. Therefore, a formal investigation of the functional limits of the clustering method was undertaken prior to its use on real fisheries monitoring data, and is presented in Chapter Four. The result of this study is a set of recommendations for researchers seeking to use the new method, and the advice is applied in a case study in Chapter Five. Chapter Five presents the ecosystem-level fisheries indicator selection heuristic (EL-FISH) framework for examining long-term time-series data based on ecological monitoring for resources management. The focus of this study is the Gulf LME over the period 1980-2011; it specifically sought to determine to what extent natural and anthropogenically induced environmental variability, including fishing extractions, affected the structure, function, and status of marine fisheries resources. The methods encompassed by EL-FISH, and the resulting ecosystem model that accounted for ~73% of the variability in biotic resources, allowed for (1) the identification and description of three fisheries resource regime state phase shifts in time, (2) the determination of the effects of fishing and environmental pressures on resources, and (3) the provision of context and evidence for trade-offs to be considered by managers and stakeholders when addressing fisheries management concerns. The EL-FISH method is fully transferable and readily adapts to any set of continuous monitoring data.
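The pure-spatial/pure-abiotic/shared/unexplained decomposition mentioned in the abstract is the classic variation-partitioning scheme. As a rough sketch, the example below reproduces its arithmetic with ordinary least-squares R² on invented data; the dissertation itself works with multivariate community data and ordination, so every variable name here is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    """Fraction of variance in y explained by predictors X."""
    return LinearRegression().fit(X, y).score(X, y)

rng = np.random.default_rng(2)
n = 500
space = rng.standard_normal((n, 2))     # e.g. spatial eigenvectors (placeholder)
abiotic = rng.standard_normal((n, 3))   # e.g. depth, temperature, salinity
abiotic[:, 0] += 0.8 * space[:, 0]      # induce a shared spatial-abiotic signal
y = abiotic @ np.array([1.0, 0.5, 0.2]) + space[:, 1] + rng.standard_normal(n)

r_s = r2(space, y)                          # spatial predictors alone
r_a = r2(abiotic, y)                        # abiotic predictors alone
r_sa = r2(np.hstack([space, abiotic]), y)   # both sets together

pure_spatial = r_sa - r_a
pure_abiotic = r_sa - r_s
shared = r_s + r_a - r_sa
unexplained = 1 - r_sa
print(f"pure spatial {pure_spatial:.2f}, pure abiotic {pure_abiotic:.2f}, "
      f"shared {shared:.2f}, unexplained {unexplained:.2f}")
```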
5

Comparing Two Algorithms for the Detection of Cross-Contamination in Simulated Tumor Next-Generation Sequencing Data

Persson, Sofie January 2024
In this thesis, two algorithms that detect cross-contamination in tumor samples sequenced with next-generation sequencing (NGS) were evaluated. In genomic medicine, NGS is commonly used to sequence tumor DNA to detect disease-associated genetic variants and determine the most suitable treatment option. Targeted NGS panels are often employed to screen for genetic variations in a selection of specific tumor-associated genes. NGS handles samples from multiple patients in parallel, which poses a risk of cross-contamination between samples. Contamination is a significant issue in the interpretation of NGS results, as it can lead to the incorrect identification of genetic variants and, consequently, incorrect treatment. Contamination detection is therefore a crucial quality-control step in the analysis of NGS data. Numerous algorithms for the detection of cross-contamination have been developed, but many are not suited to small, targeted NGS panels, and several were not developed for tumor data. In this thesis, GATK's CalculateContamination and a self-created algorithm called ContaCheck were evaluated on simulated tumor NGS data. NGS samples were generated in silico with a Python script called BAMSynth and mixed to simulate cross-contamination at rates between 1% and 50%. ContaCheck accurately detected contamination ranging from 3% to 50% and identified the correct contaminant with an accuracy of 94%. CalculateContamination, on the other hand, detected contamination from 1% to 15% relatively accurately but consistently failed to detect high-level contamination. The study showed that ContaCheck outperformed CalculateContamination on simulated NGS data, but to determine which algorithm performs best on real data and to establish ContaCheck's applicability in a clinical setting, the algorithms need to be further evaluated on real tumor NGS samples.
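ContaCheck is the thesis author's own tool and its internals are not given in the abstract, so the following is only a generic illustration of the idea behind many contamination estimators: mix reads from two samples at a known rate in silico, then recover the rate from the allele-fraction shift at sites where the host sample is homozygous reference and the contaminant is heterozygous. All names and the estimator itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_mixture(rate: float, n_sites: int = 2000, depth: int = 500):
    """ALT read counts at sites where the host is homozygous-reference (ALT
    fraction 0) and the contaminant is heterozygous (ALT fraction 0.5), so any
    observed ALT reads must come from the contaminant."""
    alt_fraction = rate * 0.5
    alt = rng.binomial(depth, alt_fraction, size=n_sites)
    return alt, depth

def estimate_contamination(alt: np.ndarray, depth: int) -> float:
    """Invert the mixture model: mean ALT fraction = rate * 0.5."""
    return 2.0 * alt.mean() / depth

for true_rate in (0.01, 0.05, 0.15, 0.50):
    alt, depth = simulate_mixture(true_rate)
    print(f"true {true_rate:.2f}  estimated {estimate_contamination(alt, depth):.3f}")
```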
6

Comparison of different scenarios of selection methods, mating systems, and sex ratios for traits of meat-type quail using simulation

Lehner, Helmut Gonçalves 08 August 2013
The objective was to compare, over the long term, different scenarios combining selection methods, mating systems, and sex ratios for traits of economic interest in meat-type quail. For each trait (slaughter weight, carcass yield, and mortality), genomes similar to that of the European quail were simulated with the computer program GENESYS. From each genome, a base population of 1,200 unrelated heterozygous individuals (600 males and 600 females) was first obtained. An initial population of 2,040 individuals, corresponding to an average of 10 offspring per female, was then formed by randomly sampling and mating 204 males and 204 females. From these initial populations, the populations under selection were formed, 24 per trait, corresponding to scenarios defined by two selection methods (SI = individual selection; BP = BLUP), four sex ratios (D1 = 204 males : 204 females; D2 = 102 males : 204 females; D3 = 68 males : 204 females; D4 = 51 males : 204 females), and three mating systems (RAA = randomly mated breeders; EIC = exclusion of full sibs; EICMI = exclusion of full sibs and half sibs), run over 20 generations with 10 replicates. The populations under selection were compared on the following parameters: phenotypic value, mean inbreeding coefficient, fixation of favorable and unfavorable alleles, and selection limit. Regarding selection method, mean phenotypic values were generally higher under BLUP, especially for slaughter weight. However, BLUP also produced the largest increase in the inbreeding coefficient, the highest percentage of fixation of favorable and unfavorable alleles, and the greatest reduction in the selection limit. Widening the sex ratio mainly influenced inbreeding coefficients within the population. The mating systems that excluded matings between relatives (EIC and EICMI) were decisive in reducing inbreeding, besides increasing the fixation of favorable alleles and reducing both the fixation of unfavorable alleles and the selection limit.
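The four sex ratios above act on inbreeding mainly through the effective population size. As a back-of-the-envelope illustration (not the GENESYS simulation itself), the sketch below applies the standard idealized-population formulas Ne = 4*Nm*Nf/(Nm + Nf), dF = 1/(2*Ne), and F_t = 1 - (1 - dF)^t to the D1-D4 scenarios over 20 generations:

```python
# Expected inbreeding after t generations for each sex-ratio scenario,
# using idealized-population formulas (a rough illustration, not GENESYS).
scenarios = {"D1": (204, 204), "D2": (102, 204), "D3": (68, 204), "D4": (51, 204)}

def inbreeding_after(n_males: int, n_females: int, generations: int) -> float:
    ne = 4 * n_males * n_females / (n_males + n_females)  # effective size
    delta_f = 1 / (2 * ne)                                # per-generation increment
    return 1 - (1 - delta_f) ** generations

for name, (nm, nf) in scenarios.items():
    ne = 4 * nm * nf / (nm + nf)
    print(f"{name}: Ne = {ne:6.1f}, F(20) = {inbreeding_after(nm, nf, 20):.4f}")
```

As expected, the more skewed ratios (D3, D4) give a smaller Ne and faster accumulation of inbreeding, consistent with the abstract's observation that the sex ratio mainly influenced within-population inbreeding coefficients.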
