11

Development of a protocol for 3-D reconstruction of brain aneurysms from volumetric image data

Welch, David Michael 01 July 2010
Cerebral aneurysm formation, growth, and rupture are active areas of investigation in the medical community. To model and test the mechanical processes involved, segmentations of small aneurysms (< 5 mm) need to be performed quickly and reliably for large patient populations. In the absence of robust automatic segmentation methods, the Vascular Modeling Toolkit (VMTK) provides scripts for the complex tasks involved in computer-assisted segmentation. Although these tools give researchers a great amount of flexibility, they also make reproduction of results between investigators difficult and unreliable. We introduce a VMTK pipeline protocol that minimizes user interaction during vessel and aneurysm segmentation, together with a training method for new users. The protocol includes decision-tree handling for both CTA and MRA images. Furthermore, we investigate the variation between two expert users and two novice users across six patients using the shape index measures developed by Ma et al. and Raghavan et al.
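For context, VMTK exposes its segmentation tools as chainable "pypes" that can be driven from Python. A minimal sketch of such a scripted chain is given below; the file names and the exact member sequence are illustrative assumptions, not the protocol developed in this thesis.

```python
# Sketch of a scripted VMTK segmentation pype. File names and parameter
# choices are assumptions for illustration, not the thesis protocol.
from vmtk import pypes

# Read a volumetric image (CTA/MRA), run level-set segmentation,
# extract a surface with marching cubes, and write the surface out.
args = ("vmtkimagereader -ifile aneurysm_volume.vti "
        "--pipe vmtklevelsetsegmentation "
        "--pipe vmtkmarchingcubes -i @vmtklevelsetsegmentation.o "
        "--pipe vmtksurfacewriter -ofile aneurysm_surface.vtp")
pypes.PypeRun(args)
```

Standardizing one such chain per image modality, plus fixed parameter conventions, is what makes results reproducible across users.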
12

Alpha-class Glutathione Transferases from Pig: a Comparative Study

Fedulova, Natalia January 2011
Glutathione transferases (GSTs, EC 2.5.1.18) possess multiple functions and have potential applications in biotechnology. This thesis contributes to knowledge about glutathione transferases from Sus scrofa (pig). Such a study is needed for a better understanding of biochemical processes in this species and is desirable for drug development, food industry research and medicine. A primary role of GSTs is the detoxication of electrophilic compounds. Our study presents porcine GST A1-1 as a detoxication enzyme expressed in many tissues, in particular adipose tissue, liver and pituitary gland. Based on a comparison of activity and expression profiles, this enzyme can be expected to function in vivo similarly to human GST A2-2 (Paper II). In addition to its protective function, human GST A3-3 is an efficient steroid isomerase and contributes to the biosynthesis of steroid hormones in vivo. We characterized a porcine enzyme, pGST A2-2, displaying high steroid-isomerase activity and resembling hGST A3-3 in other properties as well. High levels of pGST A2-2 expression were found in ovary, testis and liver. The properties of the porcine enzyme strengthen the notion that particular GSTs play an important role in steroidogenesis (Paper I). A combination of time-dependent and enzyme-concentration-dependent losses of activity, as well as the choice of organic solvent for substrates, was found to cause irreproducibility in activity measurements of GSTs. Enzyme adsorption to surfaces was found to be the main explanation for the high variability of activity values reported in the literature for porcine GST A2-2 and human Alpha-class GSTs. Several approaches to improve the functional comparison of highly active GSTs were proposed (Paper III). / Erroneously printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 733
13

Reproducible research, software quality, online interfaces and publishing for image processing

Limare, Nicolas 21 June 2012
This thesis is based on a study of reproducibility issues in image processing research. We designed, created and developed a scientific journal, Image Processing On Line (IPOL), in which articles are published together with a complete implementation of the algorithms described, validated by the reviewers. An attached demonstration web service allows the algorithms to be tested on freely submitted data and maintains an archive of previous experiments. We also propose a copyright and licensing policy suitable for manuscripts and research software, and guidelines for the evaluation of software. The IPOL scientific project appears very beneficial to research in image processing. With the detailed examination of the implementations and extensive testing via the demonstration web service, we publish articles of better quality. IPOL usage shows that the journal is useful beyond the community of its authors, who are generally satisfied with their experience and appreciate the benefits in terms of understanding of the algorithms, quality of the software produced, exposure of their work, and opportunities for collaboration. With clear definitions of objects and methods, and validated implementations, complex image processing chains become possible.
14

Reliability Generalization: A Systematic Review and Evaluation of Meta-analytic Methodology and Reporting Practice

Holland, David F. (Educational consultant) 12 1900
Reliability generalization (RG) is a method for meta-analysis of reliability coefficients to estimate average score reliability across studies, determine variation in reliability, and identify study-level moderator variables that influence score reliability. A total of 107 peer-reviewed RG studies published from 1998 to 2013 were systematically reviewed to characterize the meta-analytic methods employed and to evaluate the quality of reporting practice against standards for transparency in meta-analysis reporting. Most commonly, RG studies meta-analyzed alpha coefficients, which were synthesized using an unweighted, fixed-effects model applied to untransformed coefficients. Moderator analyses most frequently used multiple regression and bivariate correlations, again employing a fixed-effects model on untransformed, unweighted coefficients. Based on a unit-weighted scoring system, mean reporting quality for RG studies was statistically significantly lower than that for a comparison study of 198 meta-analyses in the organizational sciences across 42 indicators; however, means were not statistically significantly different between the two studies when reporting quality was evaluated on the 18 indicators deemed essential to ethical reporting practice in meta-analyses. Since its inception, a wide variety of statistical methods have been applied to RG, and meta-analysis of reliability coefficients has extended to fields outside psychological measurement, such as medicine and business. A set of guidelines for conducting and reporting RG studies is provided.
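To make the most common synthesis described above concrete: an unweighted, fixed-effects average of untransformed alpha coefficients is simply the arithmetic mean across studies, often contrasted with a sample-size-weighted mean. A minimal sketch follows; the coefficients and sample sizes are invented for illustration.

```python
import numpy as np

# Invented example data: alpha coefficients and sample sizes from k studies.
alphas = np.array([0.78, 0.85, 0.91, 0.72, 0.88])
ns     = np.array([120, 340, 95, 210, 150])

# Unweighted fixed-effects estimate: plain mean of untransformed alphas,
# the approach most RG studies in the review used.
unweighted_mean = alphas.mean()

# A sample-size-weighted alternative, for comparison.
weighted_mean = np.average(alphas, weights=ns)

# Between-study variation in reliability, as a simple sample variance.
variation = alphas.var(ddof=1)

print(f"unweighted: {unweighted_mean:.3f}, weighted: {weighted_mean:.3f}, "
      f"variance: {variation:.4f}")
```

The methodological debate the review documents is precisely about these choices: whether to weight, whether to transform alphas before pooling, and whether a fixed-effects model is defensible.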
15

Integration of Reproducibility Verification with Diffoscope in GNU Make / Integrering av reproducerbarhetsverifiering med diffoscope i GNU Make

Lagnöhed, Felix January 2024
Software supply chain attacks are becoming more frequent. It is not enough to trust the source code of a project; the build process can insert malicious contents into build artefacts. This calls for valid verification methods for the build process, and a good approach is to ensure that the build process is deterministic: given two builds from the same source code in the same environment, the resulting build artefacts should be bit-wise identical. Existing tools check this, but they are not integrated into build systems. This thesis resulted in an extension of GNU Make called rmake, in which diffoscope, a tool for detecting differences between a large number of file types, is integrated into the workflow of make. rmake was then used to answer the research questions posed in this thesis. We found that differing build paths and offsets are a significant problem, as all three tested Free and Open Source Software projects contained these variations. The results also showed that GCC's optimisation levels did not affect reproducibility, but link-time optimisation embeds a lot of unreproducible information in build artefacts. Lastly, the results showed that build paths, build IDs and randomness are the three most common groups of variations encountered in the wild, and potential solutions for some variations were proposed.
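The core check that rmake automates can be approximated by hand: build the same source twice, compare artefacts bit for bit, and fall back to diffoscope for a human-readable report of any differences. A rough sketch of that workflow follows; it is not rmake itself, and the paths and artefact name are assumptions.

```python
import filecmp
import subprocess

# Hypothetical artefact path; adjust to your project's build layout.
ARTEFACT = "build/app"

def build(workdir: str) -> str:
    """Run a clean make build in workdir and return the artefact path."""
    subprocess.run(["make", "clean"], cwd=workdir, check=True)
    subprocess.run(["make"], cwd=workdir, check=True)
    return f"{workdir}/{ARTEFACT}"

# Two builds from identical source trees (e.g. two checkouts of one commit).
a = build("checkout_a")
b = build("checkout_b")

if filecmp.cmp(a, b, shallow=False):
    print("bit-wise identical: build looks reproducible")
else:
    # diffoscope explains *where* the artefacts differ (timestamps,
    # build paths, build IDs, ...); a nonzero exit just means "different".
    subprocess.run(["diffoscope", a, b])
```

Integrating this comparison into the build system itself, rather than running it as an afterthought, is the contribution the thesis describes.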
16

Reprodukovatelné experimenty s částečným zatížením v analýze agregace zátěže / Reproducible Partial-Load Experiments in Workload Colocation Analysis

Podzimek, Andrej January 2016
Hardware concurrency is common in all contemporary computer systems. Efficient use of hardware resources requires parallel processing and sharing of hardware by multiple workloads. Striking a balance between the conflicting goals of keeping servers highly utilized and maintaining a predictable performance level requires an informed choice of performance isolation techniques. Despite a broad choice of resource isolation mechanisms in operating systems, such as pinning of workloads to disjoint sets of processors, little is known about their effects on overall system performance and power consumption, especially under the partial load conditions common in practice. Performance and performance interference under partial processor load are analyzed only after the fact, based on historical data, rather than proactively tested. This dissertation contributes a systematic approach to experimental analysis of application performance under partial processor load and in workload colocation scenarios. We first present a software tool set called Showstopper, capable of achieving and sustaining a variety of partial processor load conditions. Based on arbitrary pre-existing computationally intensive workloads, Showstopper replays processor load traces using feedback control mechanisms to maintain the desired load. As opposed to...
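The feedback-control idea behind such trace replay can be sketched in a few lines: measure the achieved utilisation, compare it with the target, and adjust the duty cycle of a busy/sleep loop accordingly. The toy single-core illustration below is not Showstopper, which throttles pre-existing workloads rather than a spin loop; the gain, period and the use of psutil are assumptions.

```python
import time
import psutil  # assumed available; third-party package

TARGET = 0.40   # desired processor load (40%)
PERIOD = 0.1    # control period in seconds
duty = TARGET   # initial duty-cycle guess

while True:
    # Busy phase: spin for duty * PERIOD seconds.
    end = time.perf_counter() + duty * PERIOD
    while time.perf_counter() < end:
        pass
    # Idle phase: sleep for the rest of the period.
    time.sleep(max(0.0, (1 - duty) * PERIOD))

    # Feedback: nudge the duty cycle toward the target load.
    measured = psutil.cpu_percent(interval=0) / 100.0
    duty = min(1.0, max(0.0, duty + 0.1 * (TARGET - measured)))
```

Replaying a recorded load trace then amounts to updating TARGET over time from the trace instead of holding it constant.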
17

Le progiciel PoweR : un outil de recherche reproductible pour faciliter les calculs de puissance de certains tests d'hypothèses au moyen de simulations de Monte Carlo / The PoweR package: a reproducible research tool to facilitate power calculations for certain hypothesis tests by means of Monte Carlo simulations

Tran, Viet Anh 06 1900
The PoweR package aims to facilitate the computation and verification of empirical power studies for goodness-of-fit tests. As such, it can be seen as a reproducible-research computational tool, because it becomes very easy to reproduce (or detect errors in) simulation results already published in the literature. Using our package, it becomes easy to design new simulation studies. The empirical levels and powers of many test statistics under a wide variety of alternative distributions are obtained quickly and accurately using a combined C/C++ and R environment. One can even rely on the R package snow to parallelize the computations on a multicore processor. The results can be displayed as LaTeX tables or specialized graphs, which can be incorporated directly into publications. This paper gives an overview of the main design aims and principles, as well as strategies for adaptation and extension. Hands-on illustrations are presented to get new users started easily.
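The Monte Carlo computation that PoweR packages up is conceptually simple: simulate many samples from an alternative distribution, apply the test, and report the rejection rate. The minimal Python sketch below illustrates the concept only and is not PoweR's R API; the Shapiro-Wilk test and the exponential alternative are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_SIM, N, ALPHA = 10_000, 50, 0.05

# Empirical power of the Shapiro-Wilk normality test against an
# exponential alternative: the fraction of simulated samples rejected.
rejections = 0
for _ in range(N_SIM):
    sample = rng.exponential(scale=1.0, size=N)
    _, p_value = stats.shapiro(sample)
    rejections += p_value < ALPHA

print(f"empirical power ≈ {rejections / N_SIM:.3f}")
```

Running the inner loop under the null distribution instead yields the empirical level of the test, the other quantity such studies report.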
18

Den (över)levande demokratin : En idékritisk analys av demokratins reproducerbarhet i Robert Dahls tänkta värld / Sustainable Democracy: Exploring the Idea of a Reproducible Democracy in the Theory of Robert A. Dahl

Olsson, Karin January 2009
Everybody loves democracy. The problem is that while everybody calls himself democratic, the ideal form of democracy is hard to come by in the real world. But if we believe in democracy and believe that it is the best form of government, I argue that we should try to design a theory of democracy that is realisable – and reproducible. This thesis, then, focuses primarily on the question of whether we find support in democratic theory for the idea of a self-reproducing democracy. It proceeds by means of an investigation of Robert A. Dahl's theory of democracy. He is one of the most well-known and highly regarded theorists in the field of democratic research, whose work covers both normative and empirical analysis. When analysing the reproducible democracy, I argue that it is essential to study both normative values and empirical assumptions: the values that count as intrinsic to democracy, the assumptions that are made about man, and the institutions that are needed for the realisable and reproducible democracy. In modern social science man is often pushed into the background. This is also the case in theories of democracy, even though man (the individual) is the one who has the right to vote, the one who has the autonomy to decide – the one who has to act democratically in order to preserve democracy. The study yields the following findings. First, in Dahl's theory political equality and autonomy come out as intrinsic values. Second, the assumptions made about man show that even if he seems to be ignored, he is always present. When Dahl constructs his theory, he does it with full attention to man's qualities, interests, manners of acting and reacting, and adaptability to the values of democracy. Third, the institutions needed to realise and reproduce democracy go further than the institutions of polyarchy. They need support from the judicial system, political culture, education and the market. Fourth, when it comes down to making democracy work and reproducing democracy, Dahl puts the full responsibility on man, as he is not willing to allow too-rigid constitutional mechanisms. Fifth, even though Dahl puts the emphasis on the empirical situation of the real world, he does not alter his normative ideals in order to make the theory more adaptive. For him, political equality and autonomy are imperative demands, too important to alter. And the only way to get full procedural democracy is to trust the democratic man. / Acta Wexionensia 185/2009, ISSN 1404-4307, ISBN 978-91-7636-677-6. With a summary in English.
20

Building predictive models for dynamic line rating using data science techniques

Doban, Nicolae January 2016
Traditional power systems are statically rated, and renewable energy sources (RES) are sometimes curtailed in order not to exceed this static rating. The RES are curtailed because of their intermittent character, which makes it difficult to predict their output at specific time periods throughout the day. Dynamic Line Rating (DLR) technology can overcome this constraint by leveraging the available weather data and the technical parameters of the transmission line. The main goal of the thesis is to present prediction models of DLR capacity two days ahead and one day ahead; the models are evaluated based on their error-rate profiles. DLR provides the capability to up-rate the line(s) according to the environmental conditions and always has a much higher profile than the static rating. By implementing DLR, a power utility can increase the efficiency of the power system, decrease RES curtailment and optimize RES integration within the grid. DLR depends mainly on the weather parameters; specifically, at high wind speeds and low ambient temperatures, DLR registers its highest profile. This is especially profitable for wind energy producers, which can both produce more (up to the pitch-control limit) and transmit more during high-wind-speed periods over the same line(s), thus increasing energy efficiency. The DLR was calculated by employing modern data science and machine learning tools and techniques, leveraging historical weather and transmission line data provided by SMHI and Vattenfall respectively. An initial phase of Exploratory Data Analysis (EDA) was carried out to understand data patterns and relationships between variables, as well as to determine the most predictive variables for DLR. All the predictive models and data processing routines were built in open-source R and are available on GitHub. Three types of models were built: for historical data, for the one-day-ahead horizon and for the two-days-ahead horizon. The models built for the two forecast horizons registered low error rates of 9% (day-ahead) and 11% (two days ahead). As expected, the predictive models built on historical data were more accurate, with errors as low as 2-3%. In conclusion, the implemented models met the requirement set by Vattenfall of a maximum error of 20%, and they can be applied in the control room for that specific line. Moreover, predictive models can also be built for other lines if the required data is available. The findings and outcomes of this Master's thesis project can therefore be reproduced on other power lines and in other geographic locations in order to achieve a more efficient power system and an increased share of RES in the energy mix.
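The modelling workflow described above can be sketched in outline: join line ratings with weather features, hold out a test period chronologically, fit a regressor and score the forecast error. The toy Python sketch below is an assumption-laden illustration; the thesis used R, and the feature names, CSV layout and random-forest choice here are invented.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical dataset: hourly weather features and the computed line rating.
df = pd.read_csv("dlr_history.csv", parse_dates=["timestamp"])
features = ["wind_speed", "wind_direction", "ambient_temp", "solar_radiation"]

# Chronological split: train on the past, test on the most recent period,
# mimicking a day-ahead forecasting setting.
cutoff = df["timestamp"].quantile(0.8)
train, test = df[df["timestamp"] <= cutoff], df[df["timestamp"] > cutoff]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features], train["dlr_capacity"])

pred = model.predict(test[features])
error = mean_absolute_percentage_error(test["dlr_capacity"], pred)
print(f"test MAPE: {error:.1%}")
```

A percentage error metric like this maps directly onto acceptance thresholds of the kind Vattenfall set (maximum 20% error).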
