11

Estratégia computacional para apoiar a reprodutibilidade e reuso de dados científicos baseado em metadados de proveniência. / Computational strategy to support the reproducibility and reuse of scientific data based on provenance metadata.

Daniel Lins da Silva 17 May 2017 (has links)
Modern science, supported by e-science, has faced challenges in dealing with the large volume and variety of data generated primarily by technological advances in the processes of collecting and processing scientific data. As a consequence, the complexity of analysis and experimentation processes has also increased. These processes currently involve multiple data sources and numerous activities performed by geographically distributed research groups, and they must be understandable, reusable, and reproducible. However, initiatives by the scientific community to provide tools and to encourage researchers to share the data and source code behind their findings, along with the scientific publications, are often insufficient to ensure the reproducibility and reuse of scientific results. This research aims to define a computational strategy to support the reuse and reproducibility of scientific data through the management of data provenance across the entire data life cycle. The proposed strategy rests on two main components: an application profile, which defines a standardized model for describing data provenance, and a computational architecture for managing provenance metadata, which enables these metadata to be described, stored, and shared in distributed and heterogeneous environments. A functional prototype was developed and used in two case studies on the management of provenance metadata for species distribution modeling experiments. These case studies validated the proposed computational strategy and demonstrated its potential for supporting the management of scientific data.
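To make the idea of a standardized provenance description concrete, a minimal sketch follows, written as a plain Python dictionary loosely modeled on W3C PROV-DM terms (entity, activity, agent). The field names and the algorithm parameter are illustrative assumptions, not the application profile actually defined in the thesis.

```python
# A minimal sketch of a provenance record for one species-distribution-
# modeling run, loosely following W3C PROV-DM terms. Field names and the
# "Maxent" parameter are hypothetical, not the thesis's actual profile.
import json

provenance_record = {
    "entity": {
        "ex:occurrences.csv": {"prov:type": "dataset", "ex:source": "GBIF"},
        "ex:model-output.tif": {"prov:type": "dataset"},
    },
    "activity": {
        "ex:sdm-run-42": {
            "prov:startTime": "2017-05-17T10:00:00Z",
            "ex:algorithm": "Maxent",  # hypothetical parameter
        }
    },
    "agent": {"ex:researcher-1": {"prov:type": "prov:Person"}},
    # Relations linking inputs, outputs, and the responsible researcher.
    "used": [{"activity": "ex:sdm-run-42", "entity": "ex:occurrences.csv"}],
    "wasGeneratedBy": [{"entity": "ex:model-output.tif", "activity": "ex:sdm-run-42"}],
    "wasAssociatedWith": [{"activity": "ex:sdm-run-42", "agent": "ex:researcher-1"}],
}

print(json.dumps(provenance_record, indent=2))
```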
12

Development of a protocol for 3-D reconstruction of brain aneurysms from volumetric image data

Welch, David Michael 01 July 2010 (has links)
Cerebral aneurysm formation, growth, and rupture are active areas of investigation in the medical community. To model and test the mechanical processes involved, small-aneurysm (< 5 mm) segmentations need to be performed quickly and reliably for large patient populations. In the absence of robust automatic segmentation methods, the Vascular Modeling Toolkit (VMTK) provides scripts for the complex tasks involved in computer-assisted segmentation. Though these tools give researchers a great deal of flexibility, they also make reproducing results across investigators difficult and unreliable. We introduce a VMTK pipeline protocol that minimizes user interaction for vessel and aneurysm segmentation, together with a training method for new users. The protocol provides decision-tree handling for both CTA and MRA images. Furthermore, we investigate the variation between two expert users and two novice users for six patients, using shape index measures developed by Ma et al. and Raghavan et al.
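For readers unfamiliar with VMTK, a sketch of the kind of script pipeline ("pype") such a protocol builds on is shown below, following the chaining pattern from the VMTK tutorials; the file names are placeholders, and the exact steps and options of the thesis protocol are not reproduced here.

```python
# A sketch of chaining VMTK segmentation scripts into a single "pype",
# per the standard vmtk tutorial pattern (assumed, not the thesis's
# actual protocol): read a volume, run level-set segmentation, extract
# a triangulated surface.
from vmtk import pypes

args = ('vmtkimagereader -ifile aneurysm_cta.vti '
        '--pipe vmtklevelsetsegmentation '
        '--pipe vmtkmarchingcubes -ofile aneurysm_surface.vtp')
pype = pypes.PypeRun(args)
```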
13

Alpha-class Glutathione Transferases from Pig: a Comparative Study

Fedulova, Natalia January 2011 (has links)
Glutathione transferases (GSTs, EC 2.5.1.18) possess multiple functions and have potential applications in biotechnology. This thesis contributes to knowledge about glutathione transferases from Sus scrofa (pig). Such knowledge is needed for a better understanding of biochemical processes in this species and is valuable for drug development, food industry research, and medicine. A primary role of GSTs is the detoxication of electrophilic compounds. Our study presents porcine GST A1-1 as a detoxication enzyme expressed in many tissues, in particular adipose tissue, liver, and pituitary gland. Based on a comparison of activity and expression profiles, this enzyme can be expected to function in vivo similarly to human GST A2-2 (Paper II). In addition to its protective function, human GST A3-3 is an efficient steroid isomerase and contributes to the biosynthesis of steroid hormones in vivo. We characterized a porcine enzyme, pGST A2-2, displaying high steroid-isomerase activity and resembling hGST A3-3 in other properties as well. High levels of pGST A2-2 expression were found in ovary, testis, and liver. The properties of the porcine enzyme strengthen the notion that particular GSTs play an important role in steroidogenesis (Paper I). A combination of time-dependent and enzyme-concentration-dependent losses of activity, together with the choice of organic solvent for the substrates, was found to cause irreproducibility in GST activity measurements. Enzyme adsorption to surfaces was found to be the main explanation for the high variability in activity values of porcine GST A2-2 and human Alpha-class GSTs reported in the literature. Several approaches to improved functional comparison of highly active GSTs were proposed (Paper III). / Incorrectly printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 733.
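The adsorption finding can be illustrated with a back-of-the-envelope calculation; the numbers below are invented for illustration (a toy model assuming a roughly fixed amount of enzyme is lost to vessel surfaces per assay), not data from the thesis.

```python
# Toy illustration (assumed numbers, not thesis data): if assay surfaces
# adsorb a roughly fixed amount of enzyme, dilute assays lose a larger
# fraction of it, so measured specific activity appears to depend on
# enzyme concentration.
adsorbed_nM = 0.5  # hypothetical amount lost to surfaces per assay

for total_nM in (1.0, 5.0, 50.0):
    free_nM = max(total_nM - adsorbed_nM, 0.0)
    apparent_fraction = free_nM / total_nM
    print(f"total {total_nM:5.1f} nM -> {apparent_fraction:.0%} of expected activity")
# Output: 50% at 1 nM, 90% at 5 nM, 99% at 50 nM -- the variability
# shrinks as enzyme concentration rises.
```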
14

Reproducible research, software quality, online interfaces and publishing for image processing

Limare, Nicolas 21 June 2012 (has links) (PDF)
This thesis is based on a study of reproducibility issues in image processing research. We designed, created, and developed a scientific journal, Image Processing On Line (IPOL), in which articles are published with a complete implementation of the algorithms described, validated by the referees. Each article is accompanied by a demonstration web service that allows the algorithms to be tested on freely submitted data, together with an archive of previous experiments. We also propose a copyright and licensing policy suitable for manuscripts and research software, and guidelines for the evaluation of software. The IPOL scientific project appears very beneficial to research in image processing. Thanks to the detailed examination of the implementations and extensive testing via the demonstration web service, we publish articles of better quality. IPOL usage shows that the journal is useful beyond the community of its authors, who are generally satisfied with their experience and appreciate the benefits in terms of understanding of the algorithms, quality of the software produced, exposure of their work, and opportunities for collaboration. With clear definitions of objects and methods, and validated implementations, complex image processing chains become possible.
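A minimal sketch of the demo-and-archive idea is given below; the function and file names are hypothetical, and this is not IPOL's actual implementation, only an illustration of running a published algorithm on submitted data while archiving each experiment.

```python
# Sketch (hypothetical names, not IPOL's code) of a demo service step:
# run a published algorithm binary on user-submitted data and archive
# the input, output, and parameters of every experiment.
import hashlib, json, shutil, subprocess, time
from pathlib import Path

ARCHIVE = Path("archive")

def run_demo(algorithm_cmd: list, input_file: Path) -> Path:
    # One directory per experiment, keyed by a short unique id.
    run_id = hashlib.sha1(f"{input_file}{time.time()}".encode()).hexdigest()[:12]
    run_dir = ARCHIVE / run_id
    run_dir.mkdir(parents=True)
    shutil.copy(input_file, run_dir / input_file.name)   # archive the input
    output = run_dir / "output.png"
    subprocess.run([*algorithm_cmd, str(input_file), str(output)], check=True)
    (run_dir / "meta.json").write_text(json.dumps(        # archive parameters
        {"cmd": algorithm_cmd, "input": input_file.name, "when": time.ctime()}))
    return run_dir

# e.g. run_demo(["./denoise", "--sigma", "20"], Path("noisy.png"))
```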
15

Reprodukovatelné experimenty s částečným zatížením v analýze agregace zátěže / Reproducible Partial-Load Experiments in Workload Colocation Analysis

Podzimek, Andrej January 2016 (has links)
Hardware concurrency is common in all contemporary computer systems. Efficient use of hardware resources requires parallel processing and sharing of hardware by multiple workloads. Striking a balance between the conflicting goals of keeping servers highly utilized and maintaining a predictable performance level requires an informed choice of performance isolation techniques. Despite a broad choice of resource isolation mechanisms in operating systems, such as pinning workloads to disjoint sets of processors, little is known about their effects on overall system performance and power consumption, especially under the partial-load conditions common in practice. Performance and performance interference under partial processor load are analyzed only after the fact, based on historical data, rather than proactively tested. This dissertation contributes a systematic approach to the experimental analysis of application performance under partial processor load and in workload colocation scenarios. We first present a software tool set called Showstopper, capable of achieving and sustaining a variety of partial processor load conditions. Based on arbitrary pre-existing computationally intensive workloads, Showstopper replays processor load traces using feedback control mechanisms to maintain the desired load. As opposed to...
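The feedback idea can be sketched in a few lines; this is a minimal reconstruction of the duty-cycling-with-proportional-feedback concept described above, not Showstopper's actual code (Showstopper drives pre-existing workloads rather than a spin loop).

```python
# Sketch (my own minimal reconstruction, not Showstopper's code) of
# sustaining a target CPU utilization: busy-spin for a fraction of each
# period, sleep for the rest, and nudge the duty cycle proportionally
# toward the target based on measured utilization.
import time

def sustain_load(target: float, period: float = 0.1, duration: float = 10.0):
    duty = target                          # initial guess for busy fraction
    end = time.monotonic() + duration
    while time.monotonic() < end:
        cpu0, t0 = time.process_time(), time.monotonic()
        spin_until = t0 + duty * period
        while time.monotonic() < spin_until:
            pass                           # burn CPU for the busy fraction
        time.sleep(max(period - (time.monotonic() - t0), 0.0))
        measured = (time.process_time() - cpu0) / (time.monotonic() - t0)
        duty += 0.5 * (target - measured)  # proportional correction
        duty = min(max(duty, 0.0), 1.0)

sustain_load(target=0.3, duration=3.0)     # hold ~30% of one core for 3 s
```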
16

Le progiciel PoweR : un outil de recherche reproductible pour faciliter les calculs de puissance de certains tests d'hypothèses au moyen de simulations de Monte Carlo / The PoweR package: a reproducible research tool to facilitate power calculations for certain hypothesis tests by means of Monte Carlo simulations

Tran, Viet Anh 06 1900 (has links)
The PoweR package aims to facilitate obtaining or verifying empirical power studies for goodness-of-fit tests. As such, it can be seen as a reproducible-research computational tool, because it becomes very easy to reproduce (or detect errors in) simulation results already published in the literature. Using our package, it becomes easy to design new simulation studies. The empirical levels and powers of many test statistics under a wide variety of alternative distributions are obtained quickly and accurately using a combined C/C++ and R environment. One can even rely on the snow package to parallelize the computations on a multicore processor. The results can be displayed using LaTeX tables or specialized graphs, which can be incorporated directly into publications. This paper gives an overview of the main design aims and principles, as well as strategies for adaptation and extension. Hands-on illustrations are presented to get new users started easily.
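The underlying computation is easy to sketch. The following snippet illustrates the empirical-power-by-simulation concept only (PoweR itself is an R package, and its API is not reproduced here), estimating the power of the Shapiro-Wilk goodness-of-fit test against an exponential alternative.

```python
# Concept sketch of empirical power by Monte Carlo (not the PoweR API):
# draw samples from an alternative distribution, apply the test, and
# count the fraction of rejections at level alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, alpha = 50, 2000, 0.05

rejections = sum(
    stats.shapiro(rng.exponential(size=n)).pvalue < alpha
    for _ in range(reps)
)
print(f"empirical power ~ {rejections / reps:.3f}")  # near 1 for this alternative
```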
17

Den (över)levande demokratin : En idékritisk analys av demokratins reproducerbarhet i Robert Dahls tänkta värld / Sustainable Democracy: Exploring the Idea of a Reproducible Democracy in the Theory of Robert A. Dahl

Olsson, Karin January 2009 (has links)
Olsson, Karin (2009). Den (över)levande demokratin. En idékritisk analys av demokratins reproducerbarhet i Robert Dahls tänkta värld. (Sustainable Democracy. Exploring the Idea of a Reproducible Democracy in the Theory of Robert A. Dahl). Acta Wexionensia 185/2009, ISSN: 1404-4307, ISBN: 978-91-7636-677-6. With a summary in English. Everybody loves democracy. The problem is that while everybody calls himself democratic, the ideal form of democracy is hard to come by in the real world. But if we believe in democracy and believe that it is the best form of government, I argue that we should try to design a theory of democracy that is realisable – and reproducible. This thesis, then, focuses primarily on the question of whether we find support in democratic theory for the idea of a self-reproducing democracy. It proceeds by means of an investigation of Robert A. Dahl's theory of democracy. He is one of the most well-known and highly regarded theorists in the field of democratic research, and his work covers both normative and empirical analysis. When analysing the reproducible democracy, I argue that it is essential to study both normative values and empirical assumptions: the values that count as intrinsic to democracy, the assumptions that are made about man, and the institutions that are needed for a realisable and reproducible democracy. In modern social science man is often pushed into the background. This is also the case in theories of democracy, even though man (the individual) is the one who has the right to vote, the one who has the autonomy to decide – the one who has to act democratically in order to preserve democracy. The study yields the following findings. First, in Dahl's theory political equality and autonomy come out as intrinsic values. Second, the assumptions made about man show that even if he seems to be ignored, he is always present. When Dahl constructs his theory, he does it with full attention to man's qualities, interests, manners of acting and reacting, and adaptability to the values of democracy. Third, the institutions needed to realise and reproduce democracy go further than the institutions of polyarchy. They need support from the judicial system, political culture, education and the market. Fourth, when it comes down to making democracy work and reproducing democracy, Dahl puts the full responsibility on man, as he is not willing to allow too-rigid constitutional mechanisms. Fifth, even though Dahl puts the emphasis on the empirical situation of the real world, he does not alter his normative ideals in order to make the theory more adaptive. For him, political equality and autonomy are imperative demands, too important to alter. And the only way to get full procedural democracy is to trust the democratic man.
19

Building predictive models for dynamic line rating using data science techniques

Doban, Nicolae January 2016 (has links)
Traditional power systems are statically rated, and renewable energy sources (RES) are sometimes curtailed so as not to exceed this static rating. RES are curtailed because their intermittent character makes it difficult to predict their output at specific time periods throughout the day. Dynamic Line Rating (DLR) technology can overcome this constraint by leveraging the available weather data and the technical parameters of the transmission line. The main goal of the thesis is to present prediction models of DLR capacity one day ahead and two days ahead. The models are evaluated based on their error-rate profiles. DLR provides the capability to up-rate the line(s) according to the environmental conditions and consistently yields a much higher rating than the static one. By implementing DLR, a power utility can increase the efficiency of the power system, decrease RES curtailment, and optimize their integration within the grid. DLR depends mainly on weather parameters; specifically, under high wind speeds and low ambient temperatures it registers its highest values. This is especially profitable for wind energy producers, who can both produce more (up to the pitch-control limit) and transmit more during high-wind periods over the same line(s), thus increasing energy efficiency. The DLR was calculated by employing modern data science and machine learning tools and techniques, leveraging historical weather and transmission-line data provided by SMHI and Vattenfall, respectively. An initial phase of Exploratory Data Analysis (EDA) was carried out to understand data patterns and relationships between different variables, as well as to determine the most predictive variables for DLR. All the predictive models and data-processing routines were built in open-source R and are available on GitHub. Three types of models were built: for historical data, for the one-day-ahead horizon, and for the two-days-ahead horizon. The forecasting models registered low error rates of 9% (one day ahead) and 11% (two days ahead). As expected, the predictive models built on historical data were more accurate, with errors as low as 2-3%. In conclusion, the implemented models met Vattenfall's requirement of a maximum error of 20%, and they can be applied in the control room for that specific line. Moreover, predictive models can also be built for other lines if the required data are available. The project's findings and outcomes can therefore be reproduced for other power lines and geographic locations in order to achieve a more efficient power system and an increased share of RES in the energy mix.
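As a rough illustration of the modeling setup, here is a scikit-learn sketch on synthetic data; the thesis models were built in R, and the feature set and the toy rating formula below are assumptions for illustration, not the actual SMHI/Vattenfall data.

```python
# Sketch of a DLR regression setup on synthetic data (assumed features:
# wind speed, wind direction, ambient temperature; the linear "true"
# rating below is a toy stand-in, not real line data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.random((1000, 3))  # columns: wind_speed, wind_direction, ambient_temp
y = 500 + 400 * X[:, 0] - 100 * X[:, 2] + rng.normal(0, 20, 1000)  # toy DLR (A)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
error = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"MAPE: {error:.1%}")  # the thesis reports 9-11% for day(s)-ahead models
```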
20

Passerelle intelligente pour réseaux de capteurs sans fil contraints / Smart gateway for low-power and lossy networks

Leone, Rémy 24 July 2016 (has links)
Low-Power and Lossy Networks (LLNs) are constrained networks composed of nodes with limited resources (memory, CPU, battery). These networks are typically used to provide real-time measurements of their environment in various contexts, such as home automation or smart cities. LLNs connect to other networks through a gateway, which can host various enhancing features thanks to its key location between constrained and unconstrained devices. This thesis presents three contributions that aim to improve the reliability and performance of an LLN by using its gateway. The first contribution is a non-intrusive estimator that infers the radio usage of constrained nodes by observing their network traffic passing through the gateway. The second contribution determines the validity lifetime of information cached at the gateway, trading off energy cost against efficiency, so that cached resources are served instead of querying the constrained nodes. Finally, we present Makesense, an open-source framework for reproducible experiments that can document, execute, and analyze a complete LLN experiment, in simulation or on real nodes, from a single description.
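The cache contribution's cost/efficiency trade-off can be illustrated with a toy model; the formulation and rates below are assumptions made for illustration, not the thesis's actual method.

```python
# Toy model (assumed formulation and rates, not the thesis's method) of
# the gateway-cache trade-off: a longer validity lifetime (TTL) spares
# constrained nodes more queries but raises the chance that a cached
# sensor reading is stale when served.
import math

REQUEST_RATE = 2.0   # client requests per minute (assumed)
CHANGE_RATE = 0.2    # sensor-value changes per minute (assumed)

def refreshes_saved(ttl_min: float) -> float:
    # With a TTL, at most one node query per TTL window instead of one per request.
    return REQUEST_RATE - 1.0 / ttl_min

def staleness_prob(ttl_min: float) -> float:
    # Probability the value changed within a cache entry's lifetime (Poisson).
    return 1.0 - math.exp(-CHANGE_RATE * ttl_min)

for ttl in (0.5, 1, 2, 5, 10):
    print(f"TTL {ttl:4} min: saves {refreshes_saved(ttl):.1f} queries/min, "
          f"stale with p={staleness_prob(ttl):.2f}")
```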
