121

Empirische Untersuchung der Eignung von Code-Clones für den Nachweis der Redundanz als Treiber für die Evolution von Programmierkonzepten / Empirical Study of the Suitability of Code Clones for Demonstrating Redundancy as a Driver of the Evolution of Programming Concepts

Harnisch, Björn Ole 12 February 2018 (has links)
During software development, developers regularly create code clones by copying source code. This thesis presents an approach for the automated measurement of such duplicated code with clone-detection tools across multiple versions of several software products. Based on the resulting code-clone histories, influences on the redundancy of this software are measured empirically. This lays the groundwork for proving that the evolution of programming languages is driven to a dominant extent by the reduction of redundancy.
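The measurement approach described above, running a clone detector over successive revisions and recording duplicated-line counts per revision, can be sketched as follows. This is a minimal illustration assuming a Git repository and a Simian jar on the path (Simian is one of the detectors the thesis uses); the exact invocation and the output format parsed below are assumptions, not details taken from the thesis.

```python
# Sketch: measuring duplicated code across the version history of a Git
# repository with an external clone detector (here Simian).
import re
import subprocess

def checkout(repo_dir: str, revision: str) -> None:
    """Switch the working tree to the given tag or commit."""
    subprocess.run(["git", "-C", repo_dir, "checkout", "--quiet", revision], check=True)

def measure_duplication(repo_dir: str) -> int:
    """Run the clone detector and return the number of duplicated lines."""
    result = subprocess.run(
        ["java", "-jar", "simian.jar", f"{repo_dir}/**/*.java"],
        capture_output=True, text=True,
    )
    match = re.search(r"(\d+)\s+duplicate lines", result.stdout)  # assumed output format
    return int(match.group(1)) if match else 0

def clone_history(repo_dir: str, revisions: list[str]) -> dict[str, int]:
    """Duplicated-line counts per revision, oldest to newest."""
    return {rev: (checkout(repo_dir, rev), measure_duplication(repo_dir))[1]
            for rev in revisions}
```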
122

Coupled Modelling of Gas Migration in Host Rock and Application to a Potential Deep Geological Repository for Nuclear Wastes in Ontario

Wei, Xue 27 May 2022 (has links)
With the expanding use of nuclear energy, it is very important to design and build long-term deep geological repositories (DGRs) to manage radioactive waste. The disposal of nuclear waste in deep rock formations is currently being investigated in several countries (e.g., Canada, China, France, Germany, India, Japan and Switzerland). In Canada, a repository for low- and intermediate-level radioactive waste is being proposed in Ontario’s sedimentary rock formations. During the post-closure phase of a repository, significant quantities of gas will be generated by several processes, such as corrosion of metal containers or microbial degradation of organic waste. The gas pressure could influence the engineered barrier system and the host rock, and might disturb the pressure-head gradients and groundwater flows near the repository. An increasing gas pressure could also damage the host rock by inducing the development of micro- and macro-cracks. This would further perturb the hydrogeological properties of the host rock, such as desiccation of the porous medium and changes in the degree of saturation and hydraulic conductivity. In this regard, gas generation and migration may affect the stability or integrity of the barriers and threaten the biosphere through the transmission of gaseous radionuclides as long-term contaminants. Thus, from the safety perspective of DGRs, gas generation and migration should be considered in their design and construction. The understanding and modelling of gas migration within the host rock (natural barrier) and the associated potential impacts on the integrity of the natural barrier are important for the safety assessment of a DGR. Therefore, the key objectives of this Ph.D. study are (i) the development of a simulator for coupled modelling of gas migration in the host rock of a DGR for nuclear waste; and (ii) the numerical investigation of gas migration in the host rock of a DGR for nuclear waste in Ontario using the developed simulator. First, a new thermo-hydro-mechanical-chemical (THMC) simulator (TOUGHREACT-COMSOL) was developed to address these objectives. This simulator results from the coupling of the well-established numerical codes TOUGHREACT and COMSOL. A series of mathematical models, including an elastoplastic-damage model, was developed and implemented in the simulator. The predictive ability of the simulator was then validated against laboratory and field tests on gas migration in host rocks; the agreement between the predicted results and the experimental data indicates that the developed simulator can reasonably predict gas migration in DGR systems. Finally, the new simulator was used to predict gas migration and its effects at a potential DGR site in Ontario, yielding valuable results. The research conducted in this Ph.D. study provides a useful tool and information for the understanding and prediction of gas migration and its effects in a DGR, particularly in Ontario.
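The coupling architecture such a simulator relies on can be illustrated schematically. The sketch below shows a sequential (fixed-point) coupling loop in which a flow/transport step and a mechanical step exchange fields until the pressure converges; both solver functions are contrived stand-ins, not the actual APIs of TOUGHREACT or COMSOL.

```python
# Schematic sequential coupling between two solvers, as a toy fixed-point loop.
def solve_flow(pressure: float, porosity: float) -> float:
    """Stand-in for the two-phase flow/transport step (TOUGHREACT's role)."""
    return 0.9 * pressure + (1.0 - porosity)      # contractive toy update

def solve_mechanics(pressure: float) -> float:
    """Stand-in for the mechanical/damage step (COMSOL's role)."""
    return 0.1 + 1e-4 * pressure                  # toy porosity response to stress

def coupled_step(pressure: float, porosity: float,
                 tol: float = 1e-8, max_iter: int = 100) -> tuple[float, float]:
    """Iterate the two solvers until the pressure field stops changing."""
    for _ in range(max_iter):
        new_pressure = solve_flow(pressure, porosity)
        porosity = solve_mechanics(new_pressure)
        if abs(new_pressure - pressure) < tol:
            break
        pressure = new_pressure
    return new_pressure, porosity

print(coupled_step(pressure=1.0, porosity=0.1))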
123

Measuring Diffusion Coefficients in Low-Porosity Rocks by X-Ray Radiography

Maldonado Sanchez, Guadalupe 12 November 2020 (has links)
Deep geological repositories (DGRs) are considered an effective long-term solution for radioactive waste disposal. Sedimentary (argillaceous) formations and crystalline rocks are currently under investigation worldwide as potential host formations for DGRs. Their low porosity (< 1-2 %) and very low hydraulic conductivity result in diffusion-dominated solute transport. Their diffusion properties therefore need to be investigated in detail, but long-established diffusion methods do not allow an evaluation of the spatial relationship between tracers and the characteristics of the geological medium. The aim of this project was to measure diffusion coefficients in low-porosity rocks (< 2 %) using X-ray radiography and an iodide tracer. The method is a non-destructive technique based on the principle of X-ray attenuation; it provides temporally and spatially resolved information on a highly attenuating tracer diffusing in a sample. Samples from the Cobourg Formation, an Ordovician argillaceous limestone from the Michigan Basin, and from the Lac du Bonnet batholith, an Archean granitic pluton, were used in this study. X-ray radiography data from the Cobourg Formation indicate that tracer accumulation occurs on dark argillaceous layers in the rock characterized by clay minerals and organic matter. It is proposed that the I– tracer solution underwent photo-chemical oxidation, leading to the formation of I2, a highly reactive volatile iodine species, and I3–, which readily reacted with humic substances contained in the clay- and organic-rich zones of the limestone samples. In the case of the granitic samples, attempts at measuring diffusion coefficients encountered several challenges: the results indicate that the tracer signal can be detected, but the diffusion signal is masked by imaging errors and noise.
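For the geometry commonly assumed in such experiments, a semi-infinite sample in contact with a constant-concentration tracer reservoir, Fick's second law gives C(x, t) = C0 · erfc(x / (2√(Dt))), and an apparent diffusion coefficient can be fitted to radiography-derived concentration profiles. The sketch below illustrates this under that assumed geometry, with hypothetical numbers; it is not the thesis's actual processing pipeline.

```python
# Sketch: fitting an apparent diffusion coefficient D to a concentration
# profile, using the semi-infinite constant-source solution of Fick's
# second law: C(x, t) = c0 * erfc(x / (2 * sqrt(D * t))).
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

ELAPSED_TIME = 86400.0  # s; hypothetical one-day diffusion experiment

def profile(x, D, c0):
    """Normalized concentration at depth x (m) for diffusivity D (m^2/s)."""
    return c0 * erfc(x / (2.0 * np.sqrt(D * ELAPSED_TIME)))

# Hypothetical data: depths from the tracer reservoir and noisy concentrations.
x = np.linspace(0.0, 0.01, 20)                       # 0-10 mm
c = profile(x, 1e-11, 1.0) + 0.02 * np.random.randn(x.size)

(D_fit, c0_fit), _ = curve_fit(profile, x, c, p0=(1e-12, 1.0))
print(f"fitted D = {D_fit:.2e} m^2/s")
```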
124

Programming and Conceptual Design Using Building Information Modeling

Avila, Mary-Alice 01 January 2009 (has links)
This thesis explores the benefits of using Building Information Modeling (BIM) during the programming and conceptual design phase of a project. The research was based on a case study of the decisions and assumptions made during the design phases of the Center for Science at Cal Poly San Luis Obispo, where the project team used a traditional approach to project plan development. The finding of this study was that the project process would have greatly benefited from BIM tools and a collaborative team approach in the programming and conceptual design phase. Because decisions made early in a project have enormous implications for aesthetics and cost, the increased analysis of design options afforded by BIM tools would have minimized inaccurate, incomplete, and unreliable information, and allowed the design team to work in a more efficient, collaborative manner carrying through all phases of the project.
125

Implementace WebDAV rozhraní dokumentového skladu IS FIT / WebDAV Interface for IS FIT Document Repository

Jelínek, Tomáš January 2008 (has links)
The aim of this Master's thesis is the implementation of a WebDAV interface for the IS FIT document repository in PHP. It follows up on a term project that dealt with the WebDAV protocol and an open-source WebDAV server. The thesis discusses the WebDAV protocol, its purpose, and the related technologies HTTP, XML, PHP, and MySQL. It then describes the studied WebDAV server, the IS FIT document repository, and the design and implementation of the repository's WebDAV interface. The final part describes interoperation with WebDAV clients and gives a summary and evaluation of the achieved results.
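On the wire, an interface like the one the thesis implements must answer WebDAV methods such as PROPFIND with a 207 Multi-Status response. A minimal client-side sketch against a hypothetical server URL follows (Python here for illustration, though the thesis itself works in PHP); `requests` passes arbitrary HTTP methods through, which is all WebDAV needs.

```python
# Sketch: listing a WebDAV collection with a PROPFIND request at Depth 1.
import requests
import xml.etree.ElementTree as ET

BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop><D:displayname/><D:getcontentlength/></D:prop>
</D:propfind>"""

response = requests.request(
    "PROPFIND",
    "https://example.org/webdav/documents/",   # hypothetical repository URL
    data=BODY,
    headers={"Depth": "1", "Content-Type": "application/xml"},
    auth=("user", "password"),                 # hypothetical credentials
)

# A compliant server answers 207 Multi-Status with one <D:response> per item.
for href in ET.fromstring(response.content).iter("{DAV:}href"):
    print(href.text)
```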
126

On the Answer Status and Usage of Requirements Traceability Questions

Gupta, Arushi 24 October 2019 (has links)
No description available.
127

Meaningful Metrics in Software Engineering : The Value and Risks of Using Repository Metrics in a Company

Jacobsson, Frida January 2023 (has links)
Many large companies use business intelligence solutions to filter, process, and visualize data from their software source code repositories. These tools focus on improving continuous integration and are used to gain insights about people, products, and projects in the organization. However, research has shown that the quality of measurement programs in software engineering is often low, since the science behind them is unexplored. In addition, code repositories contain a considerable amount of information about developers, so several ethical and legal aspects, such as compliance with the GDPR, need to be considered before these tools are used. This thesis investigates how companies can use repository metrics and these business intelligence tools in a safe and valuable way. To answer the research questions, a case study was conducted at a Swedish company, and repository metrics from a real business intelligence tool were analyzed against several questions relating to software measurement theory, ethical and legal aspects of software engineering and metrics, and institutional theory. The results show how these metrics can be valuable to a company in different ways, for instance by visualizing collaboration in a project or by differentiating between read and active repositories. The metrics could also be valuable when linked to other data in the company, such as bug reports and repository downloads. However, the findings show that the visualizations could be perceived by developers as a form of performance monitoring, causing stress and unhealthy incentives in the organization. In addition, repository metrics are based on identifiable data from Git, which the GDPR classifies as personal data. Further, there is a risk that these tools are used simply because they are available, as a way to legitimize the company. To mitigate these risks, the thesis argues that the metrics should be anonymized and focused on teams and processes rather than individual developers, and that the teams themselves should take part in creating the Goal-Question-Metric definitions that link the metrics to what the teams wish to establish.
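The anonymization the thesis recommends can be sketched simply: author e-mails from the Git log are replaced with salted hashes before any aggregation, so nothing identifiable in the GDPR sense flows downstream. The salt handling and aggregation level below are illustrative assumptions, not the company's actual pipeline.

```python
# Sketch: commit counts keyed by pseudonymous author IDs instead of e-mails.
import hashlib
import subprocess
from collections import Counter

SALT = b"rotate-me-regularly"  # hypothetical secret salt, kept out of the repo

def pseudonymize(email: str) -> str:
    """Replace an author e-mail with a stable, salted, non-reversible ID."""
    return hashlib.sha256(SALT + email.encode()).hexdigest()[:12]

def commit_counts(repo_dir: str) -> Counter:
    """Per-author commit counts with authors pseudonymized at ingestion time."""
    emails = subprocess.run(
        ["git", "-C", repo_dir, "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return Counter(pseudonymize(email) for email in emails)
```

Aggregating these pseudonymized counts up to the team level, rather than reporting them per ID, would further align the tooling with the thesis's team-and-process focus.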
128

Foundational Data Repository for Numeric Engine Validation

Hollingsworth, Jason Michael 19 November 2008 (has links) (PDF)
Many different numeric models have been created to address a variety of hydraulic and hydrologic engineering applications. Each uses formulations and numeric methods to represent processes such as contaminant transport, coastal circulation, and watershed runoff. Although one process may be adequately represented by a model, this does not guarantee that another process will be represented, even if that process is similar; for example, a model that computes subcritical flow does not necessarily compute supercritical flow. Selecting an appropriate numeric model for a situation is therefore a prerequisite to obtaining accurate results. Current policies and resources do not provide adequate guidance in the model selection process. Available resources range from approved lists to guidelines for performing calculations to technical documentation of candidate numeric models. Many of these resources are available only from the developers of the numeric models; they focus on strengths with little or no mention of weaknesses or limitations. For this reason, engineers must make a selection based on publicity and/or familiarity rather than capability, often resulting in inappropriate application, frustration, and/or incorrect results. A comprehensive selection tool to aid engineers needs to test model capabilities by comparing model output with analytical solutions, laboratory tests, and physical case studies. The first step in building such a tool is gathering and categorizing robust data that can be used for such model comparisons. A repository has been designed, created, and made available to the engineering community for this purpose; it can be found at http://verification.aquaveo.com. It allows engineers and regulators to store studies with assigned characteristics, as well as search for and access studies based on a desired set of characteristics. Studies with characteristics similar to a desired project can help identify appropriate numeric models.
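The core comparison such a repository is meant to support, scoring a numeric model's output against an analytical solution for the same case, can be sketched as below. The error metrics are generic choices for illustration, not ones prescribed by the repository.

```python
# Sketch: agreement metrics between model output and a reference solution.
import numpy as np

def compare(computed: np.ndarray, analytical: np.ndarray) -> dict:
    """Common error measures between a model run and an analytical solution."""
    residual = computed - analytical
    return {
        "rmse": float(np.sqrt(np.mean(residual ** 2))),
        "max_abs_error": float(np.max(np.abs(residual))),
        "relative_l2": float(np.linalg.norm(residual) / np.linalg.norm(analytical)),
    }

# Hypothetical example: computed water-surface elevations at three stations
# versus the analytical profile at the same stations.
print(compare(np.array([1.02, 1.48, 1.99]), np.array([1.00, 1.50, 2.00])))
```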
129

Research in Information Technology: Analysis of Existing Graduate Research

Cole, Christopher John 12 October 2009 (has links) (PDF)
Information Technology is an academic discipline that is well recognized by the academic community. An increasing number of schools offer degrees in Information Technology, and an official curriculum has been published as part of the ACM/IEEE Computing Curriculum. A concern with Information Technology as an academic discipline is that it does not have a clearly defined set of research issues that are not studied by any other discipline. One way to propose such a set of issues is to perform a “bottom-up” analysis: gather research in IT that has already been published and analyze it for recurring themes. This research describes a repository of graduate-level work in the form of master's theses and projects and doctoral dissertations. A keyword analysis was performed on the publications gathered, and it confirmed that a set of themes could be found. As a demonstration of the viability of this approach, the methodology identified five initial themes. A larger sample is required to define a definitive set of themes for the IT discipline.
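The “bottom-up” keyword analysis described above can be sketched as a simple term-frequency pass over the collected documents; the tokenization and stopword list below are deliberately minimal assumptions, not the study's actual method.

```python
# Sketch: surfacing candidate research themes as recurring keywords.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "in", "to", "is", "for", "this", "that"}

def keyword_frequencies(abstracts: list[str], top_n: int = 15) -> list[tuple[str, int]]:
    """Most frequent content words across a corpus of thesis abstracts."""
    counts = Counter()
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(top_n)

# Usage: keyword_frequencies([abstract1, abstract2, ...]) returns term/count
# pairs whose clusters can then be grouped by hand into candidate themes.
```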
130

Using Instance-Level Meta-Information to Facilitate a More Principled Approach to Machine Learning

Smith, Michael Reed 01 April 2015 (has links) (PDF)
As the capability for capturing and storing data increases and becomes more ubiquitous, an increasing number of organizations are looking to use machine learning techniques as a means of understanding and leveraging their data. However, the success of applying machine learning techniques depends on which learning algorithm is selected, the hyperparameters provided to the selected learning algorithm, and the data supplied to it. Even among machine learning experts, selecting an appropriate learning algorithm, setting its associated hyperparameters, and preprocessing the data can be a challenging task and is generally left to the expertise of an experienced practitioner, intuition, trial and error, or another heuristic approach. This dissertation proposes a more principled approach to understanding how the learning algorithm, hyperparameters, and data interact with each other, to facilitate a data-driven approach to applying machine learning techniques. Specifically, this dissertation examines the properties of the training data and proposes techniques to integrate this information into the learning process and into preprocessing of the training set. It also proposes techniques and tools for selecting a learning algorithm and setting its hyperparameters. The dissertation comprises a collection of papers that address understanding the data used in machine learning and the relationship between the data, the performance of a learning algorithm, and the learning algorithm's associated hyperparameter settings. Contributions of this dissertation include:

* Instance hardness, which examines how difficult an instance is to classify correctly (see the sketch after this list).
* Hardness measures that characterize properties of why an instance may be misclassified.
* Several techniques for integrating instance hardness into the learning process. These techniques demonstrate the importance of considering each instance individually rather than performing a global optimization that considers all instances equally.
* Large-scale examinations of the investigated techniques, covering a large number of data sets and learning algorithms, which provide more robust results that are less likely to be affected by noise.
* The Machine Learning Results Repository, a repository for storing the results of machine learning experiments at the instance level (the prediction for each instance is stored). This allows many data-set-level measures, such as accuracy, precision, or recall, to be calculated, and the results can be used to better understand the interaction between the data, learning algorithms, and associated hyperparameters. Further, the repository is designed to be a tool for the community, where data can be downloaded and uploaded to follow the development of machine learning algorithms and applications.
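Instance hardness, in the classifier-based sense the first contribution describes, can be sketched as the fraction of a diverse set of learning algorithms that misclassify an instance under cross-validation. The particular classifier set below is an illustrative choice, not the dissertation's exact ensemble.

```python
# Sketch: per-instance hardness as the fraction of learners that get it wrong.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
learners = [DecisionTreeClassifier(random_state=0), GaussianNB(),
            KNeighborsClassifier(n_neighbors=5)]

# For each learner, predict every instance out-of-fold, then average the
# per-instance misclassification indicator across learners.
errors = np.stack([cross_val_predict(clf, X, y, cv=5) != y for clf in learners])
hardness = errors.mean(axis=0)      # 0.0 = always correct, 1.0 = always wrong
print("hardest instances:", np.argsort(hardness)[-5:])
```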
