251

Sensitivity of Lattice Physics Modelling of the Canadian PT-SCWR to Changes in Lateral Coolant Density Gradients in a Channel

Scriven, Michael 06 1900 (has links)
The Pressure Tube Super Critical Water Reactor (PT-SCWR) is a design with a light water coolant operating at 25 MPa, above the thermodynamic critical pressure, with a separated low-pressure, low-temperature moderator, facilitated by a High Efficiency Channel consisting of a pressure tube and a porous ceramic insulator tube. The 2011 AECL reference design is considered along with a 2012 benchmark. In the 2011 reference design the coolant is permitted to flow through the insulator. The insulator region has a temperature gradient from 881 K at the inner liner tube to 478 K at the pressure tube wall. The density of light water varies by an order of magnitude depending on the local enthalpy of the fluid. The lateral coolant density is estimated as a radial function at five axial positions with the lattice physics codes WIMS-AECL and Serpent. The lateral coolant density variations in the insulator region of the PT-SCWR cause strong reactivity and CVR effects which depend heavily on axial location, due to the changes in the estimated mass of coolant and the physical relocation of the coolant closer to the moderator, as the coolant is estimated to be least dense closest to the fuel region of the coolant flow. The beta version of Serpent 2 is used to explore the lateral coolant densities in the subchannel region of the insulator in the 2012 version of the PT-SCWR. A more advanced coolant density analysis with FLUENT is used to estimate the subchannel coolant density variation, which is linked to Serpent 2's multi-physics interface, allowing the lattice code to measure the sensitivity of the model to the analysis of the subchannels. This analysis increases the reactivity of the PT-SCWR through the displacement of the coolant. Serpent 2 is accepted as a valid lattice code for PT-SCWR analysis. / Thesis / Master of Applied Science (MASc)
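The order-of-magnitude density swing described in the abstract can be illustrated with any water-property library. A minimal sketch, assuming the open-source CoolProp package (not part of the thesis toolchain, which uses WIMS-AECL, Serpent, and FLUENT), evaluates light-water density across the quoted insulator temperature gradient at 25 MPa:

```python
# Illustrative only: this sketch visualises how strongly light-water density
# varies with temperature at the PT-SCWR operating pressure of 25 MPa,
# using the CoolProp property library (an assumption, not the thesis toolchain).
from CoolProp.CoolProp import PropsSI

PRESSURE = 25e6  # Pa, supercritical operating pressure quoted in the abstract

# Temperatures spanning the insulator gradient quoted in the abstract
# (881 K at the inner liner tube down to 478 K at the pressure tube wall).
for T in (478, 600, 700, 800, 881):
    rho = PropsSI("D", "T", T, "P", PRESSURE, "Water")  # density in kg/m^3
    print(f"T = {T:4d} K  ->  rho = {rho:7.1f} kg/m^3")
```

The printed densities fall by roughly an order of magnitude between the pressure tube wall and the inner liner tube, which is the driver of the reactivity and CVR sensitivities discussed above.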
252

Comparison of graph database and relational database performance

Asplund, Einar, Sandell, Johan January 2023 (has links)
There has been a paradigm shift in the way information is produced, processed, and consumed as a result of social media. When planning data storage, it is important to choose a database suited to the type of data, as unsuitable storage and analysis can have a noticeable impact on the system's energy consumption. Effective data analysis is also essential, because deficient analysis of a large dataset can lead to repercussions through unsound decisions and inadequate planning. In recent years, a growing number of organizations have offered services that can no longer be delivered efficiently with relational databases. An alternative is the graph database, a powerful solution for storing and searching relationship-dense data. The research question the thesis aims to answer is: how do state-of-the-art graph database and relational database technologies compare with each other from a performance perspective, in terms of time taken to query, CPU usage, memory usage, power usage, and temperature of the server? To answer the research question, an experimental study using analysis of variance was performed. One relational database, MySQL, and two graph databases, ArangoDB and Neo4j, were compared using a benchmark, Novabench. The results of the analysis of variance, the Kruskal-Wallis test, and the post-hoc tests show that there are significant differences between the database technologies. This means the null hypothesis, that there is no significant difference, is rejected, and the alternative hypothesis, that there is a significant difference in performance between the database technologies in terms of time to query, central processing unit usage, memory usage, average energy usage, and temperature, holds. In conclusion, the research question was answered. The study shows that Neo4j was the fastest at executing queries, followed by MySQL, with ArangoDB in last place. The results also showed that MySQL was more demanding on memory than the other database technologies.
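As an illustration of the hypothesis tests named above, the following sketch runs a one-way ANOVA and a Kruskal-Wallis test on hypothetical query-time samples; the actual measurements in the thesis come from Novabench.

```python
# A minimal sketch of the statistical comparison described above, assuming
# hypothetical query-time samples (in milliseconds) for the three databases.
from scipy import stats

mysql    = [12.1, 11.8, 12.5, 13.0, 12.2]
neo4j    = [ 8.4,  8.9,  8.1,  8.6,  8.7]
arangodb = [15.2, 14.8, 15.9, 15.1, 15.5]

# Parametric one-way analysis of variance.
f_stat, p_anova = stats.f_oneway(mysql, neo4j, arangodb)
# Non-parametric alternative when normality is doubtful.
h_stat, p_kw = stats.kruskal(mysql, neo4j, arangodb)

print(f"ANOVA:          F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
# A p-value below the chosen significance level rejects the null hypothesis
# that the technologies perform equally on this metric.
```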
253

Semiparametric Bayesian Joint Modeling with Applications in Toxicological Risk Assessment

Hwang, Beom Seuk 06 August 2013 (has links)
No description available.
254

Correlation Between the TCAP Test and ThinkLink Learning's Predictive Assessment Series Test in Reading, Math, and Science in a Tennessee School System.

Day, Jared Edwin 17 December 2011 (has links) (PDF)
The purpose of the study was to determine if a correlation existed between the Predictive Assessment Series (PAS) Test, marketed by Discovery Education, and the Tennessee Comprehensive Assessment Program (TCAP) Achievement Test in reading, math, and science for grade 4, grade 6, and grade 8. The study included 4th-grade, 6th-grade, and 8th-grade students during the 2008-2009 school year who had taken the ThinkLink Predictive Assessment Series for reading, math, and science in February 2009 and had taken the TCAP reading, math, and science test in April 2009. The study used a quantitative approach. Data were collected from one school system in East Tennessee. The school system had 5 elementary schools and 1 middle school. Data collection tools used in the study included results from the TCAP test, administered in paper-and-pencil format, and from a computer-based test, the ThinkLink PAS. Student scaled scores were used to determine the degree of correlation between the TCAP and PAS tests. The data were analyzed using the Statistical Package for the Social Sciences (SPSS). Based on the analysis and findings of this study, the ThinkLink PAS test appears to have been successful in predicting how well students will perform on the state assessment. Overall, the correlations between the PAS and TCAP were consistent across grades, across gender within grade levels, and with Title I and Non-Title I students. The findings also show that it was possible to calculate a predicted TCAP score in reading, mathematics, and science. This was an important finding because the ability of the PAS assessment to predict TCAP scores could give educators another tool for targeting students who are potentially at risk of not meeting state benchmark proficiency levels. Based on the findings, there appears to be a strong relationship between the ThinkLink PAS benchmark assessment and the TCAP assessment in reading, math, and science for grade 4, grade 6, and grade 8. The relationships between PAS and TCAP tests in reading, math, and science were consistent across gender within grade levels. According to the results of the test of homogeneity of slopes, the relationships between PAS and TCAP tests in reading, math, and science were also consistent across Title I and Non-Title I schools. The test of homogeneity of slopes showed that the slopes of the regression lines for the scores of Title I and Non-Title I students were the same (parallel) for grade 4, grade 6, and grade 8. Overall, the correlations between PAS and TCAP scores for Title I and Non-Title I students were moderately strong to very strong. The predictive validity of the PAS provides educators with valuable time to reteach grade-level skills to students who are at risk of scoring non-proficient on the TCAP.
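As an illustration of how a predicted TCAP score can be derived from a PAS score, the sketch below fits a simple linear regression to hypothetical scaled scores; the study itself analysed real student scores in SPSS.

```python
# A minimal sketch of computing a predicted TCAP scaled score from a PAS
# scaled score via simple linear regression. The scores below are
# hypothetical placeholders, not data from the study.
from scipy import stats

pas_scores  = [410, 435, 450, 470, 485, 500, 515, 530]   # predictor (February)
tcap_scores = [720, 742, 751, 768, 777, 790, 801, 815]   # outcome   (April)

result = stats.linregress(pas_scores, tcap_scores)
print(f"r = {result.rvalue:.3f}  (strength of the correlation)")
print(f"predicted TCAP = {result.slope:.3f} * PAS + {result.intercept:.1f}")

# Example: flag a student whose predicted score falls below a benchmark cut.
pas = 425
predicted_tcap = result.slope * pas + result.intercept
print(f"PAS {pas} -> predicted TCAP {predicted_tcap:.0f}")
```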
255

Improving performance of sequential code through automatic parallelization / Prestandaförbättring av sekventiell kod genom automatisk parallellisering

Sundlöf, Claudius January 2018 (has links)
Automatic parallelization is the conversion of sequential code into multi-threaded code with little or no supervision. An ideal implementation of automatic parallelization would allow programmers to fully utilize available hardware resources to deliver optimal performance when writing code. Automatic parallelization has been studied for a long time, with one result being that modern compilers support vectorization without any input. In this study, contemporary parallelizing compilers are examined in order to determine whether or not they can easily be used in modern software development, and how code generated by them compares to manually parallelized code. Five compilers are included in the study: ICC, Cetus, autoPar, PLUTO, and the TC Optimizing Compiler. Benchmarks are used to measure the speedup of parallelized code; these benchmarks are executed on three different sets of hardware. The NAS Parallel Benchmarks (NPB) suite is used for ICC, Cetus, and autoPar, and PolyBench for the previously mentioned compilers in addition to PLUTO and the TC Optimizing Compiler. Results show that parallelizing compilers outperform serial code in most cases, with certain coding styles hindering their ability to parallelize code. In the NPB suite, manually parallelized code is outperformed by Cetus and ICC for one benchmark. In the PolyBench suite, PLUTO outperforms the other compilers to a great extent, producing code optimized not only for parallel execution but also for vectorization. Limitations in code generated by Cetus and autoPar prevent them from being used in legacy projects, while PLUTO and TC do not offer fully automated parallelization. ICC was found to offer the most complete automatic parallelization solution, although the speedups it offered were not as great as those offered by other tools. / Automatic parallelization is the conversion of sequential code into multi-threaded code with little or no supervision. An ideal implementation of automatic parallelization would let programmers fully exploit available hardware to achieve optimal performance when writing code. Automatic parallelization has been a research area for a long time and has resulted in modern compilers supporting vectorization without any effort on the programmer's part. In this study, contemporary parallelizing compilers are examined to determine whether they can easily be integrated into modern software development, and how the code these compilers generate differs from manually parallelized code. Five compilers are included in the study: ICC, Cetus, autoPar, PLUTO, and the TC Optimizing Compiler. Benchmarks are used to measure the speedup of parallelized code; these benchmarks are executed on three distinct hardware setups. The NAS Parallel Benchmarks (NPB) are used as the benchmark for ICC, Cetus, and autoPar, and PolyBench for all compilers in the study. Results show that parallelizing compilers generate code that performs better than sequential code in most cases, and that certain coding styles limit their ability to parallelize code. In NPB, code parallelized by Cetus and ICC performs better than manually parallelized code for one benchmark. In PolyBench, PLUTO performs far better than the other compilers and produces code that is optimized not only for parallel execution but also for vectorization. Limitations in code generated by Cetus and autoPar prevent the use of these tools in established projects, while PLUTO and TC are not capable of fully automatic parallelization. It was found that ICC offers the most complete solution for automatic parallelization, but the achievable speedups were not at the same level as for the other compilers.
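For reference, the speedup and parallel-efficiency figures used in such comparisons follow directly from the measured runtimes; the sketch below uses hypothetical timings, not the NPB or PolyBench measurements from the thesis.

```python
# A minimal sketch of the speedup and parallel-efficiency figures used to
# compare auto-parallelized binaries; all runtimes below are hypothetical.
serial_seconds = 120.0

# Hypothetical wall-clock times of auto-parallelized binaries on 8 cores.
parallel_seconds = {
    "ICC":     22.0,
    "Cetus":   30.0,
    "autoPar": 35.0,
    "PLUTO":   15.0,
}

THREADS = 8
for compiler, t in parallel_seconds.items():
    speedup = serial_seconds / t       # S = T_serial / T_parallel
    efficiency = speedup / THREADS     # E = S / p
    print(f"{compiler:8s} speedup = {speedup:5.2f}x  efficiency = {efficiency:.2f}")
```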
256

Dynamic Drivers, Risk Management Practices, And Competitive Outcomes: Applying Multiple Research Methods

Deng, Xiyue January 2021 (has links)
No description available.
257

Benchmark Studies For Structural Health Monitoring Using Analytical And Experimental Models

Burkett, Jason Lee 01 January 2005 (has links)
The latest bridge inventory report for the United States indicates that 25% of the highway bridges are structurally deficient or functionally obsolete. With such a large number of bridges in this condition, safety and serviceability concerns become increasingly relevant, along with the associated increase in user costs and delays. Biennial inspections have proven subjective and need to be coupled with standardized non-destructive testing methods to accurately assess a bridge's condition for decision making purposes. Structural health monitoring is typically used to track and evaluate performance, symptoms of operational incidents, and anomalies due to deterioration and damage during regular operation as well as after an extreme event. Dynamic testing and analysis are concepts widely used for health monitoring of existing structures. Successful health monitoring applications can be achieved by integrating experimental, analytical and information technologies on real-life, operating structures. Real-life investigations must be backed up by laboratory benchmark studies. In addition, laboratory benchmark studies are critical for validating theory, concepts, and new technologies as well as creating a collaborative environment between different researchers. To implement structural health monitoring methods and technologies, a physical bridge model was developed in the UCF structures laboratory as part of this thesis research. In this study, the development and testing of the bridge model are discussed after a literature review of physical models. Different aspects of model development with respect to the physical bridge model are outlined in terms of design considerations, instrumentation, finite element modeling, and the simulation of damage scenarios. Examples of promising damage detection methods were evaluated for common damage scenarios simulated on the numerical and physical models. These damage indices were applied and directly compared for the same experimental and numerical tests. To assess the simulated damage, indices such as modal flexibility and curvature were applied using mechanics and structural dynamics theory. Damage indices based on modal flexibility were observed to be promising as one of the primary indicators of damage that can be monitored over the service life of a structure. Finally, this thesis study will serve an international effort that has been initiated to explore bridge health monitoring methodologies under the auspices of the International Association for Bridge Maintenance and Safety (IABMAS). The data generated in this thesis research will be made available to researchers as well as practitioners in the broad field of structural health monitoring through several national and international societies, associations, and committees, such as the American Society of Civil Engineers (ASCE) Dynamics Committee and the newly formed ASCE Structural Health Monitoring and Control Committee.
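As a generic illustration of the modal-flexibility index mentioned above, the sketch below assembles the flexibility matrix from mass-normalised mode shapes and natural frequencies and compares a baseline state with a damaged one; the modal data are hypothetical, not measurements from the UCF bridge model.

```python
# Generic sketch of a modal-flexibility damage index, assuming mass-normalised
# mode shapes. All mode shapes and frequencies below are hypothetical.
import numpy as np

def modal_flexibility(frequencies_hz, mode_shapes):
    """F = sum_i (1/omega_i^2) * phi_i * phi_i^T over the measured modes."""
    omegas = 2.0 * np.pi * np.asarray(frequencies_hz)      # rad/s
    phis = np.asarray(mode_shapes, dtype=float)            # shape (n_modes, n_dof)
    return sum(np.outer(p, p) / w**2 for w, p in zip(omegas, phis))

# Hypothetical baseline (healthy) and damaged modal data for 4 sensor locations.
f_healthy, phi_healthy = [3.2, 8.9, 17.5], [[0.30, 0.70, 0.70, 0.30],
                                            [0.60, 0.40, -0.40, -0.60],
                                            [0.50, -0.50, -0.50, 0.50]]
f_damaged, phi_damaged = [3.0, 8.6, 17.1], [[0.32, 0.72, 0.68, 0.29],
                                            [0.62, 0.41, -0.38, -0.58],
                                            [0.52, -0.48, -0.52, 0.48]]

delta_F = modal_flexibility(f_damaged, phi_damaged) - modal_flexibility(f_healthy, phi_healthy)
# The largest column-wise change in flexibility points to the likely damage location.
index = np.max(np.abs(delta_F), axis=0)
print("flexibility change per DOF:", np.round(index, 4))
print("suspected damage near DOF:", int(np.argmax(index)))
```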
258

Comparative Analysis of Load Balancing in Cloud Platforms for an Online Bookstore Web Application using Apache Benchmark

Pothuganti, Srilekha, Samanth, Malepiti January 2023 (has links)
Background: Cloud computing has transformed the landscape of application deployment, offering on-demand access to compute resources, databases, and services via the internet. This thesis explores the development of an innovative online bookstore web application, harnessing cloud infrastructure across AWS, Azure, and GCP. The front end uses HTML, CSS, and JavaScript to create responsive web pages with an intuitive user interface. The back end is built with Node.js and Express for high-performance server-side logic and routing, while MongoDB, a distributed NoSQL database, stores the data. This cloud-native architecture facilitates easy scaling and ensures high availability. Objectives: The main objectives of this thesis are to develop an intuitive online bookstore enabling users to add, exchange, and purchase books; to deploy it across AWS, Azure, and GCP for scalability; to implement load balancers for enhanced performance; and to conduct load testing and benchmarking to compare the efficiency of these load balancers. The study aims to determine the best-performing cloud platform and load-balancing strategy, comparing load balancer metrics across the platforms to ensure the best user experience for the online bookstore. Methods: The website is deployed on the three cloud platforms by creating instances separately on each platform, and a load balancer is then created for each of the services. Using each platform's monitoring tools, graphs are obtained for the metrics. The load is then increased and decreased in the Apache Benchmark tool for specific tasks on the website, and the results are compared through aggregate graphs and summary reports. The website's overall performance is tested using metrics such as throughput, CPU utilisation, error percentage, and cost efficiency. Results: The results are based on running the Apache Benchmark load testing tool against the selected website on each cloud platform. The results for AWS, Azure, and GCP are shown in aggregate graphs, which indicate which service is best for users by showing which places the least load on the server and returns the requested data in the shortest time. Runs of 10 and 50 requests were considered, and based on the results the metrics of throughput, CPU utilisation, error percentage, and cost efficiency were compared to determine which cloud platform performs better. Conclusions: According to the results for 10 and 50 requests, it can be concluded that GCP has higher throughput and CPU utilisation than AWS and Azure, which proved less flexible and efficient for users. Thus, GCP outperforms the others in terms of load balancing.
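A minimal sketch of how the Apache Benchmark runs and metric extraction could be scripted is shown below; the deployment URLs are placeholders, and the thesis relied on each platform's own monitoring tools in addition to ab's summary output.

```python
# Sketch of driving the Apache Benchmark (ab) tool and extracting the
# throughput metrics used for comparison. URLs and request counts below are
# hypothetical placeholders for the bookstore deployments.
import re
import subprocess

def run_ab(url, requests=50, concurrency=10):
    """Run ab and return (requests_per_second, mean_ms_per_request)."""
    out = subprocess.run(
        ["ab", "-n", str(requests), "-c", str(concurrency), url],
        capture_output=True, text=True, check=True,
    ).stdout
    rps = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
    ms = float(re.search(r"Time per request:\s+([\d.]+)", out).group(1))
    return rps, ms

if __name__ == "__main__":
    deployments = {
        "AWS":   "http://aws-bookstore.example.com/",
        "Azure": "http://azure-bookstore.example.com/",
        "GCP":   "http://gcp-bookstore.example.com/",
    }
    for label, url in deployments.items():
        rps, ms = run_ab(url, requests=50, concurrency=10)
        print(f"{label}: {rps:.1f} req/s, {ms:.1f} ms per request (mean)")
```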
259

Benchmarking structure from motion algorithms with video footage taken from a drone against laser-scanner generated 3D models

Martell, Angel Alfredo January 2017 (has links)
Structure from motion is a novel approach to generating 3D models of objects and structures. The dataset simply consists of a series of images of an object taken from different positions. The ease of data acquisition and the wide array of available algorithms make the technique easily accessible. The structure from motion method identifies features in all the images of the dataset, such as edges with gradients in multiple directions, tries to match these features between all the images, and then computes the relative motion the camera underwent between any pair of images. It builds a 3D model from the correlated features, creating a 3D point cloud with colour information of the scanned object. Different implementations of the structure from motion method use different approaches to solve the feature-correlation problem between the images in the dataset, different methods for detecting the features, and different alternatives for sparse and dense reconstruction. These differences produce variations in the final output across distinct algorithms. This thesis benchmarked these different algorithms in accuracy and processing time. For this purpose, a terrestrial 3D laser scanner was used to scan structures and buildings to generate a ground-truth reference against which the structure from motion algorithms were compared. A video feed was then captured from a drone with a built-in camera flying around the structure or building to generate the input for the structure from motion algorithms. Different structures are considered, taking into account how rich or poor in features they are, since this affects the result of the structure from motion algorithms. The structure from motion algorithms generated 3D point clouds, which were then analysed with a tool such as CloudCompare to benchmark how similar they are to the laser-scanner-generated data, and the runtime was recorded for comparison across all algorithms. Subjective analysis was also made, such as how easy the algorithm is to use and how complete the produced model looks in comparison to the others. In the comparison it was found that there is no absolute best algorithm, since every algorithm excels in different aspects. There are algorithms that can generate a model very fast, managing to scale their execution time linearly with the size of the input, but at the expense of accuracy. There are also algorithms that take a long time for dense reconstruction but generate almost complete models even in the presence of featureless surfaces, such as COLMAP's modified PatchMatch algorithm. The structure from motion methods are able to generate models with an accuracy of up to 3 cm when scanning a simple building, with Visual Structure from Motion and Open Multi-View Environment ranking among the most accurate. It is worth highlighting that the error in accuracy grows as the complexity of the scene increases. Finally, it was found that the structure from motion method cannot correctly reconstruct structures with reflective surfaces, or repetitive patterns when the images are taken from mid to close range, as the resulting errors can be as high as 1 m on a large structure.
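As an illustration of the cloud-to-cloud comparison that CloudCompare performs, the sketch below computes nearest-neighbour distances between a hypothetical structure-from-motion cloud and a reference cloud, assuming both are already registered in the same coordinate frame.

```python
# Sketch of a cloud-to-cloud comparison step, assuming the SfM output and the
# laser-scanner reference are N x 3 arrays in the same coordinate frame.
# The point clouds below are synthetic toys, not data from the thesis.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(sfm_points, reference_points):
    """Nearest-neighbour distance from every SfM point to the reference cloud."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(sfm_points)
    return distances

rng = np.random.default_rng(0)
reference = rng.uniform(0, 10, size=(5000, 3))                   # metres
sfm = reference[:2000] + rng.normal(0.0, 0.03, size=(2000, 3))   # ~3 cm noise

d = cloud_to_cloud_error(sfm, reference)
print(f"mean error: {d.mean()*100:.1f} cm   RMS: {np.sqrt((d**2).mean())*100:.1f} cm")
```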
260

The scheduling of manufacturing systems using Artificial Intelligence (AI) techniques in order to find optimal/near-optimal solutions.

Maqsood, Shahid January 2012 (has links)
This thesis aims to review and analyze the scheduling problem in general, and the Job Shop Scheduling Problem (JSSP) in particular, together with the solution techniques applied to these problems. The JSSP is the most general and popular hard combinatorial optimization problem in manufacturing systems. For the past sixty years, an enormous amount of research has been carried out to solve these problems. The literature review showed the inherent shortcomings of solutions to scheduling problems. This has directed researchers to develop hybrid approaches, as no single technique for scheduling has yet been successful in providing optimal solutions to these difficult problems, and there is much potential for improvement in the existing techniques. The hybrid approach complements and compensates for the limitations of each individual solution technique for better performance and improved results in both static and dynamic production scheduling environments. Over the past years, hybrid approaches have generally outperformed simple Genetic Algorithms (GAs). Therefore, two novel priority heuristic rules are developed: the Index Based Heuristic and the Hybrid Heuristic. These rules are applied to benchmark JSSPs and compared with popular traditional rules. The results show that the new heuristic rules outperform the traditional heuristic rules over a wide range of benchmark JSSPs. Furthermore, a hybrid GA is developed as an alternative scheduling approach. The hybrid GA uses the novel heuristic rules in its key steps. The hybrid GA is applied to benchmark JSSPs and is also tested on benchmark flow shop scheduling problems and industrial case studies. The hybrid GA successfully found solutions to JSSPs and is not problem dependent. Its performance across the case studies has shown that the developed scheduling model can be applied to any real-world scheduling problem to achieve optimal or near-optimal solutions, demonstrating the effectiveness of the hybrid GA in real-world scheduling problems. In conclusion, all the research objectives are achieved. Finally, future work for the developed heuristic rules and the hybrid GA is discussed and recommendations are made on the basis of the results. / Board of Trustees, Endowment Fund Project, KPK University of Engineering and Technology (UET), Peshawar and Higher Education Commission (HEC), Pakistan
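For readers unfamiliar with priority dispatching rules, the sketch below schedules a small hypothetical job shop instance with the classic shortest-processing-time (SPT) rule; it is a generic illustration, not the Index Based or Hybrid Heuristic developed in the thesis.

```python
# Greedy SPT dispatching for a small hypothetical JSSP instance. This is an
# illustrative priority rule only; it does not reproduce the thesis's heuristics.

def spt_schedule(jobs):
    """Greedy schedule: repeatedly start the ready operation with the shortest
    processing time (ties broken by earliest start). Returns (makespan, schedule)."""
    machine_free = {}            # machine id -> time it becomes free
    job_free = [0] * len(jobs)   # job id -> time its last operation finished
    next_op = [0] * len(jobs)    # index of each job's next operation
    schedule = []                # (job, machine, start, end)

    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        # Candidates: the next unscheduled operation of every unfinished job.
        candidates = []
        for j, ops in enumerate(jobs):
            if next_op[j] < len(ops):
                machine, p_time = ops[next_op[j]]
                start = max(job_free[j], machine_free.get(machine, 0))
                candidates.append((p_time, start, j, machine))
        p_time, start, j, machine = min(candidates)   # SPT priority rule
        end = start + p_time
        schedule.append((j, machine, start, end))
        job_free[j] = end
        machine_free[machine] = end
        next_op[j] += 1

    makespan = max(end for _, _, _, end in schedule)
    return makespan, schedule

# Hypothetical instance: jobs[j] is an ordered list of (machine, processing_time).
jobs = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]
makespan, schedule = spt_schedule(jobs)
print("SPT makespan:", makespan)
for j, m, s, e in schedule:
    print(f"job {j} on machine {m}: {s}-{e}")
```

A GA-based approach such as the one developed in the thesis searches over many such schedules rather than committing to a single greedy choice, which is why it can improve on any fixed dispatching rule.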
