11

Computer analysis of Cross Canyon culvert

Lee, Chiang-Yung January 1982 (has links)
Buried culverts contribute to the convenience and quality of human life, and research over the past decades has allowed them to serve people more widely and efficiently. The Cross Canyon culvert research project is one such effort. A computer program was used in this study. An overburden-dependent soil model was chosen to represent the stress states of the soil in the backfill. Triaxial shear test data were converted into an overburden-dependent soil model, and this converted model was then modified: the modified soil model was obtained by minimizing the difference between the measured and computed crown vertical displacements. Once the modified soil model was obtained, parametric studies were carried out on (1) the effects of different inclusion materials on the culvert, (2) the effects of a polystyrene plank wrapped around the culvert, (3) the effects of concrete bedding, and (4) the effects of compaction. It was found that the inclusion material had a great influence on the moments in the culvert between the crown and the 45-degree position. Concrete bedding was not a good practice because the moments increased greatly compared with those without concrete bedding. Finally, compaction did not have much effect on the behavior of the culvert. / Master of Science
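As a hedged illustration of the calibration step described above (minimizing the difference between measured and computed crown displacement), the sketch below fits a single hypothetical stiffness scale factor; `run_fe_model`, the measured value, and the bounds are all stand-ins, not the study's actual program or data.

```python
# Hypothetical sketch: calibrating a soil-model parameter by minimizing the
# squared difference between measured and computed crown vertical displacement.
from scipy.optimize import minimize_scalar

measured_crown_disp = 0.012  # m, illustrative measured value

def run_fe_model(stiffness_scale: float) -> float:
    """Stand-in for the finite element analysis: returns the computed crown
    displacement for a given scaling of the overburden-dependent modulus."""
    return 0.015 / stiffness_scale  # toy response: softer soil -> larger disp

def misfit(stiffness_scale: float) -> float:
    return (run_fe_model(stiffness_scale) - measured_crown_disp) ** 2

result = minimize_scalar(misfit, bounds=(0.1, 10.0), method="bounded")
print(f"calibrated stiffness scale: {result.x:.3f}")
```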
12

Processor and postprocessor for a plane frame analysis program on the IBM PC

Ghabra, Fawwaz I. 15 November 2013 (has links)
In this thesis, a PROCESSOR and a POSTPROCESSOR are developed for a plane frame analysis computer program on the IBM PC. The PROCESSOR reads the data prepared by a PREPROCESSOR and solves for the unknown joint displacements using the matrix displacement method. The POSTPROCESSOR uses the results of the PROCESSOR to obtain the required responses of the structure. A chapter on testing procedures is also provided. / Master of Science
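A minimal sketch of the matrix displacement method at the heart of such a PROCESSOR, assuming a single horizontal plane-frame member (a cantilever) with illustrative section properties; the thesis's program handles general frames and reads its input from the PREPROCESSOR.

```python
# Matrix displacement method for one plane-frame element: assemble the
# stiffness matrix, apply boundary conditions, and solve K u = F.
import numpy as np

E, A, I, L = 200e9, 1e-3, 1e-6, 2.0  # Pa, m^2, m^4, m (assumed values)
EA, EI = E * A, E * I

# 6x6 stiffness matrix of a plane frame element (axial + bending terms);
# for a horizontal member the local and global matrices coincide.
k = np.array([
    [ EA/L,        0,           0,      -EA/L,        0,           0     ],
    [ 0,     12*EI/L**3,  6*EI/L**2,     0,    -12*EI/L**3,  6*EI/L**2  ],
    [ 0,      6*EI/L**2,  4*EI/L,        0,     -6*EI/L**2,  2*EI/L     ],
    [-EA/L,        0,           0,       EA/L,        0,           0     ],
    [ 0,    -12*EI/L**3, -6*EI/L**2,     0,     12*EI/L**3, -6*EI/L**2  ],
    [ 0,      6*EI/L**2,  2*EI/L,        0,     -6*EI/L**2,  4*EI/L     ],
])

free = [3, 4, 5]                   # node 2 DOFs (u, v, theta); node 1 fixed
F = np.array([0.0, -1000.0, 0.0])  # 1 kN downward load at the free end

u = np.linalg.solve(k[np.ix_(free, free)], F)
print("tip displacements (u, v, theta):", u)  # v should equal -P*L^3/(3*E*I)
```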
13

Test Data Extraction and Comparison with Test Data Generation

Raza, Ali 01 August 2011 (has links)
Testing an integrated information system that relies on data from multiple sources can be a challenge, particularly when the data is confidential. This thesis describes a novel test-data extraction approach, called semantic-based test data extraction for integrated systems (iSTDE), that solves many of the problems associated with creating realistic test data for integrated information systems containing confidential data. iSTDE reads a consistent cross-section of data from the production databases, manipulates that data to obscure individual identities while still preserving overall semantic data characteristics that are critical to thorough system testing, and then moves that test data to an external test environment. This thesis also presents a theoretical study that compares test-data extraction with a competing technique, named test-data generation. Specifically, this thesis (a) describes a comparison method that includes a comprehensive list of characteristics essential for testing database applications, organized into seven different areas, (b) presents an analysis of the relative strengths and weaknesses of the different test-data creation techniques, and (c) reports a number of specific conclusions that will help testers make appropriate choices.
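One hypothetical flavor of the identity-obscuring step: consistent pseudonymization that hides real identifiers while keeping referential links across extracted tables intact. The schema, salt, and field names below are illustrative assumptions, not iSTDE's actual design.

```python
# Sketch: map real identifiers to stable pseudonyms so joins across
# extracted tables still work, while individual identities are obscured.
import hashlib

def pseudonym(real_id: str, secret_salt: str = "test-env-salt") -> str:
    """Return a stable, non-reversible pseudonym for a real identifier."""
    digest = hashlib.sha256((secret_salt + real_id).encode()).hexdigest()
    return "P" + digest[:10]

# Illustrative rows from two related production tables.
patients = [{"id": "123-45-6789", "age": 47}, {"id": "987-65-4321", "age": 51}]
visits   = [{"patient_id": "123-45-6789", "diagnosis": "J45"}]

for row in patients:
    row["id"] = pseudonym(row["id"])
for row in visits:
    row["patient_id"] = pseudonym(row["patient_id"])  # same mapping keeps joins intact

print(patients[0]["id"], "==", visits[0]["patient_id"])  # identical pseudonyms
```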
14

Odhad výkonnosti diskových polí s využitím prediktivní analytiky / Estimating performance of disk arrays using predictive analytics

Vlha, Matej January 2017 (has links)
This thesis focuses on disk arrays. The goal is to design test scenarios to measure the performance of a disk array and to use predictive analytics tools to train a model that predicts the selected performance parameter on a measured data set. The implemented web application demonstrates the functionality of the trained model and shows an estimate of the disk array's performance.
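A minimal sketch of the predictive-analytics step, assuming tabular measurements with illustrative feature names and a synthetic target; the thesis's actual features, model choice, and performance parameter may differ.

```python
# Fit a regression model mapping workload/configuration features to a
# measured performance metric (e.g., IOPS), then report held-out error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))  # e.g., queue depth, block size, read ratio
y = 200*X[:, 0] + 50*X[:, 1] - 30*X[:, 2] + rng.normal(0, 5, 500)  # synthetic IOPS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```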
15

The miniature electrical cone penetrometer and data acquisition system

Kwiatkowski, Terese Marie January 1985 (has links)
The static cone penetrometer is an in-situ testing tool which was originally developed to derive information on soil type and soil strength. More recently, it has found application in liquefaction assessment. Typical cone penetrometers are heavy-duty devices which are operated with the assistance of a drill rig. However, this capacity is not necessary in the case of field studies of liquefaction, since liquefaction usually occurs at relatively shallow depths. This thesis is directed toward the development of a miniature, lightweight cone penetrometer which can be used in earthquake reconnaissance studies related to liquefaction problems. The research for this thesis involved four principal objectives: 1. Development of procedures to automatically acquire and process measurements from a miniature electrical cone; 2. Development and performance of tests in a model soil-filled bin to calibrate the cone; 3. Evaluation of the utility and accuracy of the cone results as a means to assess conventional soil properties; and 4. A preliminary evaluation of the cone results in the context of recently developed methods to predict liquefaction potential. The work in regard to the first objective involved assembling, and writing software for, a microcomputer-based data acquisition system; a sketch of the kind of processing involved follows below. Successful implementation of this system allowed data from the tests to be rapidly processed and displayed. Calibration tests with the cone were carried out in a four-foot-high model bin which was filled ten times with sand formed to a variety of densities. The sand used is Monterey No. 0/30, a standard material with well-known behavioral characteristics under static and dynamic loading. The test results showed the cone to produce consistent data and to readily distinguish the varying density configurations of the sand. Using the results in conventional methods for converting cone data into soil parameters yielded values consistent with those expected. Liquefaction potential predictions were less satisfying, although not unreasonable. Further research is needed in this area, both to check the reliability of the prediction procedures and to establish the cone's ability to achieve the desired objectives. / M.S.
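An illustrative sketch of the data-processing step referenced above: converting raw load-cell voltages sampled by the acquisition system into tip resistance. The calibration factor and cone geometry are assumed values, not the thesis's calibration.

```python
# Convert sampled load-cell voltages to cone tip resistance q_c.
import numpy as np

CAL_FACTOR = 500.0   # N per volt, assumed load-cell calibration
CONE_AREA = 1.0e-4   # m^2, tip area of a miniature cone (assumed)

raw_volts = np.array([0.51, 0.63, 0.72, 0.80])  # readings at successive depths
tip_force = CAL_FACTOR * raw_volts              # N
qc = tip_force / CONE_AREA / 1e6                # tip resistance, MPa

for i, q in enumerate(qc):
    print(f"sample {i}: qc = {q:.2f} MPa")
```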
16

Extraction of eigen-pairs from beam structures using an exact element based on a continuum formulation and the finite element method

Jara-Almonte, J. January 1985 (has links)
Studies of numerical methods to decouple structure and fluid interaction have reported the need for more precise approximations of higher structure eigenvalues and eigenvectors than are currently available from standard finite elements. The purpose of this study is to investigate hybrid finite element models composed of standard finite elements and exact elements for the prediction of higher structure eigenvalues and eigenvectors. An exact beam-element dynamic-stiffness formulation is presented for a plane Timoshenko beam with rotatory inertia. This formulation is based on a converted continuum transfer matrix and is incorporated into a typical finite element program for eigenvalue/vector problems. Hybrid models using the exact beam element generate transcendental, nonlinear eigenvalue problems. An eigenvalue extraction technique for this problem is also implemented. Also presented is a post-processing capability to reconstruct the mode shape of each exact element at as many discrete locations along the element as desired. The resulting code has advantages over both the standard transfer matrix method and the standard finite element method. The advantage over the transfer matrix method is that complicated structures may be modeled with the converted continuum transfer matrix without having to use branching techniques. The advantage over the finite element method is that fewer degrees of freedom are necessary to obtain good approximations for the higher eigenvalues. The reduction is achieved because the incorporation of an exact beam element is tantamount to the dynamic condensation of an infinity of degrees of freedom. Numerical examples are used to illustrate the advantages of this method. First, the eigenvalues of a fixed-fixed beam are found with purely finite element models, purely exact-element models, and a closed-form solution. Comparisons show that purely exact-element models give, for all practical purposes, the same eigenvalues as the closed-form solution. Next, a portal arch and a Vierendeel truss structure are modeled with hybrid models, purely finite element models, and purely exact-element models. The hybrid models do provide precise higher eigenvalues with fewer degrees of freedom than the purely finite element models. The purely exact-element models were the most economical for obtaining higher structure eigenvalues. The hybrid models were more costly than the purely exact-element models, but not as costly as the purely finite element models. / Ph. D.
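The transcendental eigenvalue problem can be illustrated with a toy case: for an exact dynamic-stiffness element, natural frequencies are roots of a transcendental determinant, found by scanning for sign changes and refining each bracket with a root solver. The fixed-free axial rod below is a stand-in for the thesis's Timoshenko beam element, and all properties are assumed.

```python
# Find natural frequencies as roots of a transcendental dynamic stiffness.
import numpy as np
from scipy.optimize import brentq

E, A, rho, L = 200e9, 1e-4, 7850.0, 1.0  # assumed rod properties (SI units)
c = np.sqrt(E / rho)                     # axial wave speed

def dyn_stiffness(omega: float) -> float:
    """Exact dynamic stiffness at the free end of a fixed-free rod; its
    zeros are the exact natural frequencies (a scalar 'determinant')."""
    return E * A * (omega / c) / np.tan(omega * L / c)

omegas = np.linspace(100.0, 50000.0, 20000)
vals = [dyn_stiffness(w) for w in omegas]
roots = []
for i in range(len(omegas) - 1):
    a, b = vals[i], vals[i + 1]
    # skip poles of tan(), where the sign also flips but values blow up
    if a * b < 0 and abs(a) < 1e9 and abs(b) < 1e9:
        roots.append(brentq(dyn_stiffness, omegas[i], omegas[i + 1]))

exact = [(2*n - 1) * np.pi * c / (2 * L) for n in (1, 2, 3)]
print("found :", [f"{r:.0f}" for r in roots[:3]])
print("exact :", [f"{w:.0f}" for w in exact])
```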
17

Bayesian methods for inverse problems

Lian, Duan January 2013 (has links)
This thesis describes two novel Bayesian methods, the Iterative Ensemble Square Root Filter (IEnSRF) and the Warp Ensemble Square Root Filter (WEnSRF), for solving the barcode detection problem, the deconvolution problem in well testing, and the history matching problem of facies patterns. For the barcode detection problem, at the expense of overestimating the posterior uncertainty, the IEnSRF efficiently achieves successful detections on very challenging real barcode images that the other considered methods and commercial software fail to detect. It also performs reliable detection on low-resolution images under poor ambient light conditions. For the deconvolution problem in well testing, the IEnSRF is capable of quantifying estimation uncertainty, incorporating the cumulative production data, and estimating the initial pressure, which were thought to be unachievable in the existing well-testing literature. The estimation results for the considered real benchmark data using the IEnSRF significantly outperform the existing methods in the commercial software. The WEnSRF is utilised for solving the history matching problem of facies patterns. Through the warping transformation, the WEnSRF performs adjustment on the reservoir features directly and is thus superior in estimating large-scale complicated facies patterns. It is able to provide accurate estimates of the reservoir properties robustly and efficiently with reasonably reliable prior reservoir structural information.
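For orientation, a minimal deterministic ensemble square-root update for a single scalar observation is sketched below, in the spirit of the EnSRF family these methods build on; the iterative (IEnSRF) and warping (WEnSRF) extensions are not reproduced, and all numbers are synthetic.

```python
# Ensemble square-root filter update for one scalar observation: the mean
# gets the full Kalman gain; perturbations get a scaled gain so the
# analysis covariance is correct without perturbing the observation.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens = 4, 50
X = rng.normal(0.0, 1.0, (n_state, n_ens))   # prior ensemble (columns)
H = np.array([1.0, 0.0, 0.0, 0.0])           # observe the first state variable
y_obs, R = 0.8, 0.1**2                       # observation and its error variance

x_mean = X.mean(axis=1)
Xp = X - x_mean[:, None]                     # ensemble perturbations
HXp = H @ Xp                                 # perturbations in observation space
PHt = Xp @ HXp / (n_ens - 1)                 # P H^T
HPHt = HXp @ HXp / (n_ens - 1)               # innovation variance (scalar)

K = PHt / (HPHt + R)                                 # Kalman gain
alpha = 1.0 / (1.0 + np.sqrt(R / (HPHt + R)))        # square-root scaling
x_mean_a = x_mean + K * (y_obs - H @ x_mean)         # mean update
Xp_a = Xp - alpha * np.outer(K, HXp)                 # perturbation update

X_a = x_mean_a[:, None] + Xp_a                       # analysis ensemble
print("prior mean:", x_mean[0], " posterior mean:", x_mean_a[0])
```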
18

On general error cancellation based logic transformations: the theory and techniques. / 基於錯誤取消的邏輯轉換: 理論與技術 / CUHK electronic theses & dissertations collection / Ji yu cuo wu qu xiao de luo ji zhuan huan: li lun yu ji shu

January 2011 (has links)
Yang, Xiaoqing. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 113-120). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
19

Test of the Generalizability Of "KBIT" (an Artificial Intelligence-Derived Assessment Instrument) Across Medical Problems

Papa, Frank J. 05 1900 (has links)
This study was motivated by concerns within the medical education community regarding the psychometric soundness of current assessment methodologies. More specifically, there is reason to seriously question the reliability and/or validity of these methodologies in assessing the intellectual skills upon which medical competence is based.
20

Machine Learning model applied to Reactor Dynamics / Maskininlärningsmodel Tillämpad på Reaktor Dynamik

Nikitopoulos, Dionysios Dimitrios January 2023 (has links)
This project's idea revolved around utilizing the most recent techniques in Machine Learning, Neural Networks, and Data processing to construct a model to be used as a tool to determine stability during core design work. This goal was achieved by collecting distribution profiles describing the core state from different steady states in five burn-up cycles in a reactor to serve as the dataset for training the model. An additional cycle was reserved as a blind testing dataset for the trained model to predict. The target variables for the predictions are the decay ratio and the frequency, since they describe the core stability. The distribution profiles extracted from the core simulator POLCA7 were subjected to many different data-processing techniques to isolate the variables most relevant to stability. The processed input variables were merged with the decay ratio and frequency for those cases, as calculated with POLCA-T. Two different Machine Learning models, one for each output parameter, were designed with Pytorch to analyze those labeled datasets. The goal of the project was to predict the output variables with an error lower than 0.1 for decay ratio and 0.05 for frequency. The models were able to predict the testing data with an RMSE of 0.0767 for decay ratio and 0.0354 for frequency. Finally, the trained models were saved and tasked with predicting the output parameters for a completely unknown cycle. The RMSE was even better for the unknown cycle: 0.0615 for decay ratio and 0.0257 for frequency.
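A hedged sketch of the kind of PyTorch regressor described: a small fully connected network mapping processed core-state features to the decay ratio. The layer sizes, feature count, and synthetic data are assumptions, not the thesis's actual design.

```python
# Train a small feed-forward regressor and report training RMSE.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features = 16                          # processed distribution-profile inputs
X = torch.randn(512, n_features)         # placeholder for POLCA7-derived data
y = X[:, :4].mean(dim=1, keepdim=True)   # synthetic stand-in for decay ratio

model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

rmse = torch.sqrt(loss_fn(model(X), y)).item()
print(f"training RMSE: {rmse:.4f}")      # thesis target was < 0.1 for decay ratio
```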
