About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Evaluation of the Accuracy of Non-Destructive Testing (NDT) Methods for the Condition Assessment of Bridge Decks

Elijah Donovan Jennings (19334296) 06 August 2024 (has links)
Bridge decks in Indiana bear the brunt of the deterioration mechanisms associated with structural deficiencies. These deficiencies do not always present themselves in noticeable ways; however, their detection is imperative to the performance of the deck and to the bridge's overall health. Inspecting these bridge decks is not only a time-consuming but also a dangerous process, as maintenance of traffic (MOT) from the state's department of transportation (DOT) is not a viable option for most inspections. Engineers therefore take unnecessary risks to inspect these decks for deterioration. The most detrimental of these structural deficiencies, delaminations, do not always offer visual confirmation, leading to more time spent in the roadway sounding for these defects. This thesis presents a state-of-the-art review of previous NDT studies related to bridge structures, along with validation of their results. Background information on all of the testing methods being evaluated is also provided. The thesis then presents an in-depth investigation, using multiple consultants and a variety of NDT methods, to assess the viability of delamination detection with these methods. The methods were verified through coring at select locations on the deck. Finally, the thesis discusses the practical implications of the NDT methods that provide an accurate level of delamination detection for project- and network-level inspections.
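Verification against cores reduces, at its simplest, to comparing each NDT method's delamination calls with the core ground truth at the same locations. A minimal sketch with hypothetical core IDs and outcomes (none of these values come from the study):

```python
# Score an NDT method's delamination calls against core ground truth.
# Core IDs and outcomes below are hypothetical illustrations.
cores     = {"C1": True, "C2": False, "C3": True, "C4": False, "C5": True}   # True = delaminated
ndt_calls = {"C1": True, "C2": True,  "C3": True, "C4": False, "C5": False}  # method's calls

tp = sum(cores[c] and ndt_calls[c] for c in cores)          # found real delaminations
tn = sum(not cores[c] and not ndt_calls[c] for c in cores)  # correctly called sound
fp = sum(not cores[c] and ndt_calls[c] for c in cores)      # false alarms
fn = sum(cores[c] and not ndt_calls[c] for c in cores)      # missed delaminations

accuracy  = (tp + tn) / len(cores)   # overall agreement with cores
precision = tp / (tp + fp)           # fraction of calls that were real
recall    = tp / (tp + fn)           # fraction of delaminations detected
```

Reporting precision and recall separately matters here: a method that calls everything delaminated has perfect recall but wastes coring budget, which is exactly the project-level/network-level trade-off the thesis examines.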
132

Time Domain Model and Experimental Validation of the Non-Contact Surface Wave Ultrasonic Scanner

Li, Ji 20 December 2017 (has links)
This research proposes a time-domain model for predicting the acoustic field in an air-coupled, non-contact, surface wave ultrasonic scanner. The model takes into account the finite size of the receiver aperture, attenuation in air, and the electrical response he of the emitter-receiver set. The attenuation is characterized by a causal time-domain Green's function, allowing the wideband attenuation of a lossy medium (air and the solid tested sample) obeying a power law to be modelled. The response he is recovered experimentally using an original, specially developed procedure that includes deconvolution of the air-absorption effects. The model is implemented numerically using a Discrete Representation approach and validated experimentally. A chirp technique is used to improve the signal-to-noise ratio. It is shown that when the attenuation in air, the receiver size, and the accurately recovered response he are correctly taken into account, the system's impulse response can be predicted with errors of 2-5%. Including the receiver's size in the model is crucial to the accuracy of near-field predictions. The computation time is considerably shorter than that required by FEM methods. Using this model, the influence of typical user-defined scanner settings is investigated, and the conclusions are formulated as recommendations for optimal settings.
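The chirp technique mentioned above improves the signal-to-noise ratio by pulse compression: correlating the received signal with the transmitted chirp concentrates the sweep's energy into a sharp peak at the echo's arrival time. A minimal sketch with invented sweep parameters (not the scanner's actual settings):

```python
import numpy as np

# Linear chirp: all parameters are illustrative assumptions.
fs = 1_000_000                      # sample rate, Hz
T = 1e-3                            # chirp duration, s
t = np.arange(int(T * fs)) / fs
f0, f1 = 50e3, 150e3                # sweep band, Hz
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

# Synthetic received trace: attenuated, delayed echo buried in noise.
delay = 400                          # echo arrival, samples
rx = np.zeros(4096)
rx[delay:delay + len(chirp)] += 0.2 * chirp
rx += 0.05 * np.random.default_rng(0).normal(size=rx.size)

# Matched filter: cross-correlate with the transmitted chirp.
mf = np.correlate(rx, chirp, mode="valid")
est = int(np.argmax(np.abs(mf)))     # estimated arrival sample
```

The correlation peak stands far above the noise floor even though the raw echo (amplitude 0.2) is only a few times the noise level, which is the compression gain the abstract exploits.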
133

Geotechnical Site Characterization And Liquefaction Evaluation Using Intelligent Models

Samui, Pijush 02 1900 (has links)
Site characterization is an important task in geotechnical engineering. In-situ tests based on the standard penetration test (SPT), the cone penetration test (CPT), and shear wave velocity surveys are popular among geotechnical engineers. Site characterization using any of these properties, based on a finite number of in-situ test data, is an imperative task in probabilistic site characterization. These methods have been used to design future soil sampling programs for a site and to specify the soil stratification. It is never possible to know the geotechnical properties at every location beneath an actual site because, to do so, one would need to sample and/or test the entire subsurface profile. The main objective of site characterization models is therefore to predict the subsurface soil properties with minimum in-situ test data. Predicting soil properties is a difficult task due to uncertainties: spatial variability, measurement 'noise', measurement and model bias, and statistical error due to limited measurements. Liquefaction of soil is another major problem in geotechnical earthquake engineering. It is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The generation of excess pore pressure under undrained loading conditions is a hallmark of all liquefaction phenomena. This phenomenon was brought to the attention of engineers after the Niigata (1964) and Alaska (1964) earthquakes. Liquefaction can cause building settlement or tipping, sand boils, ground cracks, landslides, dam instability, highway embankment failures, and other hazards. Such damage is of great concern to public safety and is economically significant. Site-specific evaluation of the liquefaction susceptibility of sandy and silty soils is the first step in liquefaction hazard assessment.
Many methods (intelligent models as well as simple methods such as that suggested by Seed and Idriss, 1971) have been proposed to evaluate liquefaction susceptibility based on the large body of data from sites where soil has or has not liquefied. The rapid advance in information processing systems in recent decades has directed engineering research towards the development of intelligent models that can model natural phenomena automatically. In an intelligent model, a process of training is used to build up a model of the particular system, from which it is hoped to deduce responses of the system for situations that have yet to be observed. Intelligent models learn the input-output relationship from the data itself; the quantity and quality of the data govern the performance of the model. The objective of this study is to develop intelligent models [geostatistics, artificial neural networks (ANN), and support vector machines (SVM)] to estimate the corrected standard penetration test (SPT) value, Nc, in the three-dimensional (3D) subsurface of Bangalore. The database consists of 766 boreholes spread over a 220 sq km area, with several SPT N values (uncorrected blow counts) in each of them, giving a total of 3015 N values in the 3D subsurface of Bangalore. To obtain the corrected blow counts, Nc, various corrections, such as for overburden stress, borehole size, sampler type, hammer energy, and connecting rod length, have been applied to the raw N values. Using this large database of Nc values, three geostatistical models (simple kriging, ordinary kriging, and disjunctive kriging) have been developed. Simple and ordinary kriging produce linear estimators, whereas disjunctive kriging produces a nonlinear estimator. Knowledge of the semivariogram of the Nc data is used in kriging theory to estimate values at points in the subsurface of Bangalore where field measurements are not available.
The capability of disjunctive kriging as a nonlinear estimator and as an estimator of conditional probability is explored. A cross-validation (Q1 and Q2) analysis is also carried out for the developed simple, ordinary, and disjunctive kriging models. The results indicate that the disjunctive kriging model performs better than both the simple and ordinary kriging models. This study also describes two ANN modelling techniques applied to predict Nc at any point in the 3D subsurface of Bangalore. The first technique uses a four-layer feed-forward backpropagation (BP) model to approximate the function Nc = f(x, y, z), where x, y, z are the coordinates in the 3D subsurface of Bangalore. The second technique uses a generalized regression neural network (GRNN) trained with a suitable spread(s) to approximate the same function. In the BP model, the transfer functions used in the first and second hidden layers are tansig and logsig, respectively, and the logsig transfer function is used in the output layer. The maximum number of epochs was set to 30000, and a Levenberg-Marquardt algorithm was used for training. The performance of the models obtained using both techniques is assessed in terms of prediction accuracy; the BP ANN model outperforms the GRNN model and all the kriging models. An SVM model, which is firmly grounded in statistical learning theory and uses a regression technique based on an ε-insensitive loss function, has also been adopted to predict Nc at any point in the 3D subsurface of Bangalore. The SVM implements the structural risk minimization principle (SRMP), which has been shown to be superior to the more traditional empirical risk minimization principle (ERMP) employed by many other modelling techniques. The present study also highlights the capability of SVM over the developed geostatistical models (simple, ordinary, and disjunctive kriging) and the ANN models.
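As an illustration of the kriging step described above, ordinary kriging estimates Nc at an unsampled point as a weighted average of nearby measurements, with weights obtained from the semivariogram by solving a small linear system. A minimal numpy sketch, assuming an exponential variogram with invented parameters (not those fitted to the Bangalore data):

```python
import numpy as np

def exp_variogram(h, sill, rng_m, nugget=0.0):
    """Exponential semivariogram model (assumed form; fitted to data in practice)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_m))

def ordinary_kriging(coords, values, target, sill, rng_m, nugget=0.0):
    n = len(values)
    # Pairwise data-to-data distances, plus the Lagrange row/column for
    # the unbiasedness constraint (weights sum to 1).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng_m, nugget)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=1), sill, rng_m, nugget)
    w = np.linalg.solve(A, b)        # kriging weights + Lagrange multiplier
    estimate = w[:n] @ values
    variance = w @ b                 # kriging (estimation) variance
    return estimate, variance

# Illustrative borehole locations (m) and corrected blow counts.
coords = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
nc_vals = np.array([12.0, 18.0, 15.0, 22.0])
est, var = ordinary_kriging(coords, nc_vals, np.array([50.0, 50.0]), sill=25.0, rng_m=200.0)
```

At a sampled location the estimate reproduces the measured value with zero kriging variance (kriging is an exact interpolator when the nugget is zero), which is a useful sanity check on any implementation.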
Further in this thesis, liquefaction susceptibility is evaluated from SPT, CPT, and Vs data using BP-ANN and SVM. Intelligent models (based on ANN and SVM) are developed for the prediction of liquefaction susceptibility using SPT data from the 1999 Chi-Chi earthquake, Taiwan. Two models (MODEL I and MODEL II) are developed, using the SPT data from the work of Hwang and Yang (2001). In MODEL I, the cyclic stress ratio (CSR) and corrected SPT values (N1)60 are used to predict liquefaction susceptibility. In MODEL II, only the peak ground acceleration (PGA) and (N1)60 are used. Further, the generalization capability of MODEL II is examined using different case histories available globally (global SPT data) from the work of Goh (1994). This study also examines the capabilities of ANN and SVM to predict the liquefaction susceptibility of soils from CPT data obtained from the 1999 Chi-Chi earthquake. For the determination of liquefaction susceptibility, both ANN and SVM use the classification technique. The CPT data are taken from the work of Ku et al. (2004). In MODEL I, cone tip resistance (qc) and CSR values are used for the prediction of liquefaction susceptibility (using both ANN and SVM). In MODEL II, only PGA and qc are used. The developed MODEL II is also applied to different case histories available globally (global CPT data) from the work of Goh (1996). Intelligent models (ANN and SVM) are also adopted for liquefaction susceptibility prediction based on shear wave velocity (Vs), with Vs data collected from the work of Andrus and Stokoe (1997); the same procedures as for SPT and CPT are applied. SVM outperforms the ANN model for all three models based on SPT, CPT, and Vs data. The CPT method gives better results than SPT and Vs for both the ANN and SVM models.
For CPT and SPT, two input parameters, PGA and qc or (N1)60, are sufficient to determine liquefaction susceptibility using the SVM model. In this study, an attempt has also been made to evaluate geotechnical site characterization by carrying out in-situ tests using different techniques: CPT, SPT, and multichannel analysis of surface waves (MASW). For this purpose a typical site was selected containing both a man-made homogeneous embankment and natural ground. At this site, in-situ tests (SPT, CPT, and MASW) were carried out in different ground conditions and the results compared: three continuous CPT profiles, fifty-four SPT tests, and nine MASW profiles with depth, covering both the homogeneous embankment and the natural ground. Relationships have been developed between the Vs, (N1)60, and qc values for this specific site. From the limited test results, a good correlation was found between qc and Vs. Liquefaction susceptibility is evaluated using the in-situ test data from (N1)60, qc, and Vs with the ANN and SVM models, and is shown to compare well with the approach of Idriss and Boulanger (2004) based on SPT test data. An SVM model has also been adopted to determine the overconsolidation ratio (OCR) based on piezocone data. A sensitivity analysis has been performed to investigate the relative importance of each of the input parameters. The SVM model outperforms all the available methods for OCR prediction.
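The MODEL II classification setup, liquefied or not from PGA and (N1)60, can be sketched with a linear soft-margin SVM trained by the Pegasos sub-gradient method. Everything below is synthetic: the data and the rule generating the labels are invented for illustration (not the Chi-Chi records), and a real study would use a full SVM library with kernel options:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (PGA, (N1)60) pairs with a made-up linear boundary:
# strong shaking plus low corrected blow count -> liquefied (+1).
n = 300
pga = rng.uniform(0.05, 0.6, n)            # peak ground acceleration, g
n160 = rng.uniform(2.0, 40.0, n)           # corrected SPT blow count
score = 60.0 * pga - n160                  # illustrative rule, not a real criterion
keep = np.abs(score) > 3.0                 # keep a margin so classes are separable
y = np.where(score[keep] > 0, 1.0, -1.0)
X = np.column_stack([pga[keep], n160[keep]])
X = (X - X.mean(0)) / X.std(0)             # standardize features
X = np.column_stack([X, np.ones(len(X))])  # bias absorbed as a constant feature

# Pegasos: stochastic sub-gradient descent on the hinge-loss SVM objective.
lam, w = 0.01, np.zeros(3)
for t in range(1, 20001):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    if y[i] * (X[i] @ w) < 1.0:            # margin violated: hinge gradient step
        w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
    else:                                   # only the regularizer shrinks w
        w = (1.0 - eta * lam) * w

train_acc = float(np.mean(np.sign(X @ w) == y))
```

The two-feature setup mirrors the abstract's point that PGA and (N1)60 alone suffice as inputs; the SVM only has to learn a separating boundary in that plane.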
134

Wave-Cavity Resonator: Experimental Investigation of an Alternative Energy Device

Reaume, Jonathan Daniel 21 December 2015 (has links)
A wave cavity resonator (WCR) is investigated to determine the device's suitability as an energy harvester in rivers or tidal flows. The WCR consists of coupling between the self-excited oscillations of a turbulent open-channel water flow along the opening of a rectangular cavity and the standing gravity wave in the cavity. The device was investigated experimentally for a range of inflow velocities, cavity opening lengths, and characteristic water depths. Determining appropriate models and empirical relations for the system over a range of depths supports accurate design of prototypes and of tools for assessing a particular river or tidal flow as a potential WCR site. The performance of the system when coupled with a wave absorber/generator is also evaluated for a range of piston strokes relative to the cavity wave height. Video recording of the oscillating free surface inside the resonator cavity, in conjunction with free-surface elevation measurements from a capacitive wave gauge, provides a representation of the resonant wave modes of the cavity as well as the degree of flow-wave coupling in terms of the amplitude and quality factor of the associated spectral peak. Moreover, digital particle image velocimetry (PIV) provides insight into the evolution of the vortical structures that form across the cavity opening. Coherent oscillations were attainable for a wide range of water depths. Variation of the water depth affected the degree of coupling between the shear layer oscillations and the gravity wave, as well as the three-dimensionality of the flow structure. The power investigation, conducted with the addition of a load cell and a linear-table-driven piston, suggests the device is likely limited to powering low-power instrumentation unless it can be up-scaled.
Up-scaling of the system, while requiring additional design considerations, is not unreasonable; large-scale systems of resonant water waves and the generation of large-scale vortical structures due to tidal or river flows are even observed naturally.
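For reference, the standing gravity wave that couples to the shear-layer oscillation has natural frequencies governed by the finite-depth dispersion relation ω_n² = g k_n tanh(k_n h), with k_n = nπ/L for the n-th longitudinal mode of a cavity of length L at depth h. A short sketch (the dimensions below are illustrative, not those of the experiment):

```python
import math

def cavity_mode_freq(n, L, h, g=9.81):
    """Frequency (Hz) of the n-th longitudinal standing gravity-wave mode
    of a cavity of length L (m) at water depth h (m)."""
    k = n * math.pi / L                        # mode wavenumber, rad/m
    omega = math.sqrt(g * k * math.tanh(k * h))  # finite-depth dispersion relation
    return omega / (2.0 * math.pi)

# Illustrative cavity: 0.5 m opening, 0.1 m depth.
f1 = cavity_mode_freq(1, L=0.5, h=0.1)
```

This is why depth variation changes the coupling: as h shrinks, tanh(kh) pulls the mode frequency down and can move it into or out of the band of the shear-layer instability.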
135

Installation and Operation of Air-Sea Flux Measuring System on Board Indian Research Ships

Kumar, Vijay January 2017 (has links) (PDF)
Exchange of mass (water vapor), momentum, and energy between the atmosphere and ocean has a profound influence on weather and climate. This exchange takes place at the air-sea interface, which is part of the marine atmospheric boundary layer. Various empirical relations are used for estimating these fluxes in numerical weather and climate models, but their accuracies are not sufficiently verified or tested over the Indian Ocean. The main difficulty is that vast areas of the open oceans are not easily accessible. The marine environment is very corrosive, and unattended long-term, accurate measurements are extremely expensive. India has research ships that spend most of their time in the seas around India, but that opportunity is yet to be exploited. To address this, an air-sea flux measurement system for operation on board research ships was planned. The system was tested on board the Indian research vessel ORV Sagar Kanya during its cruise SK-296 in the Bay of Bengal (BoB) in July-August 2012, and on the NIO ship Sindhu Sadhana in June-July 2016. The complete set included instruments for measuring wind velocity (speed and direction), air and water temperature, humidity, pressure, all components of radiation, and rainfall. In addition, ship motion was recorded at the required sampling rate to correct the wind velocity. The setup facilitates the direct computation of sensible and latent heat fluxes using the eddy covariance method. This thesis describes the design and installation of the meteorological and ship motion sensors on board the research ships, data collection and quality control, the computation of the fluxes of heat, moisture, and momentum using the eddy covariance method, and their comparison with fluxes derived from the bulk method. A set of sensors (hereafter, the flux measuring system) was mounted on a retractable boom, ~7 m long, forward of the bow to minimize the flow disturbance caused by the ship's superstructure.
The wind observed in the ship frame was corrected for ship motion contamination. During the CTCZ cruise period the true mean wind speed was over 10 m/s and the true wind direction was south/south-westerly. True wind speed is computed by combining data from the anemometer, a compass connected to the AWS, and a GPS. Turbulent fluxes were computed from motion-corrected time series of high-frequency velocity, water vapor, and air temperature data. The covariance latent heat flux, sensible heat flux, and wind stress were obtained by cross-correlating the motion-corrected vertical velocity with, respectively, fast humidity fluctuations measured with an IR hygrometer, temperature fluctuations from the sonic anemometer, and motion-corrected horizontal wind fluctuations from the sonic anemometer. During the first attempt, made in July-August 2012 as part of a cruise of the CTCZ monsoon research program, observations were mainly taken in the northern Bay of Bengal. The mean air temperature and surface pressure were ~28 °C and ~998 hPa, respectively. Relative humidity was ~80%. Average wind speed varied in the range 4-12 m/s. The mean latent heat flux was 145 W/m2, the sensible heat flux was ~3 W/m2, and the average sea-air temperature difference was ~0.7 °C. The Bay of Bengal Boundary Layer Experiment (BoBBLE) was conducted during June-July 2016, for which the NIO research ship Sindhu Sadhana was deployed. The same suite of sensors installed during CTCZ was used during BoBBLE. During daytime, peaks of the hourly net heat flux (Qnet) were around 600 W/m2 (positive if into the sea), whereas night-time values were around -250 W/m2. Sea surface temperature was always >28 °C and the maximum air temperature exceeded 29 °C. During the experimental period the mean Qnet was around -24 W/m2 from both the eddy covariance and conventional bulk methods, but there are significant differences on individual days. The new flux system gives fluxes superior to what was available before.
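The eddy covariance computation described above reduces to the covariance of the motion-corrected vertical velocity with the fast scalar fluctuations, scaled by air density and the appropriate thermodynamic constant: H = ρ cp ⟨w'T'⟩ and LE = ρ Lv ⟨w'q'⟩. A minimal sketch on synthetic 10 Hz series (all signal statistics below are invented, not cruise data):

```python
import numpy as np

rho = 1.2      # air density, kg/m^3 (assumed)
cp = 1005.0    # specific heat of air, J/(kg K)
Lv = 2.5e6     # latent heat of vaporization, J/kg

rng = np.random.default_rng(0)
n = 20 * 60 * 10                                      # 20-minute block at 10 Hz
w = rng.normal(0.0, 0.3, n)                           # vertical wind, m/s
T = 28.0 + 0.02 * w / 0.3 + rng.normal(0, 0.05, n)    # sonic temperature, deg C
q = 0.018 + 4e-4 * w / 0.3 + rng.normal(0, 1e-4, n)   # specific humidity, kg/kg

def eddy_flux(w, x):
    """Covariance of vertical wind with a scalar (fluctuations about the block mean)."""
    return float(np.mean((w - w.mean()) * (x - x.mean())))

H  = rho * cp * eddy_flux(w, T)   # sensible heat flux, W/m^2
LE = rho * Lv * eddy_flux(w, q)   # latent heat flux, W/m^2
```

Block averaging (here 20 minutes) is what separates the turbulent fluctuations from the mean state; on a ship, w must already be motion-corrected before this step, as the abstract emphasizes.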
136

Neural Modeling of Electromagnetic Fields in Cars

Kotol, Martin January 2018 (has links)
This dissertation deals with the use of artificial neural networks for modeling electromagnetic fields inside cars. The first part of the work focuses on an analytical description of electromagnetic wave propagation through the interior using the Norton surface wave. The following part is devoted to practical measurements and the verification of the analytical models. The practical measurements served as the source of training and verification data for the neural networks. The work focuses on the frequency bands 3 to 11 GHz and 55 to 65 GHz.
137

Cooperative wireless channel characterization and modeling: application to body area and cellular networks

Liu, Lingfeng 23 March 2012 (has links)
Cooperative wireless communication is an attractive technique for exploiting spatial channel resources through coordination across multiple links, which can greatly improve communication performance over single links. In this dissertation, we study cooperative multi-link channel properties by geometric approaches in body area networks (BANs) and cellular networks, respectively.

For BANs, the dynamic narrowband on-body channels under body motion are modeled statistically in terms of their temporal and spatial fading, based on anechoic-chamber and indoor measurements. Common body scattering is observed to create inter-link correlation between closely spaced links and between links whose communication nodes move synchronously. An analytical model is developed to explain the physical mechanisms of the dynamic body scattering. The impact of the on-body channels on simple cooperation protocols is evaluated using realistic measurements.

For cellular networks, the cluster-level multi-link COST 2100 MIMO channel model is developed, with concrete modeling concepts, complete parameterization and implementation methods, and a structure compatible with both single-link and multi-link scenarios. Cluster link-commonness is introduced into the model to describe multi-link properties. Multi-link effects are also evaluated in a distributed MIMO system by comparing its sum-rate capacity at different ratios of cluster link-commonness.
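The sum-rate capacity comparison mentioned above typically evaluates the standard MIMO log-det formula, C = log2 det(I + (ρ/Nt) H Hᴴ) for equal power allocation, over channel realizations drawn from the model. A minimal numpy sketch using an i.i.d. Rayleigh channel as a stand-in for COST 2100 realizations:

```python
import numpy as np

def sum_rate_capacity(H, snr):
    """MIMO capacity (bits/s/Hz) with equal power across Nt transmit antennas
    and no channel knowledge at the transmitter."""
    nr, nt = H.shape
    M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)   # Hermitian positive definite
    sign, logdet = np.linalg.slogdet(M)
    return logdet / np.log(2.0)

# One i.i.d. Rayleigh-fading 4x4 realization (unit average power per entry).
rng = np.random.default_rng(0)
H4 = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2.0)

c_lo = sum_rate_capacity(H4, snr=1.0)     # 0 dB SNR
c_hi = sum_rate_capacity(H4, snr=100.0)   # 20 dB SNR
```

In a multi-link study, the same formula is averaged over many realizations for each ratio of cluster link-commonness, since shared clusters correlate the per-link matrices and reduce the achievable sum rate.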
