31

Development of Innovative Hardwood Products

Jue Mo (18416235) 22 April 2024
<p dir="ltr">In response to the growing significance of wood as a sustainable resource and the challenges within the wood products industry, there is a pressing need for innovation and collaboration across sectors. This study underscores the importance of mapping the wood products industry to gain a comprehensive understanding of material flows, which is essential for educational and research endeavors. The findings aim to uncover new economic opportunities and advocate for sustainable resource management. To address the complexities of the wood products industry, we developed a Generic Map, including a version tailored for the U.S. hardwood sector. Moreover, Dive-in Chain Maps were introduced to elaborate on the main production chains: Sawmill (I), Veneer Mill (II), Reconstituted Wood Manufacturing (III), and Pulp and Paper Mill (IV).</p><p dir="ltr">The study suggests four strategies to augment the value of hardwood through production, design, material modification, and by-products management. We showcased some strategies through two case studies.</p><p dir="ltr">The first focuses on Cross-laminated Timber (CLT), demonstrating value addition to hardwood. We conducted a literature review on the availability of raw materials in the US region and evaluated their performance across various stages of laboratory testing. This was followed by evaluating the feasibility and environmental effects of utilizing yellow poplar for CLT production. Additionally, we compared the Life Cycle Analysis (LCA) outcomes of yellow poplar CLT with those of traditional softwood CLT. This comparison aims to provide further insights for developing future by-product management or end-of-life strategies.</p><p dir="ltr">The second case study examines thermal modification, proposing an innovative method for efficient thermal treatment and employing an Artificial Neural Network (ANN) model to analyze the correlation between temperature, duration, and color change. We also compared the physical and mechanical properties of surface thermally treated samples to those of traditionally treated ones, discussing how different thermal treatments affect material properties.</p><p dir="ltr">Our findings illuminate the path for effective material flow and utilization, unveiling avenues for innovation and the creation of high-value products. Furthermore, the study provides strategies for waste minimization and informed end-of-life decision-making, thereby enhancing circularity and sustainability in the wood products industry.</p>
32

Implementation of a "SOM" Neural Network on an Adaptable and Scalable Hardware Architecture Based on Network-on-Chip (NoC)

Abadi, Mehdi 07 July 2018
Since its introduction in 1982, Kohonen's Self-Organizing Map (SOM) has demonstrated its ability to classify and visualize multidimensional data in a variety of application fields. Hardware implementations of the SOM, by exploiting the high degree of parallelism inherent in the Kohonen algorithm, increase the performance of this neural model, often at the expense of flexibility. Software implementations, on the other hand, offer flexibility but are ill-suited to real-time applications because of their limited temporal performance. In this thesis we propose a distributed, adaptable, flexible, and scalable hardware architecture for the SOM, based on a Network-on-Chip (NoC) and designed for FPGA implementation. Building on this approach, we also propose a novel hardware architecture for a SOM whose structure grows during the learning phase.
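A minimal software sketch of one Kohonen update step helps show where the parallelism exploited by a NoC-based hardware design comes from: the per-neuron distance computations and weight updates are independent. The rectangular grid and exponential decay schedules below are common choices, assumed here rather than taken from the thesis.

```python
import numpy as np

def som_step(weights, x, t, n_iters, lr0=0.5, sigma0=3.0):
    """One Kohonen SOM update: find the best-matching unit (BMU), then pull
    each neuron toward the input, weighted by a Gaussian neighborhood around
    the BMU. weights has shape (rows, cols, dim)."""
    rows, cols, _ = weights.shape
    # BMU search: distances are independent per neuron, which is the
    # parallelism a NoC-based hardware implementation can exploit.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Decay learning rate and neighborhood radius over time (assumed schedule).
    lr = lr0 * np.exp(-t / n_iters)
    sigma = sigma0 * np.exp(-t / n_iters)
    # Grid distance of every neuron to the BMU drives the neighborhood weight.
    yy, xx = np.mgrid[0:rows, 0:cols]
    grid_d2 = (yy - bmu[0]) ** 2 + (xx - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[:, :, None]
    weights += lr * h * (x - weights)
    return bmu

# Usage: train an 8x8 map on random 3-D data.
rng = np.random.default_rng(0)
W = rng.random((8, 8, 3))
data = rng.random((1000, 3))
for t, x in enumerate(data):
    som_step(W, x, t, len(data))
```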
33

Geotechnical Site Characterization And Liquefaction Evaluation Using Intelligent Models

Samui, Pijush 02 1900
Site characterization is an important task in geotechnical engineering. In-situ tests based on the standard penetration test (SPT), the cone penetration test (CPT), and shear wave velocity surveys are popular among geotechnical engineers. Characterizing a site from a finite number of such in-situ measurements is the central task of probabilistic site characterization. These methods have been used to design future soil sampling programs for a site and to specify the soil stratification. It is never possible to know the geotechnical properties at every location beneath an actual site because, in order to do so, one would need to sample and/or test the entire subsurface profile. Therefore, the main objective of site characterization models is to predict subsurface soil properties from a minimum of in-situ test data. Predicting soil properties is difficult because of uncertainty, whose sources are spatial variability, measurement 'noise', measurement and model bias, and statistical error due to limited measurements. Liquefaction of soil is another major problem in geotechnical earthquake engineering. It is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The generation of excess pore pressure under undrained loading conditions is a hallmark of all liquefaction phenomena. The phenomenon was brought to the attention of engineers especially after the Niigata (1964) and Alaska (1964) earthquakes. Liquefaction can cause building settlement or tipping, sand boils, ground cracks, landslides, dam instability, highway embankment failures, and other hazards. Such damage is of great concern to public safety and is economically significant. Site-specific evaluation of the liquefaction susceptibility of sandy and silty soils is the first step in liquefaction hazard assessment. Many methods (intelligent models, as well as simple methods such as that of Seed and Idriss, 1971) have been suggested for evaluating liquefaction susceptibility from large databases of sites where soil has or has not liquefied. The rapid advance in information processing systems in recent decades has directed engineering research toward the development of intelligent models that can model natural phenomena automatically. In an intelligent model, a process of training is used to build up a model of the particular system, from which it is hoped to deduce responses of the system for situations that have yet to be observed. Intelligent models learn the input-output relationship from the data itself; the quantity and quality of the data govern their performance. The objective of this study is to develop intelligent models [geostatistics, artificial neural networks (ANN), and support vector machines (SVM)] to estimate the corrected standard penetration test value, Nc, in the three-dimensional (3D) subsurface of Bangalore. The database consists of 766 boreholes spread over a 220 sq. km area, with several SPT N values (uncorrected blow counts) in each of them, for a total of 3,015 N values in the 3D subsurface of Bangalore. To obtain the corrected blow counts, Nc, corrections for overburden stress, borehole size, sampler type, hammer energy, and connecting rod length were applied to the raw N values.
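These corrections are commonly applied as a chain of multiplicative factors, as in the (N1)60 formulation of Youd et al. (2001); the thesis does not spell out its exact factor values, so the numbers below are illustrative assumptions only.

```python
def corrected_spt(n_raw, cn, ce, cb, cr, cs):
    """Corrected SPT blow count, Nc = N * CN * CE * CB * CR * CS, where
    CN corrects for overburden stress, CE for hammer energy, CB for
    borehole diameter, CR for rod length, and CS for sampler type."""
    return n_raw * cn * ce * cb * cr * cs

# Hypothetical example: raw N = 20 at shallow depth, with an overburden
# factor of 1.1, standard-energy hammer, standard borehole and sampler,
# and a short-rod reduction of 0.85.
nc = corrected_spt(20, cn=1.1, ce=1.0, cb=1.0, cr=0.85, cs=1.0)
print(nc)  # 18.7
```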
Using this large database of Nc values in the 3D subsurface of Bangalore, three geostatistical models (simple kriging, ordinary kriging, and disjunctive kriging) have been developed. Simple and ordinary kriging produce linear estimators, whereas disjunctive kriging produces a nonlinear estimator. The semivariogram of the Nc data is used in the kriging theory to estimate values at points in the subsurface of Bangalore where field measurements are not available. The capability of disjunctive kriging as a nonlinear estimator and as an estimator of conditional probability is explored. A cross-validation (Q1 and Q2) analysis is also performed for the developed simple, ordinary, and disjunctive kriging models. The results indicate that the disjunctive kriging model performs better than both the simple and ordinary kriging models. This study also describes two ANN modelling techniques applied to predict Nc at any point in the 3D subsurface of Bangalore. The first technique uses a four-layer feed-forward backpropagation (BP) model to approximate the function Nc = f(x, y, z), where x, y, z are the coordinates in the 3D subsurface of Bangalore. The second technique uses a generalized regression neural network (GRNN), trained with suitable spread(s), to approximate the same function. In the BP model, the transfer functions used in the first and second hidden layers are tansig and logsig, respectively, with logsig in the output layer; the maximum number of epochs is set to 30,000, and a Levenberg-Marquardt algorithm is used for training. The performance of the models obtained with both techniques is assessed in terms of prediction accuracy: the BP ANN model outperforms the GRNN model and all kriging models. An SVM model, firmly grounded in statistical learning theory and using regression with the ε-insensitive loss function, has also been adopted to predict Nc at any point in the 3D subsurface of Bangalore. The SVM implements the structural risk minimization principle (SRMP), which has been shown to be superior to the more traditional empirical risk minimization principle (ERMP) employed by many other modelling techniques. The present study also highlights the advantage of the SVM over the developed geostatistical models (simple, ordinary, and disjunctive kriging) and the ANN models. Further in this thesis, liquefaction susceptibility is evaluated from SPT, CPT, and Vs data using BP-ANN and SVM. Intelligent models (based on ANN and SVM) are developed for predicting liquefaction susceptibility using SPT data from the 1999 Chi-Chi earthquake, Taiwan. Two models (MODEL I and MODEL II) are developed, using the SPT data from the work of Hwang and Yang (2001). In MODEL I, the cyclic stress ratio (CSR) and corrected SPT values (N1)60 are used to predict liquefaction susceptibility; in MODEL II, only the peak ground acceleration (PGA) and (N1)60 are used. Further, the generalization capability of MODEL II is examined using case histories available globally (global SPT data) from the work of Goh (1994). This study also examines the capabilities of ANN and SVM to predict the liquefaction susceptibility of soils from CPT data obtained from the 1999 Chi-Chi earthquake, Taiwan. For determining liquefaction susceptibility, both the ANN and the SVM use the classification technique.
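A rough software analogue of the BP regression model Nc = f(x, y, z) described above might look like the following sketch. It assumes synthetic coordinates and targets, and scikit-learn's MLPRegressor exposes neither per-layer tansig/logsig transfer functions nor Levenberg-Marquardt training, so tanh hidden layers and L-BFGS stand in for them here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the borehole database: (x, y, z) coordinates -> Nc.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3015, 3))
y = 30 * X[:, 2] + 5 * np.sin(10 * X[:, 0]) + rng.normal(0, 2, 3015)

# Two tanh hidden layers approximate the tansig/logsig pair; L-BFGS
# stands in for Levenberg-Marquardt, which scikit-learn does not provide.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 10), activation="tanh",
                 solver="lbfgs", max_iter=30000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.5, 0.5, 0.5]]))  # estimated Nc at an unsampled point
```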
The CPT data has been taken from the work of Ku et al. (2004). In MODEL I, cone tip resistance (qc) and CSR values are used to predict liquefaction susceptibility (with both ANN and SVM); in MODEL II, only PGA and qc are used. The developed MODEL II has also been applied to case histories available globally (global CPT data) from the work of Goh (1996). Intelligent models (ANN and SVM) have likewise been adopted for liquefaction susceptibility prediction based on shear wave velocity (Vs), using data collected from the work of Andrus and Stokoe (1997); the same procedures as for SPT and CPT were applied. The SVM outperforms the ANN for all three models based on SPT, CPT, and Vs data, and the CPT method gives better results than SPT and Vs for both the ANN and SVM models. For CPT and SPT, two input parameters {PGA and qc or (N1)60} are sufficient to determine liquefaction susceptibility with the SVM model. In this study, an attempt has also been made to evaluate geotechnical site characterization by carrying out in-situ tests with different techniques: CPT, SPT, and multichannel analysis of surface waves (MASW). For this purpose a typical site containing both a man-made homogeneous embankment and natural ground was selected. At this site, in-situ tests (SPT, CPT, and MASW) were carried out under the different ground conditions and the results compared: three continuous CPT profiles, fifty-four SPT tests, and nine MASW profiles with depth, covering both the homogeneous embankment and the natural ground. Relationships have been developed between the Vs, (N1)60, and qc values for this specific site; from the limited test results, a good correlation was found between qc and Vs. Liquefaction susceptibility is evaluated from the in-situ (N1)60, qc, and Vs data using the ANN and SVM models, and is shown to compare well with the Idriss and Boulanger (2004) approach based on SPT data. An SVM model has also been adopted to determine the overconsolidation ratio (OCR) from piezocone data. A sensitivity analysis has been performed to investigate the relative importance of each input parameter. The SVM model outperforms all available methods for OCR prediction.
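As an illustration of the two-input classification setup in MODEL II, the sketch below trains an SVM on synthetic (PGA, qc) pairs. The data, kernel choice, and labeling rule are assumptions for demonstration, not the thesis's calibrated model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: inputs are PGA (g) and cone tip resistance qc (MPa);
# label 1 = liquefied, 0 = not liquefied (an assumed decision boundary).
rng = np.random.default_rng(1)
pga = rng.uniform(0.1, 0.8, 300)
qc = rng.uniform(1, 20, 300)
liquefied = (pga * 20 - qc + rng.normal(0, 2, 300) > 0).astype(int)
X = np.column_stack([pga, qc])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, liquefied)
print(clf.predict([[0.4, 5.0]]))  # susceptibility at moderate PGA, low qc
```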
34

Fracture Characteristics Of Self Consolidating Concrete

Naddaf, Hamid Eskandari 07 1900
Self-consolidating concrete (SCC) has seen wide use in recent years for placement in congested reinforced concrete structures, and represents one of the most outstanding advances in concrete technology of the last two decades. The current work examines the mechanical properties of SCC, compares the fracture characteristics of notched and unnotched beams of plain concrete, and uses acoustic emission (AE) to understand the localization of crack patterns at different stages of loading. An artificial neural network (ANN) is proposed to predict the 28-day compressive strength of normal- and high-strength SCC and HPC with high-volume fly ash. The ANN is trained on data available in the literature for normal-volume fly ash, because data on SCC with high-volume fly ash are not available in sufficient quantity. Fracture characteristics of notched and unnotched beams of plain self-consolidating concrete were studied using acoustic emission to understand the localization of crack patterns at different stages. Building on this platform, further analysis has been carried out using moment tensor analysis as a new approach to evaluating fracture characteristics in terms of crack orientation and direction of crack propagation at the nano and micro levels. A B-value analysis (a b-value based on energy) is also carried out; this introduces the idea of performing the analysis on the basis of energy, which gives a clearer picture of the results than an analysis based on amplitudes. Further, a new concept is introduced to analyze cracks smaller than micro ('hepto' cracks) in solid materials. Each crack formation corresponds to an AE event and is processed and analyzed for crack orientation and crack volume at the hepto and micro levels using energy-based moment tensor analysis. Cracks tinier than microcracks are formed in large numbers at very early stages of loading, prior to peak load. The volume of hepto and micro cracks is difficult to measure physically, but it can be characterized from AE data by energy-based moment tensor analysis. It is conjectured that the ratio of the volume of hepto cracks to that of micro cracks could reach a critical value indicating the onset of microcracks after the formation of hepto cracks.
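For context, the conventional amplitude-based AE b-value derives from the Gutenberg-Richter relation log10 N(M) = a - bM. A minimal maximum-likelihood sketch is given below; the thesis's energy-based B-value variant differs, so this amplitude-based estimator is an assumption for illustration only.

```python
import numpy as np

def ae_b_value(amplitudes_db, m_min=None):
    """Maximum-likelihood b-value (Aki's estimator) from AE amplitudes.
    AE 'magnitude' is conventionally taken as amplitude_dB / 20."""
    m = np.asarray(amplitudes_db) / 20.0
    if m_min is None:
        m_min = m.min()
    m = m[m >= m_min]
    # b = log10(e) / (mean magnitude - completeness magnitude)
    return np.log10(np.e) / (m.mean() - m_min)

# Example: 500 synthetic AE hits with exponentially distributed amplitudes.
rng = np.random.default_rng(0)
amps = 40 + rng.exponential(10, 500)
print(ae_b_value(amps))
```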
35

Applied Machine Learning Predicts the Postmortem Interval from the Metabolomic Fingerprint

Arpe, Jenny January 2024
In forensic autopsies, accurately estimating the postmortem interval (PMI) is crucial. Traditional methods, relying on physical parameters and police data, often lack precision, particularly once roughly two days have passed since death. Newer methods increasingly focus on analyzing postmortem metabolomics in biological systems, which act as a 'fingerprint' of ongoing processes influenced by internal and external molecules. By carefully analyzing these metabolomic profiles, which span a diverse range of information from events preceding death to postmortem changes, there is potential to provide more accurate estimates of the PMI. Until recently, the limited availability of real human data hindered comprehensive investigation. Large-scale metabolomic data collected by the National Board of Forensic Medicine (RMV, Rättsmedicinalverket) presents a unique opportunity for predictive analysis in forensic science, enabling innovative approaches to improving PMI estimation. However, the metabolomic data is large, complex, and potentially nonlinear, making it difficult to interpret. This underscores the importance of effectively employing machine learning algorithms to manage metabolomic data for PMI prediction, the primary focus of this project.

In this study, a dataset consisting of 4,866 human samples and 2,304 metabolites from the RMV was used to train models capable of predicting the PMI. Random Forest (RF) and Artificial Neural Network (ANN) models were employed for PMI prediction. Furthermore, feature selection and the incorporation of sex and age into the model were explored to improve the neural network's performance.

This master's thesis shows that the ANN consistently outperforms the RF in PMI estimation, achieving an R² of 0.68 and an MAE of 1.51 days, compared to the RF's R² of 0.43 and MAE of 2.0 days, across the entire PMI interval. Additionally, feature selection indicates that only 35% of the metabolites are needed to maintain predictive accuracy. Furthermore, Principal Component Analysis (PCA) reveals that these informative metabolites are primarily located within a specific cluster on the first and second principal components (PCs), suggesting a need for further research into the biological context of these metabolites.

In conclusion, the dataset has proven valuable for predicting PMI. This indicates significant potential for employing machine learning models in PMI estimation, thereby assisting forensic pathologists in determining the time of death. Notably, the model shows promise in surpassing current methods and filling crucial gaps in the field, an important step toward accurate PMI estimation in forensic practice. This project suggests that machine learning will play a central role in assisting with determining time since death in the future.
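A minimal sketch of the RF-versus-ANN comparison, assuming a synthetic stand-in for the RMV data (the real dataset of 4,866 samples by 2,304 metabolites is not public, and these model settings are illustrative, not the thesis's tuned configurations), could look like this with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: samples x metabolite features, target = PMI in days,
# with only a small subset of features carrying signal (cf. the 35% finding).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))
pmi = X[:, :20].sum(axis=1) + rng.normal(0, 1, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, pmi, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, r2_score(y_te, pred), mean_absolute_error(y_te, pred))
```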
