1 |
Solidification of metals and alloys far from equilibrium / Evans, Paul Vincent, January 1988
No description available.
2 |
Gear condition monitoring by wavelet transform of vibration signals / Lin, Shui-Town, January 1996
No description available.
3 |
Resynthesis of speech / Owens, F. J., January 1980
No description available.
4 |
Advanced Pre-processing Techniques for cloud-based Degradation Detection using Artificial Intelligence (AI) / Seddik, Essam, January 2021
Predictive maintenance is extremely important to fleet owners. On-duty automobile engine failures add the cost of extra towing, gas, and labor, which can amount to millions of dollars every year. Early knowledge of upcoming failures helps reduce these expenses, so companies invest considerably in fault detection and diagnosis (FDD) systems. Artificial Intelligence (AI) is increasingly used in data-driven, signal-based FDD because it requires less labor and equipment and, since it can operate continuously, results in higher productivity. This research offers AI-based solutions to detect and diagnose the degradation of three Internal Combustion Engine (ICE) parts that may cause on-duty failures: the lead-acid accessory battery, the spark plugs, and the Exhaust Gas Recirculation (EGR) valve. Since the goal behind most FDD systems is cost reduction, it is important to keep the cost of the FDD test itself low; therefore, all the FDD solutions proposed in this research rely on three types of built-in sensors: the battery voltage sensor, the knock sensors, and the speed sensor. Furthermore, the engine database, the Machine Learning (ML) and Deep Learning (DL) models, and the virtual operating machines were all stored and operated in the cloud.
In this research, eight ML and DL models are proposed to detect degradation in the three vehicle parts mentioned above. Additionally, novel advanced pre-processing approaches were designed to enhance the performance of the models. All the developed models showed excellent detection accuracies while classifying engine data obtained under artificially and physically induced fault conditions. Because some variant data samples could not be detected correctly, owing to experimental flaws, defective sensors, and changes in temperature and humidity, novel pre-processing methods were proposed for Long Short-Term Memory networks (LSTM-RNN) and Convolutional Neural Networks (CNN); these solved the data variability problem and outperformed the previous ML/DL models. / Thesis / Doctor of Philosophy (PhD)
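The abstract does not detail the pre-processing itself, but a typical pipeline for this kind of sensor-based degradation detection segments the raw voltage, knock, and speed signals into fixed-length windows and standardizes each window before it is fed to an LSTM or CNN. The sketch below is a minimal illustration of that general idea, not the dissertation's actual method; the window length, overlap, and simulated sensor trace are assumptions.

```python
import numpy as np

def window_and_standardize(signal, window_len=512, hop=256):
    """Split a 1-D sensor signal into overlapping windows and z-score each window.

    Per-window standardization is one simple way to reduce sensitivity to
    slow drifts (e.g., temperature-related baseline shifts) before the
    windows are passed to an LSTM or CNN classifier.
    """
    windows = []
    for start in range(0, len(signal) - window_len + 1, hop):
        w = np.asarray(signal[start:start + window_len], dtype=float)
        std = w.std()
        windows.append((w - w.mean()) / std if std > 0 else w - w.mean())
    return np.stack(windows)  # shape: (num_windows, window_len)

# Hypothetical usage with a simulated battery-voltage trace
voltage = 12.6 + 0.05 * np.random.randn(10_000)
X = window_and_standardize(voltage)
print(X.shape)
```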
5 |
Assessment of processing techniques for Orthopaedic Composites / Hedjazi, Ghazal, January 2009
Metallic implants have been used widely in many orthopaedic applications. Titanium, ceramics, medical-grade titanium, and other metal alloys are inserted into large bones as artificial joints, and plates and bars are attached to bones to facilitate the healing of fractures. The disadvantages of metal implants, however, are corrosion and the release of ions, so there is a need for new orthopaedic materials such as composites, which also have a density closer to that of natural bone. This project is part of the European project NEWBONE and is based on the manufacturing and processing of glass fiber reinforced composite and the assessment of its properties. The goal of the project is to manufacture composite parts in the lab in different designs and dimensions suitable for mechanical and chemical tests. The theoretical work deals with the processing methods and with medical composites, medical devices, plastics, reinforcement of medical composites, PEEK, carbon fibers, and other materials. The glass fibers are impregnated with a dental curing resin. The residual void content, glass fiber content, and chemical and mechanical properties are estimated by ASTM standard methods. Results are presented according to the mechanical and chemical evaluation of composite performance and show the best choice of composite parts for improving future orthopaedic applications.
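As a rough illustration of the kind of ASTM-style property estimate mentioned above, the sketch below computes residual void content from the measured composite density and the constituent densities and weight fractions, in the spirit of ASTM D2734. The density and weight-fraction values are hypothetical, and the actual test procedure used in the thesis may differ.

```python
def void_content_percent(rho_measured, w_fiber, rho_fiber, rho_resin):
    """Estimate void content (%) from measured composite density.

    rho_measured : measured composite density (g/cm^3)
    w_fiber      : fiber weight fraction of the composite (0..1)
    rho_fiber    : fiber density (g/cm^3)
    rho_resin    : resin density (g/cm^3)
    """
    w_resin = 1.0 - w_fiber
    # Theoretical (void-free) density from the constituent weight fractions
    rho_theoretical = 1.0 / (w_fiber / rho_fiber + w_resin / rho_resin)
    return 100.0 * (rho_theoretical - rho_measured) / rho_theoretical

# Hypothetical values for a glass-fiber / dental-resin composite
print(round(void_content_percent(1.80, 0.65, 2.54, 1.20), 2))  # ~1.44 % voids
```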
6 |
Ohmic heating as an alternative food processing technology / Anderson, Destinee R, January 1900
Master of Science / Food Science Institute / Fadi M. Aramouni / Ohmic heating in the food industry uses electrical energy to heat foods as a method of preservation; it can be applied to microbial inactivation and to several other processes such as pasteurization, extraction, dehydration, blanching, or thawing. Few studies have been conducted on the usefulness of this environmentally friendly processing technique. Because published research on ohmic heating for the food industry remains limited, a few of the available studies are discussed here in detail.
This report also presents self-conducted research using ohmic heating to determine its effect on Lactobacillus acidophilus inactivation compared with conventional heating. Lactobacillus acidophilus was inoculated into MRS broth and incubated for 24 hours. The culture was then inoculated into sterile buffer at a dilution rate of 1:100. Samples of the diluted culture were subjected to either low-voltage ohmic heating (18 V) or conventional heating (300°C) over a hotplate stirrer. Temperature was monitored in the test and control samples until an endpoint of 90°C was reached. Samples were taken at regular intervals, plated onto MRS agar, and incubated for 72 hours at 35°C to compare plate counts expressed as colony-forming units per milliliter (CFU/mL). Temperature was uniform throughout the ohmically heated sample, which reached the endpoint more quickly than the conventionally heated sample; the latter also had cold spots. The final plate count was lower for the ohmically heated sample than for the conventionally heated sample. Ohmic heating was therefore more effective than conventional heating at inactivating Lactobacillus acidophilus, most likely due to the more rapid and uniform heating of the sample and possible electroporation of the cells.
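For readers unfamiliar with how such plate counts are compared, the decimal (log10) reduction is the usual measure of inactivation. The sketch below uses hypothetical CFU/mL values, not the counts actually reported in the study.

```python
import math

def log_reduction(initial_cfu_per_ml, final_cfu_per_ml):
    """Decimal (log10) reduction between an initial and a final plate count."""
    return math.log10(initial_cfu_per_ml / final_cfu_per_ml)

# Hypothetical counts: both treatments start at 1e6 CFU/mL
print(round(log_reduction(1e6, 2e3), 2))  # e.g., ohmic heating  -> ~2.70 log reduction
print(round(log_reduction(1e6, 5e4), 2))  # e.g., conventional   -> ~1.30 log reduction
```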
7 |
Latin Vocabulary Acquisition: An Experiment Using Information-processing Techniques of Chunking and Imagery / Carter, Terri Gay Manns, 08 1900
The purpose of the study was to determine the effect of vocabulary instruction through chunking and imagery on Latin I students' performance in, and attitudes toward, high school Latin.
8 |
Efficient Query Processing Over Web-Scale RDF Data / Amgad M. Madkour (5930015), 17 January 2019
The Semantic Web, or the Web of Data, promotes common data formats for representing structured data and their links over the web. RDF is the de facto standard for semantic data: it provides a flexible semi-structured model for describing concepts and relationships. RDF datasets consist of entries (i.e., triples) whose number ranges from thousands to billions. The astronomical growth of RDF data calls for scalable RDF management and query processing strategies. This dissertation addresses efficient query processing over web-scale RDF data. The first contribution is WORQ, an online, workload-driven RDF query processing technique. Based on the query workload, reduced sets of intermediate results (or reductions, for short) that are common to specific join pattern(s) are computed in an online fashion. We also introduce an efficient solution for RDF queries with unbound properties. The second contribution is SPARTI, a scalable technique for computing the reductions offline. SPARTI utilizes a partitioning schema, termed SemVP, that enables efficient management of the reductions, and it uses a budgeting mechanism with a cost model to determine the worthiness of partitioning. The third contribution is KC, an efficient RDF data management system for the cloud. KC uses generalized filtering, encompassing both exact and approximate set membership structures, to filter out irrelevant data; it defines a set of common operations and introduces an efficient method for managing and constructing filters. The final contribution is semantic filtering, where data can be reduced based on the spatial, temporal, or ontological aspects of a query. We present a set of encoding techniques and demonstrate how to use semantic filters to reduce irrelevant data in a distributed setting.
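To make the idea of approximate set-membership filtering concrete, the sketch below prunes candidate RDF triples with a small hand-rolled Bloom filter before a join on the subject position. This is only an illustration of the general technique, not the actual filtering machinery of WORQ, SPARTI, or KC; the triple data, filter sizing, and hashing scheme are assumptions.

```python
import hashlib

class BloomFilter:
    """A tiny Bloom filter: approximate set membership with no false negatives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Hypothetical triples: (subject, predicate, object)
follows = [("alice", "follows", "bob"), ("carol", "follows", "dave")]
likes = [("alice", "likes", "tea"), ("erin", "likes", "jazz")]

# Build a filter over the join key (subject) of one side ...
bf = BloomFilter()
for s, _, _ in follows:
    bf.add(s)

# ... and drop triples from the other side that certainly cannot join.
candidates = [t for t in likes if bf.might_contain(t[0])]
print(candidates)  # the ("erin", ...) triple is very likely filtered out before the join
```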
9 |
Application of Array Processing Techniques to CDMA Multiuser Detection Systems / Chang, Ann-Chen, 11 May 2000
Several issues in adaptive array beamforming and code-division multiple access (CDMA) multiuser detection are investigated in this dissertation. Recently, based on the decomposition of the observation vector space into two orthogonal eigenspaces, the eigenspace-based (ESB) and generalized eigenspace-based (GEIB) array signal processing techniques have been widely discussed due to their superior performance over conventional techniques. First, this dissertation presents robust and efficient algorithms for further enhancing the performance of ESB and GEIB techniques under imperfect, practical operating environments. We also propose a method of corrected steering angles to combat the supersensitivity of the eigenanalysis interference canceler (EIC) to source number overestimation and steering angle errors.
We analyze the performance of several ESB multiuser detectors, including the conventional direct-form detector and the generalized sidelobe canceler (GSC), for a synchronous CDMA system with and without desired-user code mismatch. We also present a way of resolving spreading code mismatch in blind multiuser detection with a subspace-based technique. Furthermore, the GSC structure can be utilized to handle the case in which the desired user's SNR is below 0 dB.
Next, the adaptive H∞ filtering algorithm has demonstrated reduced sensitivity to modeling error (due to the finite tap number) and suitability for arbitrary ambient noise compared with the recursive least squares (RLS) algorithm. However, the computational burden of the H∞ algorithm is enormous. To reduce the computational complexity, a subweight partition scheme is applied to an H∞-based algorithm; the computational burden of the conventional adaptive H∞ algorithm can thus be mitigated with only slight performance degradation. The H∞-based algorithm is then further extended to the adaptive beamformer and the blind multiuser detector.
Finally, we present new diversity techniques for multiuser detection over multipath fading channels in asynchronous CDMA systems. The enhanced diversity capacity for multipath channels is achieved by appropriately choosing the constraint matrix and the response vector in the multiple constraint minimum variance (MCMV) algorithm. Moreover, the proposed techniques offer effective multiple access interference (MAI) suppression. We also incorporate signal subspace-based projection into the MCMV detector so that the noise enhancement in the MCMV criterion can be reduced.
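As a concrete illustration of the eigenspace-based idea that runs through this dissertation, the sketch below builds MVDR-style beamforming weights from a sample covariance matrix, projecting the presumed steering vector onto the estimated signal-plus-interference subspace so that small steering errors are partially corrected. This is a generic, textbook-style ESB beamformer rather than the specific algorithms proposed here; the array geometry, loading factor, and source count are assumptions.

```python
import numpy as np

def esb_mvdr_weights(snapshots, steering_vec, num_sources, loading=1e-3):
    """Eigenspace-based MVDR beamformer weights.

    snapshots    : (M, N) array of N array snapshots from M sensors
    steering_vec : (M,) presumed steering vector of the desired user
    num_sources  : assumed number of sources (signal + interference)
    """
    M, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N            # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)                    # eigenvalues in ascending order
    Es = eigvecs[:, -num_sources:]                    # signal-plus-interference subspace
    a_esb = Es @ (Es.conj().T @ steering_vec)         # project steering vector onto it
    R_inv = np.linalg.inv(R + loading * np.eye(M))    # diagonal loading for robustness
    w = R_inv @ a_esb
    return w / (a_esb.conj().T @ R_inv @ a_esb)       # MVDR normalization

# Hypothetical uniform linear array with a desired signal at broadside plus noise
M, N = 8, 200
a = np.ones(M, dtype=complex)                         # steering vector toward broadside
rng = np.random.default_rng(0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(a, s) + 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
w = esb_mvdr_weights(X, a, num_sources=1)
print(np.abs(w.conj() @ a))                           # close to 1: distortionless toward the desired signal
```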
10 |
New techniques for vibration condition monitoring: Volterra kernel and Kolmogorov-Smirnov / Andrade, Francisco Arruda Raposo, January 1999
This research presents a complete review of the signal processing techniques used today in vibration-based industrial condition monitoring and diagnostics. It also introduces two techniques that are novel to this field, namely the Kolmogorov-Smirnov test and the Volterra series, neither of which has previously been applied to vibration-based condition monitoring. The first technique, the Kolmogorov-Smirnov test, relies on a statistical comparison of the cumulative probability distribution functions (CDFs) of two time series. It must be emphasised that this is not a moment technique: the whole CDF is used in the comparison process. The second tool suggested in this research is the Volterra series, a non-linear signal processing technique that can be used to model a time series; the parameters of this model are then used for condition monitoring applications. Finally, this work presents a comprehensive comparative study between these new methods and the existing techniques, based on results from numerical and experimental applications of each technique discussed here. The concluding remarks include suggestions on how the novel techniques proposed here can be improved.
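As a minimal sketch of how the Kolmogorov-Smirnov idea applies to condition monitoring, the example below compares a baseline vibration record against a current record with SciPy's two-sample KS test; a small p-value (large KS statistic) flags a change in the signal's amplitude distribution. The simulated signals and the decision threshold are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Simulated vibration records: a healthy baseline and a "current" signal
# whose amplitude distribution has changed (e.g., a developing fault).
t = np.linspace(0, 1, 5000)
baseline = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
current = np.sin(2 * np.pi * 50 * t) + 0.4 * rng.standard_normal(t.size)

# Two-sample KS test: compares the full empirical CDFs, not just moments.
statistic, p_value = ks_2samp(baseline, current)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3g}")

if p_value < 0.01:  # assumed alert threshold
    print("Distribution change detected: flag for inspection.")
```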