  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Laser Guided Automated Floor Profiling - FloorWalker

Whaley, Chad 16 June 2017 (has links)
No description available.
102

Lumbar Skin Profile Prediction from Anterior and Lateral Torso Measurements

Monat, Heath Barnhart 16 August 2012 (has links)
No description available.
103

Modelling forces in milling screw rotors

Wang, Xi 13 September 2022 (has links)
The deflections of screw rotors under machining forces cause mismatch between the male and female rotors and, consequently, accelerated wear and suboptimal efficiency in their performance. Optimizing the machining process to minimize the generated forces and accounting for the resulting mismatch in the design of the rotor profile requires accurately computing the machining forces in computer simulations. Virtual machining systems combine graphics-based computation of the Cutter-Workpiece Engagement (CWE) with the physics-based models of machining mechanics to simulate the forces during complex machining processes. However, because of the high computational load of graphical simulations, virtual machining is not suitable for the repetitive force simulations that are required for optimizing the design and manufacturing of rotors. In this work, we present a new method that simulates screw milling forces based on the process kinematics instead of graphical simulations. Utilizing mathematical equations that describe the process kinematics, the theoretical rotor profile is determined for feasible combinations of cutting tool profile, setup angle, and centre distance. Subsequently, to find the milling forces, the cutting edge is discretized into multiple small edge segments and a mechanistic cutting force model is used to determine the local cutting forces at each segment. After geometric and kinematic transformations of these local forces, the screw milling forces are obtained for each roughing and finishing pass. Instead of graphics-based methods, the engagement conditions between the cutter and workpiece are determined by the ensemble of 2D rotor and tool profiles; as a result, the computational efficiency is increased substantially. The semi-analytical nature of the presented method allows for computing the forces with arbitrary resolution within a reasonable time. 
The accuracy and efficiency of the presented method is verified by comparing the simulated forces against a dexel-based virtual machining system. / Graduate
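The segment-wise force summation described above can be illustrated with a minimal sketch of a mechanistic cutting force model: the edge is discretized, local tangential and radial forces are computed from the instantaneous chip thickness at each engaged segment, and the results are rotated into a fixed frame and summed. The cutting and edge coefficients, feed per tooth, segment count, and engagement angles below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Illustrative mechanistic-model constants (assumed, not from the thesis)
Ktc, Krc = 700.0, 280.0   # cutting coefficients, N/mm^2
Kte, Kre = 20.0, 15.0     # edge coefficients, N/mm
fz = 0.1                  # feed per tooth, mm
db = 0.5                  # axial width of each discretized edge segment, mm
n_seg = 40                # number of edge segments

def milling_forces(phi_entry, phi_exit, phi_tooth):
    """Sum local tangential/radial segment forces over the engagement arc,
    rotated into the fixed XY frame."""
    fx = fy = 0.0
    for k in range(n_seg):
        # small immersion-angle lag per segment (illustrative helix effect)
        phi = (phi_tooth + k * 0.001) % (2 * np.pi)
        if phi_entry <= phi <= phi_exit:
            h = fz * np.sin(phi)            # instantaneous chip thickness
            ft = Ktc * h * db + Kte * db    # tangential force on segment
            fr = Krc * h * db + Kre * db    # radial force on segment
            # geometric transformation of local forces to the fixed frame
            fx += -ft * np.cos(phi) - fr * np.sin(phi)
            fy += ft * np.sin(phi) - fr * np.cos(phi)
    return fx, fy

fx, fy = milling_forces(np.pi / 2, np.pi, 3 * np.pi / 4)
```

In the thesis the engagement arc comes from the 2D rotor and tool profiles rather than from fixed entry/exit angles as assumed here.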
104

The Development of Measurement and Characterization Techniques of Road Profiles

Kern, Joshua Victor 26 July 2007 (has links)
The principal excitation to a vehicle's chassis system is the road profile. Simulating a vehicle traversing long roads is impractical and a method to produce short roads with given characteristics must be developed. By understanding the characteristics of the road, a reduced set of models can be created from which appropriate representations of the terrain can be synthesized. Understanding the characteristics of the terrain requires the ability to accurately measure the terrain topology. It is only by increasing the fidelity and resolution of terrain topology data that application of these data can be advanced. The first part of this work presents the development of a high fidelity 3-D laser terrain measurement system. The system is developed for both on-highway and off-road measurement. It is capable of measuring terrain in three dimensions, whereas current systems measure separate 2-D profiles in each wheel path of the vehicle. The equipment setup and signal processing techniques are discussed, as well as future improvements and applications of this enabling technology. The second part of this work develops a method of characterizing non-stationary road profile data using ARIMA (Autoregressive Integrated Moving Average) modeling techniques. The first step is to consider the road to be a realization of an underlying stochastic process. The model identification techniques are demonstrated. Statistical techniques are developed and used to examine the distribution of the residual process and the results are demonstrated. The use of the ARIMA model parameters and residual distributions in classifying road profiles is also discussed. By classifying various road profiles according to given model parameters, any synthetic road realized from a given class of model parameters will represent all roads in that set, resulting in a timely and efficient simulation of a vehicle traversing any given type of road. / Master of Science
105

Road Profiler Performance Evaluation and Accuracy Criteria Analysis

Wang, Hao 06 October 2006 (has links)
Road smoothness is one of the most important road functional characteristics because it affects ride quality, operation cost, and vehicle dynamic load. There are many types of devices that measure the road profile, which is often used to compute different smoothness indices. The development of performance-based specifications and pavement warranties that use ride quality as a performance measure has increased the need for accurate measurement of pavement smoothness. For this reason, researchers have compared and evaluated the performance of available profilers and several profiler accuracy criteria have been proposed. However, there is not a definite answer on the ability of available profilers to accurately measure the actual road profile as well as the various smoothness indices. A recent profiler round-up compared the performance of 68 profilers on five test sections at Virginia Smart Road. The equipment evaluated included high-speed, light-weight, and walking-speed profilers, in addition to the reference device (rod and level). The test sites included two sites with traditional hot-mix asphalt (HMA) surfaces, one with a coarse-textured HMA surface, one on a continuously reinforced concrete pavement (CRCP), and one on a jointed plain concrete pavement (JCP). This investigation used a sample of the data collected during the experiment to compare the profiles and International Roughness Index (IRI) measured by each type of equipment with each other and with the reference. These comparisons allowed determination of the accuracy and repeatability capabilities of the existing equipment, evaluation of the appropriateness of various profiler accuracy criteria, and recommendations of usage criteria for different applications. The main conclusion of this investigation is that there are profilers available that can produce the level of accuracy (repeatability and bias) required for construction quality control and assurance. 
However, the analysis also showed that the accuracy varies significantly even among devices of the same type. None of the inertial profilers evaluated met the current IRI bias standard requirements on all five test sites. On average, the profilers evaluated produced more accurate results on the conventional smooth pavement than on the coarse-textured pavements. The cross-correlation method appears to have some advantages over the conventional point-to-point statistics method for comparing the measured profiles. On the sites investigated, good cross-correlation between the measured and reference profiles assured acceptable IRI accuracy. Finally, analysis based on Power Spectral Density and the gain method showed that the profiler gain errors are nonuniformly distributed and that errors at different wavelengths have variable effects on the IRI bias. / Master of Science
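The cross-correlation comparison mentioned above can be sketched as scoring a measured profile against the reference by the peak of the normalized cross-correlation, where a score near 1 indicates close agreement. The profiles below are synthetic stand-ins, not data from the experiment.

```python
import numpy as np

# Synthetic reference and measured profiles (illustrative): the measured
# trace is the reference with a small longitudinal offset plus noise
rng = np.random.default_rng(1)
reference = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)
measured = np.roll(reference, 5) + 0.05 * rng.normal(size=2000)

def peak_normalized_xcorr(a, b):
    """Peak of the normalized cross-correlation of two profiles.
    Searching over all lags tolerates start-point offsets that would
    penalize a point-to-point comparison."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = np.correlate(a, b, mode="full") / len(a)
    return c.max()

score = peak_normalized_xcorr(reference, measured)
```

This lag search is one reason cross-correlation can outperform point-to-point statistics: a small longitudinal shift barely affects the score but inflates pointwise error.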
106

Lead and Copper Contamination in Potable Water: Impacts of Redox Gradients, Water Age, Water Main Pipe Materials and Temperature

Masters, Sheldon 06 May 2015 (has links)
Potable water can become contaminated with lead and copper due to the corrosion of pipes, faucets, and fixtures. The US Environmental Protection Agency Lead and Copper Rule (LCR) is intended to target sampling at high-risk sites to help protect public health by minimizing lead and copper levels in drinking water. The LCR is currently under revision with a goal of better crafting sampling protocols to protect public health. This study examined an array of factors that determine the location and timing of "high-risk" in the context of sampling site selection and consumer health risks. This was done using field studies and well-controlled laboratory experiments. A pilot-scale simulated distribution system (SDS) was used to examine the complex relationship between disinfectant type (free chlorine and chloramine), water age (0-10.2 days), and pipe main material (PVC, cement, and iron). Redox gradients developed in the distribution system as controlled by water age and pipe material, which affected the microbiology and chemistry of the water delivered to consumer homes. Free chlorine disinfectant was the most stable in the presence of PVC while chloramine was most stable in the presence of cement. At shorter water ages where disinfectant residuals were present, chlorine tended to cause as much as 4 times more iron corrosion when compared to chloramine. However, the worst localized attack on iron materials occurred at high water age in the system with chloramine. It was hypothesized that this was due to denitrification-a phenomenon relatively unexplored in drinking water distribution systems and documented in this study. Cumulative chemical and biological changes, such as those documented in the study described above, can create "high-risk" hotspots for elevated lead and copper, with associated concerns for consumer exposure and regulatory monitoring. 
In both laboratory and field studies, trends in lead and copper release were site-specific and ultimately determined by the plumbing material, microbiology and chemistry. In many cases, elevated levels of lead and copper did not co-occur suggesting that, in a revised LCR, these contaminants will have to be sampled separately in order to identify worst case conditions. Temperature was also examined as a potentially important factor in lead and copper corrosion. Several studies have attributed higher incidence of childhood lead poisoning during the summer to increased soil and dust exposure; however, drinking water may also be a significant contributing factor. In large-scale pipe rigs, total and dissolved lead release was 3-5 times higher during the summer compared to the winter. However, in bench scale studies, higher temperature could increase, decrease, or have no effect on lead release dependent on material and water chemistry. Similarly, in a distribution system served by a centralized treatment plant, lead release from pure lead service lines increased with temperature in some homes but had no correlation in other homes. It is possible that changes throughout the distribution system such as disinfectant residual, iron, or other factors can create scales on pipes at individual homes, which determines the temperature dependency of lead release. Consumer exposure to lead can also be adversely influenced by the presence of particulate iron. In the case of Providence, RI, a well-intentioned decrease in the finished water pH from 10.3 to 9.7, resulted in an epidemic of red water complaints due to the corrosion of iron mains and a concomitant increase in water lead levels. Complementary bench scale and field studies demonstrated that higher iron in water is sometimes linked to higher lead in water, due to sorption of lead onto the iron particulates. 
Finally, one of the most significant emerging challenges associated with evaluating corrosion control and consumer exposure, is the variability in lead and copper during sampling due to semi-random detachment of lead particles to water, which can pose an acute health concern. Well-controlled test rigs were used to characterize the variability in lead and copper release and compared to consumer sampling during the LCR. The variability due to semi-random particulate detachment, is equal to the typical variability observed in LCR sampling, suggesting that this inherent variability is much more important than other common sources including customer error, customer failure to follow sampling instructions or long stagnation times. While instructing consumers to collect samples are low flow rates reduces variability, it will fail to detect elevated lead from many hazardous taps. Moreover, collecting a single sample to characterize health risks from a given tap, are not adequately protective to consumers in homes with lead plumbing, in an era when corrosion control has reduced the presence of soluble lead in water. Future EPA monitoring and public education should be changed to address this concern. / Ph. D.
107

Investigation of Aerodynamic Profile Losses for a Low-Reaction Steam Turbine Blade

Guilliams, Hunter Benjamin 27 January 2014 (has links)
This thesis presents the results of a linear cascade experiment performed on the mean-line and near-tip sections of a low-reaction steam turbine blade and compares them with CFD results for the former. The purpose of these tests was the refinement of a proprietary empirical profile loss model. A review of the literature shows that experimental data on this type of blade are not openly available, and the continued efficacy of empirical loss models for low-reaction steam turbine blades requires data from experiments such as the present study. Tests covered a range of incidence from -6 to +4 and exit Mach numbers from 0.4 to 0.6. Extensive static pressure taps on the blades allowed detailed examination of blade loading, which proved dissimilar to steam turbine blade loading in the open literature. A traversing five-hole probe measured conditions downstream of the blade row to enable calculation of a total pressure loss coefficient. The area-averaged total pressure loss coefficient for both profiles was near 0.08 and was not sensitive to incidence or exit Mach number over the ranges tested. / Master of Science
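The area-averaged loss coefficient reported above can be sketched using one common cascade definition, Y = (P01 - P02) / (P01 - P2), averaged across a downstream traverse. The inlet conditions and the Gaussian wake deficit below are illustrative assumptions, not the thesis's probe data, and the exact definition used in the thesis may differ.

```python
import numpy as np

# Illustrative cascade conditions (assumed values)
P01 = 1.25 * 101325.0                 # inlet total pressure, Pa
P2 = 101325.0                         # exit static pressure, Pa

# Synthetic five-hole-probe traverse across one blade pitch: a baseline
# total-pressure deficit plus a Gaussian wake behind the trailing edge
y = np.linspace(0.0, 1.0, 50)         # traverse position, fraction of pitch
wake = 2500.0 * np.exp(-((y - 0.5) / 0.05) ** 2)
P02 = P01 - 500.0 - wake              # downstream total pressure, Pa

Y_local = (P01 - P02) / (P01 - P2)    # local total pressure loss coefficient
Y_area = Y_local.mean()               # area average (uniform spacing)
```

With uniformly spaced traverse points the area average reduces to the mean of the local coefficients, as used here.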
108

Evaluation of the Cycle Profile Effect on the Degradation of Commercial Lithium Ion Batteries

Radhakrishnan, Karthik Narayanan 14 September 2017 (has links)
Major vehicle manufacturers are committed to expanding their electrified vehicle fleets in upcoming years to meet fuel efficiency goals. Understanding the effect of charge/discharge cycle profiles on battery durability is important to the implementation of batteries in electrified vehicles and to the design of appropriate battery testing protocols. In this work, commercial high-power prismatic lithium ion cells were cycled using a pulse-heavy profile and a simple square-wave profile to investigate the effect of cycle profile on the capacity fade of the battery. The pulse-heavy profile was designed to simulate on-road conditions for a typical hybrid electric vehicle, while the simplified square-wave profile was designed to have the same charge throughput as the pulse-heavy profile, but with lower peak currents. The batteries were cycled until each battery achieved a combined throughput of 100 kAh. Reference Performance Tests were conducted periodically to monitor the state of the batteries through the course of the testing. The results indicate that, for the batteries tested, the capacity fade for the two profiles was very similar: 11% ± 0.5% compared to beginning of life. The internal resistance of the batteries was also monitored over the course of the testing and found to increase by 21% and 12% compared to beginning of life for the pulse-heavy and square-wave profiles, respectively. Cycling tests on coin cells with similar electrode chemistries, as well as development of a first-principles, physics-based model, were conducted to understand the underlying cause of the observed degradation. The results from the coin cells and the model suggest that the loss of active material in the electrodes due to the charge transfer process is the primary cause of degradation, while the loss of cyclable lithium due to side reactions plays a secondary role.
These results also indicate that, for high-power cells, the capacity degradation associated with the charge-sustaining mode of operation can be studied with relatively simple approximations of complex drive cycles. / Ph. D. / Major vehicle manufacturers are committed to expanding their electrified vehicle fleets in upcoming years to meet fuel efficiency goals. Understanding the effect of charge/discharge cycle profiles on battery durability is important to the implementation of batteries in electrified vehicles and to the design of appropriate battery testing protocols. In this work, commercial lithium ion cells were tested using two profiles with the same energy transfer: a pulse-heavy profile to simulate on-road conditions for a typical hybrid electric vehicle, and a simplified square-wave profile with the same charge flow as the pulse-heavy profile, but with lower currents. Cycling tests on coin cells with similar electrode chemistries, as well as development of a first-principles, physics-based model, were conducted to understand the underlying cause of the degradation. The results suggest that the observed degradation does not depend on the type of profile used. These results also indicate that, for high-power cells, the capacity degradation associated with the charge-sustaining mode of operation can be studied with relatively simple approximations of complex drive cycles.
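The beginning-of-life (BOL) comparison used in the abstract above can be sketched as a percent change of Reference Performance Test values relative to BOL. The capacity and resistance figures below are illustrative numbers chosen to mirror the reported ~11% fade and ~21% resistance growth, not measurements from the work.

```python
# Percent change of a periodically measured quantity relative to its
# beginning-of-life (BOL) value
def percent_change(initial, current):
    return 100.0 * (current - initial) / initial

# Illustrative Reference Performance Test values (assumed)
bol_capacity_ah, current_capacity_ah = 5.00, 4.45        # ~11% fade
bol_resistance_mohm, current_resistance_mohm = 1.00, 1.21  # ~21% growth

capacity_fade = -percent_change(bol_capacity_ah, current_capacity_ah)
resistance_growth = percent_change(bol_resistance_mohm, current_resistance_mohm)
```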
109

Anomaly Detection for Smart Infrastructure: An Unsupervised Approach for Time Series Comparison

Gandra, Harshitha 25 January 2022 (has links)
Time series anomaly detection can be a very useful tool for inspecting and maintaining the health and quality of an infrastructure system. In tackling such a problem, the main concern lies in the imbalanced nature of the dataset. To mitigate this problem, this thesis proposes two unsupervised anomaly detection frameworks. The first leverages the matrix profile, a data structure containing the Euclidean distances between the subsequences of two time series, obtained through a similarity join. The architecture couples a data fusion technique with matrix profile analysis under the constraint of different sampling rates across time series. In this framework, a time series being evaluated for anomalies is quantitatively compared with a benchmark (anomaly-free) time series using a proposed asynchronous time series comparison inspired by the matrix profile approach to anomaly detection. To evaluate the efficacy of this framework, it was tested on a case study comprising a Class I railroad dataset. The data collection system integrated into this railway system collects data through different acquisition channels, each representing a different transducer. The framework was applied to all the channels, and the best-performing channels were identified. The average Recall and Precision achieved in the single-channel evaluation were 93.5% and 55%, respectively, with an error threshold of 0.04 miles (211 feet). A limitation of this framework was a number of false positive predictions. To overcome this problem, a second framework is proposed that extracts signature patterns in a time series, known as motifs, which can be leveraged to identify anomalous patterns.
The second framework is motif-based and operates under the same constraint of varied sampling rates. Here, a feature extraction method and a clustering method were used in training a One-Class Support Vector Machine (OCSVM) coupled with a Kernel Density Estimation (KDE) technique. The average Recall and Precision achieved on the same case study through this framework were 74% and 57%. The second framework does not perform as well as the first; future efforts will focus on improving this classification-based anomaly detection method. / Master of Science / Time series anomaly detection refers to the identification of outliers or deviations present in time series data. This technique can help mitigate unplanned events by facilitating early maintenance. The first method proposed involves comparing an anomaly-free dataset with the time series of interest. The differences between the two time series are noted, and the point with the highest difference is considered an anomaly. The performance of this model was evaluated on a railroad dataset; the cumulative average Recall (how complete the detections are) and average Precision (how accurate the detections are) were 93.5% and 55%, respectively, with an acceptable error range of 0.04 miles (211 feet). The second method proposed involves extracting all segments in the anomaly-free dataset and grouping them according to their similarity. An OCSVM, a machine learning algorithm that learns to classify data as either anomalous or normal, is trained on these individual groups. It is then coupled with KDE, which builds a distribution over the predictions and identifies anomalies where the predictions are most concentrated. The performance of this model was evaluated on the same railroad dataset; the cumulative average Recall and Precision were 74% and 57%, respectively, with an acceptable error range of 0.04 miles (211 feet).
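The matrix-profile-style comparison underlying the first framework can be sketched as an AB-join: for each subsequence of the series under test, compute the z-normalized Euclidean distance to its nearest neighbor in the anomaly-free benchmark; a large value flags a pattern with no close match. This is a naive O(n·m) illustration on synthetic data with equal sampling rates, not the thesis's asynchronous comparison; libraries such as STUMPY compute matrix profiles far more efficiently.

```python
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def matrix_profile_join(test, benchmark, m):
    """AB-join: distance from each test subsequence of length m to its
    nearest z-normalized neighbor among benchmark subsequences."""
    bench_subs = np.array([znorm(benchmark[i:i + m])
                           for i in range(len(benchmark) - m + 1)])
    mp = np.empty(len(test) - m + 1)
    for i in range(len(test) - m + 1):
        q = znorm(test[i:i + m])
        mp[i] = np.sqrt(((bench_subs - q) ** 2).sum(axis=1)).min()
    return mp

# Synthetic benchmark (anomaly-free) and test series with an injected anomaly
rng = np.random.default_rng(2)
benchmark = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.normal(size=400)
test = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.normal(size=400)
test[200:220] += 3.0                      # injected anomalous segment

mp = matrix_profile_join(test, benchmark, m=25)
anomaly_at = int(np.argmax(mp))           # index of the most anomalous subsequence
```

Peaks in the join profile mark subsequences with no close benchmark match, which is how the comparison surfaces anomalies without labeled training data.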
110

Quality engineering applications on single and multiple nonlinear profiles

Chou, Shih-Hsiung January 1900 (has links)
Doctor of Philosophy / Department of Industrial and Manufacturing Systems Engineering / Shing I. Chang / Profile analysis has drawn attention in quality engineering applications due to the growing use of sensors and information technologies. Unlike conventional quality characteristics, a profile is a functional relationship between a response and one or more explanatory variables. A single profile may contain hundreds or thousands of data points, and conventional charting tools cannot handle such high-dimensional datasets. In this dissertation, six unsolved issues are investigated. First, Chang and Yadama's method (2010) shows competitive results in nonlinear profile monitoring, but the effectiveness of removing noise from a given nonlinear profile using B-spline fitting with and without wavelet transformation is unclear. Second, many studies have treated profile analysis considering only profile shape changes or only variance changes; those methods cannot identify whether a process is out of control due to a mean shift or a variance shift. Third, methods for detecting profile shape changes assume that a gold-standard profile exists, which makes the existing methods hard to implement directly. Fourth, multiple nonlinear profiles may arise in real-world applications, and conventional single-profile analysis methods may produce high false alarm rates in such scenarios. Fifth, multiple nonlinear profiles may also arise in designed experiments; in a conventional experimental design the response is usually a single value or a vector, so conventional approaches cannot handle a response in the form of multiple nonlinear profiles. Finally, profile fault diagnosis is an important step after an out-of-control signal is detected, but current approaches lead to a large number of combinations when the number of sections is large.
The organization of this dissertation is as follows. Chapter 1 introduces profile analysis, current solutions, and challenges; Chapters 2 to 4 explore the unsolved challenges in single-profile analysis; Chapters 5 and 6 investigate multiple-profile issues in profile monitoring and experimental design; Chapter 7 proposes a novel high-dimensional diagnosis control chart to diagnose the cause of an out-of-control signal with visualization aids. Finally, Chapter 8 summarizes the achievements and contributions of this research.
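The B-spline denoising step raised in the first challenge can be sketched as follows: fit a smoothing B-spline to a noisy nonlinear profile and compare the reconstruction error before and after. The synthetic profile, noise level, and smoothing factor `s` are illustrative assumptions, not the dissertation's settings, and no wavelet transformation is applied here.

```python
import numpy as np
from scipy.interpolate import splev, splrep

# Synthetic noisy nonlinear profile (illustrative)
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
true_profile = np.sin(2 * np.pi * x) * np.exp(-x)   # underlying nonlinear profile
noisy = true_profile + 0.05 * rng.normal(size=200)

# Smoothing B-spline: s bounds the sum of squared residuals, trading
# fidelity against noise rejection (s chosen to match the noise level here)
tck = splrep(x, noisy, s=0.5)
smoothed = splev(x, tck)

# Reconstruction error with and without smoothing
rmse_noisy = np.sqrt(np.mean((noisy - true_profile) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed - true_profile) ** 2))
```

Reducing each high-dimensional profile to a modest set of spline coefficients is also what makes downstream charting of profiles tractable.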
