291

Estimation of synchronization parameters

van de Beek, Jaap January 1996 (has links)
This thesis deals with the estimation of synchronization parameters in Orthogonal Frequency Division Multiplexing (OFDM) communication systems and in active ultrasonic measuring systems. Estimation methods for the timing and frequency offset and for the attenuation taps of the frequency-selective channel are presented and investigated.

In OFDM communication systems, the timing offset of the transmitted data frame is one important parameter to estimate. This offset provides the receiver with a means of synchronizing its sampling clock to that of the transmitter. A second important parameter is the offset in the carrier frequency used by the receiver to demodulate the received signal.

For OFDM systems using a cyclic prefix, the joint Maximum Likelihood (ML) estimation of the timing and carrier frequency offset is introduced. The redundancy introduced by the prefix is exploited optimally. This novel method is derived for a non-dispersive channel. Its performance, however, is also evaluated for a frequency-selective Rayleigh-fading radio channel. Time dispersion causes an irreducible error floor in this estimator's performance. This error floor is the limiting factor for the applicability of the timing estimator. Depending on the requirements, it may be used in either an acquisition or a tracking mode. For the frequency estimator the error floor is low enough to allow for stable frequency tracking.

A low-complexity variant of the timing offset estimator is presented, allowing a simple implementation. This is the ML estimator given a 2-bit representation of the received signal as the sufficient statistics. Its performance is evaluated for a frequency-selective Rayleigh-fading radio channel and for a twisted-pair copper channel. Simulations show this estimator to have a similar error floor to the full-resolution ML estimator.

The problem of estimating the propagation time of a signal is also of interest in active pulse-echo systems, such as are used in, e.g., radar, medical imaging, and geophysics. The Minimum Mean Squared Error (MMSE) estimator of arrival time is derived and investigated for an active airborne ultrasound measurement system. Besides performing better than the conventional Maximum a Posteriori (MAP) estimator, this method can be used to develop different estimators in situations where the system Signal-to-Noise Ratio (SNR) is unknown.

Coherent multi-amplitude OFDM receivers generally need to compensate for a frequency-selective channel in order to detect transmitted data symbols reliably. For this purpose, a channel equalizer needs to be fed estimates of the subchannel attenuations. The linear MMSE estimator of these attenuations is presented. Of all linear estimators, this estimator optimally makes use of the frequency correlation between the subchannel attenuations. Low-complexity modified estimators are proposed and investigated. The proposed modifications cause an irreducible error floor for this estimator's performance, but simulations show that for SNR values up to 20 dB, the improvement of a modified estimator compared to the Least Squares (LS) estimator is at least 3 dB.
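As a rough illustration of the cyclic-prefix-based joint ML estimator described above, the sketch below computes the usual correlation metric over candidate timing offsets and reads the fractional frequency offset from the phase of the correlation at its peak. It is a minimal NumPy sketch under the non-dispersive-channel assumption; the function name and variable names are illustrative and not taken from the thesis.

```python
import numpy as np

def ml_sync_estimate(r, n_fft, n_cp, snr):
    """Sketch of a cyclic-prefix-based ML timing/frequency offset estimator.

    r     : received complex baseband samples (at least n_fft + n_cp + 1 long)
    n_fft : useful OFDM symbol length in samples (number of subcarriers)
    n_cp  : cyclic prefix length in samples
    snr   : assumed linear signal-to-noise ratio, used in the weighting term
    """
    rho = snr / (snr + 1.0)                       # SNR-dependent weighting
    n_pos = len(r) - n_fft - n_cp                 # candidate timing offsets
    metric = np.empty(n_pos)
    gamma = np.empty(n_pos, dtype=complex)
    for m in range(n_pos):
        seg1 = r[m:m + n_cp]                      # cyclic prefix candidate
        seg2 = r[m + n_fft:m + n_fft + n_cp]      # its copy one symbol later
        gamma[m] = np.sum(seg1 * np.conj(seg2))
        energy = 0.5 * np.sum(np.abs(seg1) ** 2 + np.abs(seg2) ** 2)
        metric[m] = np.abs(gamma[m]) - rho * energy
    theta_hat = int(np.argmax(metric))                      # timing offset estimate
    eps_hat = -np.angle(gamma[theta_hat]) / (2.0 * np.pi)   # frequency offset, in subcarrier spacings
    return theta_hat, eps_hat
```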
292

Connecting Process Variables to Product Properties in Papermaking: A Multivariate Approach

Håkansson, Mikael January 2014 (has links)
In papermaking there are numerous parameters that control the final outcome of the paper. This thesis examines the connections between paper properties and influential factors in the manufacturing process, by looking at the entire fiber line from the incoming wood chips to the finished paper. The analysis is done by studying how important process variables connect to the properties of the paper, and also by investigating the possibilities of modeling how these affect the final product.

There are numerous factors that affect the final outcome of a manufactured paper. Five of these (wood type, cooking time, refining energy input, amount of starch, and roll pressure in the paper machine) have been investigated in a series of laboratory experiments. A factorial experiment was designed to investigate these factors' impact on paper properties. The study focused on two aspects. The first was to investigate interaction effects among the process variables and the significance of these interaction effects as well as of the main effects. In the second part it was possible to utilize these interaction effects to deduce which combinations of factor levels could result in equal output levels of certain paper parameters.

Being able to predict the paper quality as accurately as possible is another important aspect of papermaking. In the second study of the thesis, the relation between paper properties and process variations is charted. Prediction models were created with different multivariate methods, based on the data gathered in the designed experiments. The underlying correlation structures in the data could be used in conjunction with the design factors to derive models that connect process parameters to paper properties. With the help of these models it is possible to predict what paper property levels to expect when altering process variables.
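To make the factorial analysis concrete, the sketch below fits main effects and all two-factor interactions of a coded two-level design by ordinary least squares. It is a minimal illustration, not the thesis's actual analysis; the +/-1 factor coding and the choice of response (e.g. a paper strength property) are assumptions.

```python
import numpy as np
from itertools import combinations

def fit_two_level_factorial(X, y):
    """Fit main effects and two-factor interactions for a coded (+/-1) design.

    X : (n_runs, n_factors) array of coded factor levels (-1/+1)
    y : (n_runs,) measured paper property, e.g. tensile strength
    Returns a dict mapping effect name -> estimated coefficient.
    """
    n, k = X.shape
    cols, names = [np.ones(n)], ["intercept"]
    for j in range(k):                              # main effects
        cols.append(X[:, j])
        names.append(f"x{j + 1}")
    for i, j in combinations(range(k), 2):          # two-factor interactions
        cols.append(X[:, i] * X[:, j])
        names.append(f"x{i + 1}*x{j + 1}")
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares coefficients
    return dict(zip(names, beta))
```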
293

Quantitative image analysis : a focus on automated characterization of structures in optical microscopy of iron ore pellets

Nellros, Frida January 2013 (has links)
Sintering occurs in many types of material, such as iron, ceramics and snow, typically during thermal treatment, and affects the material properties, particularly the strength, by bonding particles into a coherent structure. In order to improve the mechanical strength of magnetite iron ore pellets it is important to be able to characterize and quantitatively measure the degree of sintering and the features that impact the sintering process.

The aim of this licentiate thesis has been to create tools for sintering characterization through automated image analysis of optical microscopy images. Such tools are of interest since they provide a comparable quantification of pellet properties that can be related to other parameters, giving a historical record that is digital, objective and not dependent on the eyes of a trained expert. In this work, two different studies of the microstructure in indurated (heat-hardened) pellets have been performed. The methods presented in these studies have been shown to be suitable for characterizing sintering properties in iron ore pellets, and possibly also in other materials that experience sintering phenomena.

The first study presents research to automate image capture and analysis of entire cross-sections of indurated iron ore pellets in order to characterize the proportions of magnetite, hematite, and other components. Spatial distributions of the mentioned phases are produced for each pellet, graphing proportions in relation to the distance to the pellet surface. The results are not directly comparable to a chemical analysis, but comparisons with manual segmentation of images validate the method. Different types of pellets have been tested and the system has produced robust results for varying cases.

The second study focuses on the analysis of the particle joins and structure. The joins between particles have been identified with a method based mainly on morphological image processing, and features have been calculated based on the geometric properties and curvature of these joins. The features have been analyzed and determined to hold discriminative power by displaying properties consistent with sintering theory and with results from traditional physical dilation measurements on the heated samples.

A note of caution for quantitative studies of iron ore pellets has been identified in this thesis. Especially for green pellets, the microscopy sample preparation prohibits statistical inference studies due to particle rip-out during polishing. Researchers performing qualitative microscopy studies are generally aware of the phenomenon of rip-outs, but the extent to which even seemingly good samples are affected was not unveiled until extensive quantitative analysis of features such as green pellet porosity was attempted during the course of this work.
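The kind of spatial phase distribution described in the first study can be illustrated as follows: given an already-segmented cross-section image, the sketch bins pixels by their distance to the pellet surface and reports the fraction of each phase per bin. It assumes the phase segmentation (magnetite, hematite, other) is produced upstream; the bin count and names are illustrative, not the thesis's implementation.

```python
import numpy as np
from scipy import ndimage

def radial_phase_profile(label_img, pellet_mask, n_bins=20):
    """Fraction of each phase versus distance to the pellet surface.

    label_img   : integer image, one label per phase (0 = background)
    pellet_mask : boolean mask of the pellet cross-section
    """
    dist = ndimage.distance_transform_edt(pellet_mask)   # distance to the surface
    edges = np.linspace(0.0, dist.max(), n_bins + 1)
    phases = [p for p in np.unique(label_img) if p != 0]
    profile = {p: np.zeros(n_bins) for p in phases}
    for b in range(n_bins):
        ring = pellet_mask & (dist >= edges[b]) & (dist < edges[b + 1])
        total = ring.sum()
        for p in phases:
            profile[p][b] = (label_img[ring] == p).sum() / max(total, 1)
    return edges, profile
```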
294

Adaptive tensor-based morphological filtering and analysis of 3D profile data

Landström, Anders January 2012 (has links)
Image analysis methods for processing 3D profile data have been investigated and developed. These methods include image reconstruction by prioritized incremental normalized convolution, morphology-based crack detection for steel slabs, and adaptive morphology based on the local structure tensor. The methods have been applied to a number of industrial applications.

An issue with 3D profile data captured by laser triangulation is occlusion, which occurs when the line of sight between the projected laser light and the camera sensor is obstructed. To overcome this problem, interpolation of missing surface data in rock piles has been investigated, and a novel interpolation method has been developed that fills in missing pixel values iteratively from the edges of the reliable data using normalized convolution.

3D profile data of the steel surface has been used to detect longitudinal cracks in cast steel slabs. Segmentation of the data is done using mathematical morphology, and the resulting connected regions are assigned a crack probability estimate based on a statistical logistic regression model. More specifically, the morphological filtering locates trenches in the data, excludes scale regions from further analysis, and finally links crack segments together in order to obtain a segmented region which receives a crack probability based on its depth and length.

Also suggested is a novel method for adaptive mathematical morphology intended to improve crack segment linking, i.e. for bridging gaps in the crack signature in order to increase the length of potential crack segments. Standard morphology operations rely on a predefined structuring element which is repeatedly used for each pixel in the image. The outline of a crack, however, can range from a straight line to a zig-zag pattern. A more adaptive method for linking regions with a large enough estimated crack depth would therefore be beneficial. More advanced morphological approaches, such as morphological amoebas and path openings, adapt better to curvature in the image. For our purpose, however, we investigate how the local structure tensor can be used to adaptively assign to each pixel an elliptical structuring element based on the local orientation within the image. The information from the local structure tensor directly defines the shape of the elliptical structuring element, and the resulting morphological filtering successfully enhances crack signatures in the data.
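A minimal sketch of the structure-tensor step is given below: it computes, per pixel, a local orientation and an anisotropy measure from smoothed gradient outer products, which could then parameterize an elliptical structuring element (major axis along the local line direction, eccentricity from the anisotropy). The Sobel/Gaussian choices and the anisotropy definition are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(img, sigma=2.0):
    """Per-pixel orientation and anisotropy from the local structure tensor."""
    img = img.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # Smoothed outer products of the gradient form the structure tensor components.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form eigen-analysis of the 2x2 tensor at every pixel.
    # Orientation of the dominant gradient direction; the local line/crack
    # direction is perpendicular to it.
    orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    tmp = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1 = 0.5 * (jxx + jyy + tmp)
    lam2 = 0.5 * (jxx + jyy - tmp)
    anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)   # 0 isotropic, 1 line-like
    return orientation, anisotropy
```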
295

Blind Enhancement of Harmonically Related Signals by Maximizing Skewness

Ovacikli, Kubilay January 2014 (has links)
Rolling element bearings are used in rotating machinery in various branches of industry. Their health status must be monitored continuously in order to establish proper operational conditions in a production process. Numerous approaches, studied under the heading of "Condition Based Maintenance", have been developed within mechanical engineering and signal processing to detect and classify possible faults in rolling bearings.

Periodic impulsive signals can emerge from defective bearings within rotating machinery. As the signal is distorted by an unknown transfer function, noise and severe interference, the challenge becomes to reduce these effects as much as possible in order to extract valuable and reliable information about the rolling bearings' health status. Without any observation of the source signal, a scale-invariant higher-order moment, the skewness, can be used as a tool to characterize statistical properties and enhance the desired signal. It is the impulsiveness, and thus the asymmetry, of the signal that is promoted. To assess the performance of skewness, a signal model is built that consists of harmonically related sinusoids representing an impulsive source. Based on this model, the surface characteristics of skewness are investigated. In relation to harmonic content, the ability of skewness to discover such harmonic relations is studied. It has been observed that the optimization process converges to a setting where all harmonics are preserved, while any component that does not possess such a harmonic relation is suppressed. In the case of multiple mutually inharmonic source signals with harmonic support, it is shown that skewness maximization results in a setting where only the harmonic set with the highest skewness remains. Finally, experimental examples are provided to support the theoretical findings.
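As an illustration of the skewness-maximization idea, the sketch below searches for FIR filter coefficients that maximize the skewness of the filtered signal, thereby promoting impulsive (asymmetric) components. The filter parameterization, tap count, and derivative-free optimizer are assumptions chosen for illustration, not the thesis's method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

def skewness(x):
    """Scale-invariant sample skewness."""
    x = x - np.mean(x)
    return np.mean(x ** 3) / (np.mean(x ** 2) ** 1.5 + 1e-12)

def enhance_by_skewness(x, n_taps=16):
    """Blindly find FIR coefficients that maximize output skewness."""
    def neg_skew(w):
        return -skewness(lfilter(w, [1.0], x))
    w0 = np.zeros(n_taps)
    w0[0] = 1.0                          # start from a pass-through filter
    # Skewness is invariant to the filter gain, so only the direction of w matters.
    res = minimize(neg_skew, w0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
    return lfilter(res.x, [1.0], x), res.x
```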
296

Non-destructive assessment of additively manufactured objects using ultrasound

Zia, Shafaq January 2024 (has links)
Additive manufacturing (AM) enables the manufacturing of complex and tailored products for an unlimited number of applications, such as aerospace and healthcare. The technology has received a lot of attention in lightweight applications, where it is associated with new design possibilities but also with reduced material costs, material waste, and energy consumption. Ultrasound has the potential to become the material characterization method of choice for AM, since it is quick, safe, and scales well with component size. Ultrasound data, coupled with supervised learning techniques, serves as a powerful tool for the non-destructive evaluation of different materials, such as metals.

This research focuses on understanding the additive manufacturing process, the resulting material properties, and the variation captured using ultrasound due to the manufacturing parameters. The case study included in this thesis is the examination of 316L steel cubes manufactured using laser powder bed fusion. The study includes the estimation and prediction of manufacturing parameters using supervised learning, the assessment of the influence of the manufacturing parameters on the variability within samples, and the quantitative quality assessment of the samples based on the material properties that result from changes in the manufacturing parameters. The research is vital for analyzing the homogeneity of microstructures, advancing online process control, and ensuring the quality of additively manufactured products.

This study contributes valuable insights into the relationship between manufacturing parameters, material properties, and ultrasound signatures. Significant variation is captured using ultrasound both within and between samples, which shows that the backscattered signal is sensitive to the microstructure resulting from the manufacturing parameters. Since the material properties change with the manufacturing parameters, the quality of a sample can be described by the relation between the material properties and the backscattered ultrasound signals.

The thesis is divided into two parts. The first part contains an introduction to the study, a summary of the contributions, and future work. The second part contains a collection of papers describing the research in detail.
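One way to picture the supervised-learning step is sketched below: a few spectral features are extracted from each backscattered A-scan and a regressor is cross-validated against a manufacturing parameter. The feature set, the random-forest model, and the target name (laser_power) are illustrative assumptions, not the pipeline used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def spectral_features(ascan, fs):
    """Simple features from one backscattered A-scan: RMS energy,
    spectral centroid and spectral bandwidth (illustrative choices)."""
    spec = np.abs(np.fft.rfft(ascan)) ** 2
    freqs = np.fft.rfftfreq(len(ascan), 1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)
    bandwidth = np.sqrt(np.sum((freqs - centroid) ** 2 * spec) / np.sum(spec))
    return [np.sqrt(np.mean(ascan ** 2)), centroid, bandwidth]

def evaluate(ascans, laser_power, fs):
    """Cross-validated prediction of an assumed manufacturing parameter
    (here called laser_power) from ultrasound features."""
    X = np.array([spectral_features(a, fs) for a in ascans])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    return cross_val_score(model, X, laser_power, cv=5, scoring="r2")
```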
297

Automatisk volymmätning av virkestravar på lastbil / Automatic measurement of volume for on-truck timber stacks

Lindberg, Pontus January 2016 (has links)
Automatic stack measurement is a measuring system that estimates the wood volume on timber trucks. The system consists of six sensor systems. Each sensor is first calibrated individually and then jointly, to give a merged world coordinate system. Each sensor generates a depth image and a reflectance image, where the values in the depth image represent the distance from the camera. The commissioning company has developed an algorithm that, from the measured data (the images), estimates the wood volume to an accuracy that fulfills the requirements set by the forest industry for automatic measurement of stacks on timber trucks. This report investigates whether better measurement results can be achieved, for example with other methods or with combinations of them.

About 125 datasets of stacks with ground truth are available. The ground truth consists of manual sample measurements in which every individual log was measured separately. Initially, an active choice was made not to study the commissioning company's algorithm, in order not to be biased by how they arrived at their results. Mainly the front- and back-side images of a stack are used to find the logs. The found logs are then interpolated towards the middle of the stack, or the logs from the two sides are paired together. Sometimes there are problems with the images: usually at least one of the sides is occluded by the truck cab, the crane, or another stack. An estimate must then be made from the visible data in order to fill in the occluded areas.

At the beginning of the thesis work, two methods (MSER and a point-plane method) were used to investigate whether good results could be achieved by simply measuring the data and using it as an initial guess of the volume. However, it was discovered that valuable details in the datasets were being missed that are needed to determine the wood volume more accurately; an example of such data is the distribution of diameters of the found log ends. These methods also tended to heavily over- or underestimate when the stacks contained a certain amount of brushwood and/or poorly limbed logs. A geometric method was therefore constructed, and it was on this method that most of the time was spent.

In the figures below, a table and a graph show the under-bark (UB) results of all three methods, together with the interval limits for fulfilling the requirements set by the forest industry.
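As a very rough illustration of a geometric volume estimate (not the algorithm developed in the report), the sketch below pairs detected log-end diameters from the front and back images and sums truncated-cone volumes over the stack length. The naive size-based pairing and the cylinder fallback for unpaired logs are assumptions made purely for illustration.

```python
import numpy as np

def stack_volume_from_log_ends(front_diams_m, back_diams_m, stack_length_m):
    """Estimate solid wood volume (m^3) from lists of detected log-end
    diameters (metres) on the front and back sides of a timber stack."""
    volume = 0.0
    n_pairs = min(len(front_diams_m), len(back_diams_m))
    # Pair log ends by size order (a crude assumption) and treat each pair
    # as a truncated cone spanning the stack length.
    for d1, d2 in zip(sorted(front_diams_m)[:n_pairs], sorted(back_diams_m)[:n_pairs]):
        r1, r2 = d1 / 2.0, d2 / 2.0
        volume += np.pi * stack_length_m * (r1 ** 2 + r1 * r2 + r2 ** 2) / 3.0
    # Log ends visible on only one side fall back on a cylinder.
    for d in list(front_diams_m[n_pairs:]) + list(back_diams_m[n_pairs:]):
        volume += np.pi * (d / 2.0) ** 2 * stack_length_m
    return volume
```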
298

Platform-Agnostic Resilient Decentralized Multi-Sensor Fusion for Pose Estimation

Mukherjee, Moumita January 2024 (has links)
This thesis presents an innovative decentralised sensor fusion framework with significant potential to improve navigation accuracy in autonomous vehicles. Its applicability is especially noteworthy in demanding scenarios, such as adverse weather conditions and intricate urban environments. In general, sensor fusion is a crucial method for integrating signals from various sources, extracting and combining information from multiple inputs into a unified signal or data set. Frequently, the sources of information are sensors or devices designed for the perception and measurement of dynamic environmental changes. The collected data from diverse sensors undergoes processing through specialised algorithms, commonly referred to as "sensor fusion" or "data fusion" algorithms. This thesis describes the significance of sensor fusion in processing data from multiple sources. It highlights the classification of fusion algorithms, demonstrating the versatility and applicability of sensor fusion across a range of redundant sensors. Moreover, various strategies for sensor fusion, including fault detection and isolation and methods for addressing non-Gaussian noise through smoothing-filter techniques, are collectively introduced as part of a comprehensive navigation framework.

The contributions of this thesis are summarized as follows. First, it introduces a decentralised two-layered fusion architecture for pose estimation, emphasising fault resilience. In a decentralised fashion, it utilises distributed nodes equipped with extended Kalman filters in the initial tier and optimal information filters in the subsequent tier to amalgamate pose data from multiple sensors. This design, named the Fault-Resilient Optimal Information Fusion (FR-OIF) architecture in this thesis, provides reliable pose estimation even in cases of sensor malfunction.

Secondly, this work proposes an auto-encoder-based fault detection framework for multi-sensor distributed pose estimation. In this framework, auto-encoders are applied to detect anomalies in the raw signal measurements, while a fault-resilient optimal information fusion (FR-OIF) approach is incorporated with the auto-encoder-based detection to improve estimation accuracy. The effectiveness of these methods is demonstrated through experimental results involving a micro aerial vehicle and is compared to a classical detection approach based on the Extended Kalman filter.

Furthermore, the thesis introduces an integrated multi-sensor fusion architecture enhanced by a centralised auto-encoder and an EKF framework. This approach effectively removes sensor data noise and anomalies, ensuring reliable data reconstruction even when faced with time-dependent anomalies. The assessment of the framework's performance using actual sensor data collected from the onboard sensors of a micro aerial vehicle demonstrates its superiority compared to a centralised Extended Kalman filter without auto-encoders.

The next part of the thesis discusses the increasing need for resilient autonomy in complex space missions. It emphasises the challenges posed by interactions with non-cooperative objects and extreme environments, calling for advanced autonomy solutions. Accordingly, this work introduces a decentralised multi-sensor fusion architecture for resilient satellite navigation around asteroids. It addresses challenges such as dynamic illumination, sensor drift, and momentary sensor failure. The approach includes fault detection and isolation methods, ensuring autonomous operation in adverse conditions.

Finally, the last part of the thesis focuses on accurate localisation and deviation identification in multi-sensor fusion with millimeter-wave radars. It presents a flexible, decentralised smoothing-filter framework that effectively handles unwanted measurements and enhances ego-velocity estimation accuracy.

Overall, this thesis advances the field of decentralised sensor fusion, encompassing anomaly avoidance mechanisms, fault detection and isolation frameworks, and robust navigation algorithms applicable across a range of domains, from robotics to space exploration. The initial section of the thesis presents the background, the motivation for the research, the existing challenges, and the contributions made; the subsequent section comprises the complete articles linked to the outlined contributions, together with a bibliography.
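The second-tier fusion can be pictured with the standard information-form (inverse-covariance) combination of local estimates, sketched below. This is a simplified stand-in for the FR-OIF layer under the assumption of independent node estimates; the fault detection and isolation logic that would exclude faulty nodes before fusion is omitted.

```python
import numpy as np

def fuse_pose_estimates(states, covariances):
    """Information-form fusion: each node i delivers a local pose estimate x_i
    with covariance P_i, and the fused estimate weights them by their
    information matrices (inverse covariances)."""
    info = np.zeros(covariances[0].shape, dtype=float)
    info_vec = np.zeros(states[0].shape, dtype=float)
    for x, P in zip(states, covariances):
        P_inv = np.linalg.inv(P)        # information matrix of node i
        info += P_inv
        info_vec += P_inv @ x
    P_fused = np.linalg.inv(info)       # fused covariance
    x_fused = P_fused @ info_vec        # fused state
    return x_fused, P_fused
```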
299

Signal processing for sensor arrays

Soykan, Orhan January 1990 (has links)
No description available.
300

Predictive Maintenance of Servo Guns

Ifver, Joakim January 2022 (has links)
This bachelor thesis investigates the possibility of implementing a system to detect defects using extractable data from ABB's servo guns. By looking at the deflection data, i.e., the electrode position at every stroke, multiple methods are applied to investigate the behavior of the data. The distribution of the data was investigated first and showed a good fit to a normal distribution, which led the thesis to one of the traditional methods in Statistical Process Control (SPC), the control chart. After investigating the frequency domain of the data, looking for patterns, and applying unsupervised clustering, several experiments were performed. The first experiment examined how the deflection of the gun arms changed with the force applied between the electrode tips. Second, artificial defects were created by switching electrode tips of different lengths and changing the set force between the electrode tips. In addition, drifting data (changes in deflection over time) was generated by applying small changes in force between the electrodes within a certain interval. The idea behind the experiments was to evaluate the performance of a control chart in which the lower and upper control limits were based on a regression curve fitted to the relation between force and deflection for a specific servo gun. The experiments were performed on one of ABB's servo guns, a GWT X9. The method proved to work for the tests that were performed, but as the results suggest, it needs more research and development to be suitable for implementation in ABB's software.
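A minimal sketch of the control-chart idea, assuming a linear force-to-deflection relation and conventional three-sigma limits (both of which are assumptions for illustration, not necessarily the thesis's choices):

```python
import numpy as np

def control_chart_limits(force, deflection, new_force, k=3.0):
    """Fit a linear force-to-deflection regression on healthy data and derive
    lower/center/upper control limits for new strokes at a given set force."""
    coeffs = np.polyfit(force, deflection, deg=1)      # assumed linear relation
    residuals = deflection - np.polyval(coeffs, force)
    sigma = np.std(residuals, ddof=2)                  # residual spread (2 fitted params)
    center = np.polyval(coeffs, new_force)
    return center - k * sigma, center, center + k * sigma

def out_of_control(deflections, lcl, ucl):
    """Flag strokes whose deflection falls outside the control limits."""
    deflections = np.asarray(deflections)
    return (deflections < lcl) | (deflections > ucl)
```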
