  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
651

Optimisation of a Diagnostic Test for a Truck Engine / Optimering av ett diagnostest för en lastbilsmotor

Haraldsson, Petter January 2002 (has links)
<p>Diagnostic systems are becoming increasingly important in the field of vehicle systems, largely because new rules and regulations force manufacturers of heavy-duty trucks to monitor the emission process in their engines throughout the lifetime of the truck. To do this, a diagnostic system has to be implemented that continuously monitors the process and checks that the emission thresholds set by the government are not exceeded. There is also a demand that this system be reliable, i.e. produce neither false alarms nor missed detections. One way of building such a system is to use a model-based diagnosis system, in which thresholds have to be set that decide whether the system is faulty or not. There are several difficulties involved in this. Firstly, there is no way of knowing whether the logged signals are corrupt or not, because faults in these signals are precisely what should be detected. Secondly, because of the strict reliability demand, the thresholds have to be set where there is a very low probability of observing values during normal driving. In this thesis, a methodology is proposed for setting thresholds in a diagnosis system for an experimental test engine at Scania. Measurement data were logged over 20 hours of effective driving on two individuals of the same engine. It is shown that the result is improved significantly by using this method, and that the thresholds can be set so that smaller faults in the system can be reliably detected.</p>
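The threshold-setting idea described in this abstract can be sketched as follows: given residuals logged during fault-free driving, place the threshold at an extreme quantile of their distribution with a safety margin, so that values above it are very unlikely in normal operation. This is a minimal illustration, not the thesis's actual method; the quantile level, the margin factor, and the synthetic Gaussian residuals are all assumptions.

```python
import numpy as np

def set_threshold(residuals, false_alarm_prob=1e-4, margin=1.2):
    """Pick a residual threshold from logged fault-free driving data.

    The threshold sits at the (1 - false_alarm_prob) quantile of the
    fault-free residual magnitudes, inflated by a safety margin, so that
    exceedances during normal driving are rare.
    """
    residuals = np.abs(np.asarray(residuals, dtype=float))
    return margin * np.quantile(residuals, 1.0 - false_alarm_prob)

rng = np.random.default_rng(0)
fault_free = rng.normal(0.0, 1.0, 200_000)   # stand-in for logged residuals
thr = set_threshold(fault_free)
alarms = np.mean(np.abs(fault_free) > thr)   # empirical false-alarm rate
```

The trade-off is visible directly: a larger margin lowers the false-alarm rate but makes small faults harder to detect, which is exactly the tension the thesis addresses.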
652

On Galerkin Approximations for the Zakai Equation with Diffusive and Point Process Observations

Xu, Ling 16 February 2011 (has links) (PDF)
We are interested in a nonlinear filtering problem motivated by an information-based approach for modelling the dynamic evolution of a portfolio of credit risky securities. We solve this problem by the change-of-measure method and show the existence of the density of the unnormalized conditional distribution, which is a solution to the Zakai equation. The Zakai equation is a linear SPDE which, in general, cannot be solved analytically. We apply the Galerkin method to solve it numerically and show the convergence of the Galerkin approximation in mean square. Lastly, we design an adaptive Galerkin filter with a basis of Hermite polynomials, and we present numerical examples to illustrate the effectiveness of the proposed method. The work is closely related to the paper Frey and Schmidt (2010).
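The Galerkin projection onto a Hermite basis, as used by the adaptive filter above, can be sketched in a minimal form: project a function onto the first few probabilists' Hermite polynomials under the Gaussian weight, with inner products evaluated by Gauss-Hermite quadrature. This illustrates only the projection step, not the full Zakai-equation solver; the test function and basis size are assumptions.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as H

def galerkin_coeffs(f, n_basis, n_quad=60):
    """Galerkin projection of f onto the first n_basis probabilists'
    Hermite polynomials He_n, orthogonal w.r.t. the weight exp(-x^2/2);
    inner products are computed by Gauss-HermiteE quadrature."""
    x, w = H.hermegauss(n_quad)          # nodes/weights absorb the weight
    fx = f(x)
    coeffs = []
    for n in range(n_basis):
        basis_n = H.hermeval(x, [0.0] * n + [1.0])   # He_n at the nodes
        norm_sq = factorial(n) * sqrt(2 * pi)        # ||He_n||^2
        coeffs.append(np.sum(w * fx * basis_n) / norm_sq)
    return coeffs

# x^3 = He_3(x) + 3*He_1(x), so four basis functions reproduce it exactly
c = galerkin_coeffs(lambda x: x**3, 4)
```

Because the projection is exact for polynomials inside the basis span, the reconstructed coefficients recover x^3 exactly here; for a density solving the Zakai equation one would instead evolve these coefficients in time.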
653

Segmentation en imagerie échocardiographique par ensembles de niveaux paramétriques évoluant à partir des statistiques du signal radiofréquence / Segmentation in echocardiographic imaging using a parametric level set model driven by the statistics of the radiofrequency signal

Bernard, Olivier; Friboulet, Denis. January 2007 (has links)
Doctoral thesis: Images & Systèmes: Villeurbanne, INSA: 2006. / Thesis written in English. Title taken from the title screen. Bibliography p. 177-189.
654

User-based filter utilization for multicarrier schemes

Ankarali, Zekeriyya Esat 01 January 2013 (has links)
Multicarrier modulation is a transmission technique that is well suited to high data rates in wireless communication. Information symbols are partitioned and sent in parallel over multiple narrowband subchannels. Pulse shaping filters are critically important in multicarrier modulation, as they determine the characteristics of the signal in the time and frequency domains. In this thesis, we propose a new pulse shaping approach for multicarrier schemes to increase spectral efficiency in multi-user scenarios. Conventionally, the time-frequency lattice and the prototype filter are designed for the worst case of the time-varying multipath channel. However, this approach fails to exploit multi-user diversity and leads to excessive spacing between successive symbols in time and frequency. Unlike the prevalent methods, we investigate user-based filter utilization, considering the wireless channel of each user individually to prevent over-design and improve spectral efficiency. This approach is also implemented in a denser time-frequency lattice design. Symbols are allowed to overlap (depending on the time-frequency dispersion of their individual channels) as long as the signal-to-interference ratios (SIRs) observed by all users are kept above a certain level. Employing user-specific filters to enhance the SIR of the user exposed to the most interference provides more overlapping flexibility. Therefore, further improvement in spectral efficiency is achieved in our wireless communication system design.
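The overlap-versus-SIR trade-off described above can be illustrated with a toy one-dimensional model: compute the SIR a symbol sees when a neighbouring symbol's identical pulse is placed closer or farther away in time. The Gaussian prototype pulse, sample counts, and spacings are illustrative assumptions, not the scheme proposed in the thesis.

```python
import numpy as np

def sir_db(pulse, spacing):
    """SIR (dB) seen by a symbol when the neighbouring symbol's identical
    pulse arrives `spacing` samples later - a 1-D, time-only toy model of
    lattice spacing vs. inter-symbol interference."""
    g = np.asarray(pulse, float)
    delayed = np.zeros_like(g)
    delayed[spacing:] = g[:-spacing]          # neighbour's pulse, delayed
    signal = np.dot(g, g) ** 2
    interference = np.dot(g, delayed) ** 2
    return 10.0 * np.log10(signal / interference)

t = np.linspace(-4, 4, 161)
g = np.exp(-t**2)                             # Gaussian prototype pulse
sir_wide = sir_db(g, 40)                      # generous symbol spacing
sir_tight = sir_db(g, 20)                     # denser lattice
```

Tightening the spacing lowers the SIR; a user-specific design would shrink the spacing per user only as far as that user's channel dispersion allows the SIR to stay above the target level.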
655

Adaptive and convergent methods for large eddy simulation of turbulent combustion

Heye, Colin Russell 16 March 2015 (has links)
In the recent past, LES methodology has emerged as a viable tool for modeling turbulent combustion. LES computes the large-scale mixing process accurately, thereby providing a better starting point for small-scale models that describe the combustion process. Significant effort has been made over past decades to improve the accuracy and applicability of the LES approach to a wide range of flows, though current conventions often lack consistency with the problems at hand. To this end, the two main objectives of this dissertation are to develop a dynamic transport equation-based combustion model for large-eddy simulation (LES) of turbulent spray combustion and to investigate grid-independent LES modeling for scalar mixing. Long-standing combustion modeling approaches have been shown to be successful for a wide range of gas-phase flames; however, the assumptions required to derive these formulations are invalidated in the presence of liquid fuels and non-negligible evaporation rates. In the first part of this work, a novel approach is developed to account for these evaporation effects and the resulting multi-regime combustion process. First, the mathematical formulation is derived, and the numerical implementation in a low-Mach-number computational solver is verified against one-dimensional and lab-scale, both non-reacting and reacting, spray-laden flows. In order to clarify the modeling requirements in LES for spray combustion applications, results from a suite of fully-resolved direct numerical simulations (DNS) of a spray-laden planar jet flame are filtered at a range of length scales. LES results are then validated against two sets of experimental jet flames, one having a pilot and allowing for reduced chemistry modeling, and the second requiring the use of detailed chemistry with in situ tabulation to reduce the computational cost of the direct integration of a chemical mechanism.
The conventional LES governing equations are derived from a low-pass filtering of the Navier-Stokes equations. In practice, the filter used to derive the LES governing equations is not formally defined; instead, it is assumed that the discretization of the LES equations will implicitly act as a low-pass filter. The second part of this study investigates an alternative derivation of the LES governing equations that requires a formal definition of the filtering operator, known as explicitly filtered LES. It has been shown that decoupling the filtering operation from the underlying grid allows for the isolation of subfilter-scale modeling errors from numerical discretization errors. Specific to combustion modeling are the aggregate errors associated with modeling sub-filter distributions of scalars that are transported by numerically impacted turbulent fields. Quantities of interest to commonly used combustion models, including sub-filter scalar variance and filtered scalar dissipation rate, are investigated for both homogeneous and shear-driven turbulent mixing.
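The idea of decoupling the filter from the grid can be sketched with an explicit one-dimensional top-hat filter whose width is chosen independently of the resolution: the filter damps a high-wavenumber mode while leaving a well-resolved mode nearly untouched. The box kernel, the periodic test signal, and the mode numbers are illustrative assumptions, not the dissertation's actual filtering setup.

```python
import numpy as np

def explicit_box_filter(u, width):
    """Explicit periodic top-hat filter of odd `width` points: the filter
    scale is set explicitly, so refining the grid no longer changes the
    effective filter (unlike implicit, discretization-induced filtering)."""
    r = width // 2
    return np.mean([np.roll(u, s) for s in range(-r, r + 1)], axis=0)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(20 * x)    # resolved mode + small-scale mode
ubar = explicit_box_filter(u, 9)

a = np.abs(np.fft.rfft(u))              # spectrum before filtering
b = np.abs(np.fft.rfft(ubar))           # spectrum after filtering
```

Comparing the two spectra shows the low-pass character explicitly: mode 1 passes almost unattenuated while mode 20 loses most of its amplitude, which is the subfilter content that combustion models must then represent.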
656

Analysis of acoustic emission in cohesionless soil

Mathiyaparanam, Jeyisanker 01 June 2006 (has links)
Acoustic emission (AE) is a widely used nondestructive technique for the identification of structural damage. The AE technique relies on transient energy waves generated by materials during their failure. As for soils, the basic causes of acoustic emission are the mechanisms responsible for the shearing of soils. Mobilization of shear strength within a soil itself, and the interaction of the soil with adjacent natural or construction materials, are directly related to the level of acoustic emission in soils. It is envisioned that acoustic emission signals in deforming soils can be used as an early warning sign in real-time landslide-monitoring systems. This thesis study uses a laboratory experimental setup to record the acoustic emission signals emitted during the shearing of cohesionless soils. Several tests were performed with different rates of shearing and with parallel (horizontal) and perpendicular (vertical) placement of the AE mote-sensor with respect to the shear plane. Since the original raw signals recorded contain large amounts of noise, it is necessary to de-noise them. The current study uses wavelet analysis and the FFT to de-noise the original signals. The filtered signals obtained using wavelet analysis and the FFT are compared to determine the suitability of the two techniques. The peak AE values and the time taken to observe an initial visible peak under different conditions are reported in this study. It is observed that relatively faster rates of shearing generate more AE signals than slower rates of shearing. In addition, rapid shearing produces initial visible peak AE activities within a shorter period of time than slower rates of shearing.
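A minimal FFT de-noising step of the kind compared above (hard-thresholding the spectrum and inverting) might look as follows. The synthetic 50 Hz "burst" tone, the noise level, and the kept fraction are assumptions for illustration only, not the thesis's recorded AE data.

```python
import numpy as np

def fft_denoise(signal, keep_fraction=0.1):
    """Hard-threshold FFT de-noising: keep only the strongest Fourier
    components and zero out the rest before inverting."""
    spec = np.fft.rfft(signal)
    thresh = np.quantile(np.abs(spec), 1.0 - keep_fraction)
    spec[np.abs(spec) < thresh] = 0.0
    return np.fft.irfft(spec, n=len(signal))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)            # stand-in for an AE tone
noisy = clean + 0.5 * rng.standard_normal(1024)
denoised = fft_denoise(noisy, keep_fraction=0.05)
```

A wavelet de-noiser would replace the global Fourier basis with localized wavelet coefficients, which is why the thesis compares the two: AE bursts are transient, and a time-localized basis can preserve them better.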
657

An ensemble Kalman filter module for automatic history matching

Liang, Baosheng, 1979- 29 August 2008 (has links)
The data assimilation process of adjusting variables in a reservoir simulation model to honor observations of field data is known as history matching and has been extensively studied for a few decades. However, limited success has been achieved due to the high complexity of the problem and the large computational effort required by practical applications. An automatic history matching module based on the ensemble Kalman filter is developed and validated in this dissertation. The ensemble Kalman filter has three steps: initial sampling, forecasting through a reservoir simulator, and assimilation. The initial random sampling is improved by the singular value decomposition, which properly selects ensemble members with less dependence. In this way, the same level of accuracy is achieved with a smaller ensemble size. Four different schemes for the assimilation step are investigated, and the direct inverse and square root approaches are recommended. A modified ensemble Kalman filter algorithm, which expresses a preference among the ensemble members through a non-equal weighting factor, is proposed. This weighted ensemble Kalman filter generates better production matches and recovery forecasts than the conventional ensemble Kalman filter. The proposed method also converges faster in the early period of history matching. Another variant, the singular evolutive interpolated Kalman filter, is also applied. The resampling step in this method appears to improve filter stability and helps the filter deliver rapid convergence in both the model and data domains. This method and the ensemble Kalman filter are effective for history matching and for quantifying forecasting uncertainty. The independence of the ensemble members during the forecasting step allows the ensemble Kalman filter implementation to benefit from high-performance computing during automatic history matching.
Two-level computation is adopted, distributing the ensemble members simultaneously while simulating each member in parallel. Such computation yields a significant speedup. The developed module is integrated with the reservoir simulators UTCHEM, GEM, and ECLIPSE, and has been implemented in the framework of the Integrated Reservoir Simulation Platform (IRSP). Successful applications to two- and three-dimensional cases using black-oil and compositional reservoir models demonstrate the efficiency of the developed automatic history matching module.
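The assimilation step with the recommended direct-inverse gain can be sketched for a linear observation operator as below. This is a generic stochastic EnKF update on a toy scalar state, not the dissertation's reservoir implementation; the ensemble size, observed value, and variances are assumptions.

```python
import numpy as np

def enkf_update(X, obs, H, obs_var, rng):
    """One stochastic EnKF analysis step in 'direct inverse' form.

    X:       (n_members, n_state) prior ensemble
    H:       (n_obs, n_state) linear observation operator
    obs_var: (n_obs,) observation error variances
    """
    Y = X @ H.T                                   # predicted observations
    Xa = X - X.mean(axis=0)                       # state anomalies
    Ya = Y - Y.mean(axis=0)                       # observation anomalies
    m = X.shape[0]
    Pxy = Xa.T @ Ya / (m - 1)                     # cross-covariance
    Pyy = Ya.T @ Ya / (m - 1) + np.diag(obs_var)  # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain (direct inverse)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=Y.shape)
    return X + (perturbed_obs - Y) @ K.T

rng = np.random.default_rng(42)
prior = rng.normal(0.0, 1.0, size=(500, 1))       # scalar state, 500 members
post = enkf_update(prior, np.array([2.0]), np.eye(1), np.array([0.25]), rng)
```

Because each member's forecast (here just the identity) is independent of the others, the forecast step parallelizes trivially across members, which is exactly the two-level parallelism the abstract describes.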
658

Multiple Imputation on Missing Values in Time Series Data

Oh, Sohae January 2015 (has links)
<p>Financial stock market data, for various reasons, frequently contain missing values. One reason is that, because markets close for holidays, daily stock prices are not always observed. This creates gaps in information, making it difficult to predict the following day’s stock prices. In this situation, information during the holiday can be “borrowed” from other countries’ stock markets, since global stock prices tend to show similar movements and are in fact highly correlated. The main goal of this study is to combine stock index data from various markets around the world and develop an algorithm to impute the missing values in an individual stock index using “information-sharing” between different time series. To develop an imputation algorithm that accommodates time-series-specific features, we take a multiple imputation approach using a dynamic linear model for time-series and panel data. The algorithm assumes an ignorable missing-data mechanism, such as missingness due to holidays. The posterior distribution of the parameters, including the missing values, is simulated using Markov chain Monte Carlo (MCMC) methods, and estimates from the sets of draws are then combined using Rubin’s combination rule, rendering the final inference for the data set. Specifically, we use the Gibbs sampler and Forward Filtering and Backward Sampling (FFBS) to simulate the joint posterior distribution and the posterior predictive distribution of the latent variables and other parameters. A simulation study is conducted to check the validity and performance of the algorithm using two error-based measurements: Root Mean Square Error (RMSE) and Normalized Root Mean Square Error (NRMSE). We compared the overall trend of the imputed time series with the complete data set, and inspected the in-sample predictability of the algorithm using the Last Value Carried Forward (LVCF) method as a benchmark. The algorithm is applied to real stock price index data from the US, Japan, Hong Kong, the UK, and Germany.
From both the simulation and the application, we conclude that the imputation algorithm performs well enough to achieve our original goal of predicting the opening price after a holiday, outperforming the benchmark method. We believe this multiple imputation algorithm can be used in many applications that deal with time series with missing values, such as financial, economic, and biomedical data.</p>
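Rubin's combination rule mentioned above is simple enough to state in a few lines: pool the per-imputation point estimates, and combine the within-imputation and between-imputation variances. The example estimates and variances below are made up for illustration.

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine point estimates and within-imputation variances from m
    multiply-imputed data sets using Rubin's rules."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    qbar = estimates.mean()                 # pooled point estimate
    ubar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)               # between-imputation variance
    total_var = ubar + (1.0 + 1.0 / m) * b  # Rubin's total variance
    return qbar, total_var

# five hypothetical imputed-data-set estimates with equal within variances
qbar, total = rubin_combine([1.0, 1.2, 0.9, 1.1, 0.8], [0.04] * 5)
```

The between-imputation term is what distinguishes multiple imputation from single imputation: it propagates the uncertainty about the missing holiday prices into the final inference.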
659

Towards an intelligent fuzzy based multimodal two stage speech enhancement system

Abel, Andrew January 2013 (has links)
This thesis presents a novel two-stage multimodal speech enhancement system, making use of both visual and audio information to filter speech, and explores the extension of this system with fuzzy logic to demonstrate proof of concept for an envisaged autonomous, adaptive, and context-aware multimodal system. The design of the proposed cognitively inspired framework is scalable, meaning that the techniques used in individual parts of the system can be upgraded and that there is scope for the initial framework presented here to be expanded. In the proposed system, the concept of single-modality two-stage filtering is extended to include the visual modality. Noisy speech information received by a microphone array is first pre-processed by visually derived Wiener filtering, employing the novel use of the Gaussian Mixture Regression (GMR) technique and making use of associated visual speech information extracted with a state-of-the-art Semi Adaptive Appearance Models (SAAM) based lip tracking approach. This pre-processed speech is then enhanced further by audio-only beamforming using a state-of-the-art Transfer Function Generalised Sidelobe Canceller (TFGSC) approach. The resulting system is designed to function in challenging noisy speech environments (using speech sentences with different speakers from the GRID corpus and a range of noise recordings). Both objective and subjective test results (employing the widely used Perceptual Evaluation of Speech Quality (PESQ) measure, a composite objective measure, and subjective listening tests) show that this initial system is capable of delivering very encouraging results with regard to filtering speech mixtures in difficult reverberant speech environments. Some limitations of this initial framework are identified, and the extension of this multimodal system is explored, with the development of a fuzzy logic based framework and a proof-of-concept demonstration implemented.
Results show that this proposed autonomous, adaptive, and context-aware multimodal framework is capable of delivering very positive results in difficult noisy speech environments, with cognitively inspired use of audio and visual information depending on environmental conditions. Finally, some concluding remarks are made, along with proposals for future work.
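A single-channel Wiener gain of the general kind underlying the first filtering stage can be sketched per frequency bin as below. This is a textbook spectral-subtraction-style gain, not the visually derived GMR Wiener filter of the thesis; the PSD values and the spectral floor are assumptions.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Per-bin Wiener gain G = (S_noisy - S_noise)+ / S_noisy, with a
    small spectral floor to limit musical-noise artefacts."""
    noisy_psd = np.asarray(noisy_psd, float)
    clean_est = np.maximum(noisy_psd - np.asarray(noise_psd, float), 0.0)
    return np.maximum(clean_est / np.maximum(noisy_psd, 1e-12), floor)

# one speech-dominated bin and one noise-dominated bin
gains = wiener_gain([10.0, 1.0], [1.0, 1.0])
```

In the thesis's visually derived variant, the clean-speech statistics feeding this gain are estimated from lip-tracking features rather than from the audio alone, which is what makes the filter robust when the acoustic channel is heavily corrupted.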
660

A data-driven approach for personalized drama management

Yu, Hong 21 September 2015 (has links)
An interactive narrative is a form of digital entertainment in which players can create or influence a dramatic storyline through actions, typically by assuming the role of a character in a fictional virtual world. Interactive narrative systems usually employ a drama manager (DM), an omniscient background agent that monitors the fictional world and determines what will happen next in the players' story experience. Prevailing approaches to drama management choose successive story plot points based on a set of criteria given by the game designers. In other words, the DM is a surrogate for the game designers. In this dissertation, I create a data-driven personalized drama manager that takes players' preferences into consideration. The personalized drama manager is capable of (1) modeling the players' preferences over successive plot points from the players' feedback; (2) guiding the players towards selected plot points without sacrificing the players' agency; (3) choosing target successive plot points that simultaneously increase the players' story preference ratings and the probability of the players selecting those plot points. To address the first problem, I develop a collaborative filtering algorithm that takes into account the specific sequence (or history) of experienced plot points when modeling players' preferences for future plot points. Unlike traditional collaborative filtering algorithms that make one-shot recommendations of complete story artifacts (e.g., books, movies), the collaborative filtering algorithm I develop is a sequential recommendation algorithm that makes every successive recommendation based on all previous recommendations. To address the second problem, I create a multi-option branching story graph that allows multiple options to point to each plot point.
The personalized DM working in the multi-option branching story graph can influence the players to make choices that coincide with the trajectories selected by the DM, while giving the players full agency to make any selection that leads to any plot point in their own judgement. To address the third problem, the personalized DM models the probability of the players transitioning to each full-length story and selects target stories that achieve the highest expected preference ratings at every branching point in the story space. The personalized DM is implemented in an interactive narrative system built with choose-your-own-adventure stories. Human study results show that the personalized DM can achieve significantly higher preference ratings than non-personalized DMs or DMs with pre-defined player types, while preserving the players' sense of agency.
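The sequence-aware neighbourhood idea behind the collaborative filtering component can be caricatured in a few lines: score a candidate next plot point from the players whose history of experienced plot points is most similar to the current player's. The cosine similarity, the value of k, and the toy ratings are assumptions; the dissertation's actual algorithm is a sequential recommender, not this simple k-nearest-neighbour average.

```python
import numpy as np

def predict_next_rating(history, others_histories, others_next, k=2):
    """Neighbourhood sketch of sequence-aware CF: predict the current
    player's rating for a candidate next plot point by averaging the
    ratings of the k players whose rating history over the plot points
    seen so far is most similar (cosine similarity)."""
    h = np.asarray(history, float)
    sims = [h @ np.asarray(o, float) /
            (np.linalg.norm(h) * np.linalg.norm(o) + 1e-12)
            for o in others_histories]
    top_k = np.argsort(sims)[-k:]             # indices of nearest players
    return float(np.mean(np.asarray(others_next, float)[top_k]))

# ratings (1-5) over the three plot points experienced so far
pred = predict_next_rating([5, 1, 4],
                           [[5, 1, 5], [1, 5, 1], [4, 2, 4]],
                           [4, 1, 5], k=2)
```

Because similarity is computed over the ordered history rather than a one-shot item profile, each successive recommendation can condition on everything the player has already experienced, which is the key departure from conventional one-shot CF.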
