  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Airspeed estimation of aircraft using two different models and nonlinear observers

Roser, Alexander, Thunberg, Anton January 2023 (has links)
When operating an aircraft, inaccurate measurements can have devastating consequences. For example, when measuring airspeed using a pitot tube, icing effects and other faults can result in erroneous measurements. This master's thesis therefore aims to create an alternative method that uses known flight-mechanical equations and sensor fusion to estimate the airspeed during flight. For validation and generation of flight data, a simulation model developed by SAAB AB, called ARES, is used. Two models are used to describe the aircraft behavior. The first, called the dynamic model, uses the forces acting on the aircraft body in the equations of motion. The other, called the kinematic model, instead describes the motion using accelerations of the aircraft body. The measurements used are the angle of attack (AoA), side-slip angle (SSA), GPS velocities, and angular rates from an inertial measurement unit (IMU). The dynamic model assumes that engine thrust and aerodynamic coefficients have already been estimated in order to calculate the resulting forces, whereas the kinematic model uses body-fixed accelerations from the IMU. These models are combined with filters to create estimates of the airspeed. The filters used are the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). Combining the two filters with the two models yields four methods for estimating the airspeed. The results show no major difference in performance between the filters except in computational time, where the EKF is the fastest. Further, the results show similar airspeed estimation performance between the models, although differences can be seen: the kinematic model estimates the wind in greater detail and converges faster than the dynamic model. Both models suffer from an observability problem.
This problem entails that the aircraft needs to be maneuvered to excite the AoA and SSA in order for the estimation methods to evaluate the wind, which is crucial for accurate airspeed estimation. The robustness of the dynamic model to errors in engine thrust and aerodynamic coefficients is also investigated, showing that the model is quite robust against errors in these values.
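The EKF update step shared by these estimation methods can be sketched as follows. This is a minimal 1-D illustration with a hypothetical random-walk [airspeed, wind] state observed through its sum, not the thesis's ARES-based dynamic or kinematic models:

```python
import numpy as np

# Minimal 1-D sketch of the EKF machinery. The state and measurement
# models here are hypothetical stand-ins (a random-walk [airspeed, wind]
# state observed through its sum), not the thesis's models.
def ekf_step(x, P, z, Q, R):
    F = np.eye(2)                # state-transition Jacobian (random walk)
    H = np.array([[1.0, 1.0]])   # measurement Jacobian: z = v_air + v_wind
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# One update: prior airspeed 100 m/s, zero wind; measured ground speed 105 m/s.
x, P = np.array([100.0, 0.0]), np.eye(2) * 25.0
Q, R = np.eye(2) * 0.01, np.array([[1.0]])
x, P = ekf_step(x, P, np.array([105.0]), Q, R)
```

Because airspeed and wind enter this toy measurement only through their sum, a single measurement cannot separate them, which mirrors in miniature the observability problem the thesis addresses by maneuvering the aircraft.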
242

Naive Bayesian Spam Filters for Log File Analysis

Havens, Russel William 13 July 2011 (has links) (PDF)
As computer system usage grows in our world, system administrators need better visibility into the workings of computer systems, especially when those systems have problems or go down. Most system components, from hardware, through OS, to application server and application, write log files of some sort, be it system-standardized logs such as syslog or application-specific logs. These logs very often contain valuable clues to the nature of system problems and outages, but their verbosity can make them difficult to utilize. Statistical data mining methods could help in filtering and classifying log entries, but these tools are often out of the reach of administrators. This research tests the effectiveness of three off-the-shelf Bayesian spam email filters (SpamAssassin, SpamBayes, and Bogofilter) as log entry classifiers. A simple scoring system, the Filter Effectiveness Scale (FES), is proposed and used to compare these filters. The filters are tested in three stages: 1) the filters were tested with the SpamAssassin corpus, with various manipulations made to the messages; 2) the filters were tested for their ability to differentiate two types of log entries taken from actual production systems; and 3) the filters were trained on log entries from actual system outages and then tested on their effectiveness at finding similar outages via the log files. For stage 1, messages were tested with normalized bodies, with normalized headers, and with each sentence from each message body as a separate, standardized message. The impact of each manipulation is presented. For stages 2 and 3, log entries were tested with digits normalized to zeros, with words chained together to various lengths, and with one or all levels of word chains used together. The impacts of these manipulations are presented. In each of these stages, it was found that these widely available Bayesian content filters were effective in differentiating log entries.
Tables of correct-match percentages or score graphs, according to the nature of the tests and the number of entries, are presented, and FES scores are assigned to the filters according to the attributes affecting their effectiveness. This research leads to the suggestion that simple, off-the-shelf Bayesian content filters can be used to assist system administrators and log mining systems in sifting log entries to find entries related to known conditions (for which there are example log entries), and to exclude outages which are not related to specific known entry sets.
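The Bayesian content-filtering idea underlying these tools can be sketched as follows. This is an illustrative toy classifier with Laplace smoothing, not the internals of SpamAssassin, SpamBayes, or Bogofilter:

```python
import math
from collections import Counter

# Toy naive Bayes classifier for log entries: tokens vote via smoothed
# log-probabilities. Illustrative only -- not the internals of the
# off-the-shelf filters evaluated in the thesis.
class NaiveBayesLogFilter:
    def __init__(self):
        self.counts = {"normal": Counter(), "outage": Counter()}
        self.totals = {"normal": 0, "outage": 0}

    def train(self, label, entry):
        for tok in entry.lower().split():
            self.counts[label][tok] += 1
            self.totals[label] += 1

    def classify(self, entry):
        vocab = len(set(self.counts["normal"]) | set(self.counts["outage"]))
        scores = {}
        for label in self.counts:
            score = 0.0
            for tok in entry.lower().split():
                # Laplace smoothing so unseen tokens do not zero out a class
                p = (self.counts[label][tok] + 1) / (self.totals[label] + vocab)
                score += math.log(p)   # log-probabilities avoid underflow
            scores[label] = score
        return max(scores, key=scores.get)

f = NaiveBayesLogFilter()
f.train("normal", "connection accepted from host")
f.train("outage", "kernel panic unable to mount root")
print(f.classify("panic on mount"))   # → outage
```

In practice a real filter adds tokenization tricks (the digit normalization and word chaining studied in the thesis are exactly such manipulations), but the scoring skeleton is the same.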
243

FILTER SAMPLING OF AIRBORNE MICROBIAL AGENTS - EVALUATION OF FILTER MATERIALS FOR PHYSICAL COLLECTION EFFICIENCY, EXTRACTION, AND COMPARISON TO TRADITIONAL BIOAEROSOL SAMPLING

BURTON, NANCY CLARK 08 October 2007 (has links)
No description available.
244

A Model Based Fault Detection and Diagnosis Strategy for Automotive Alternators

D'Aquila, Nicholas January 2018 (has links)
Faulty manufactured alternators lead to commercial and safety concerns when installed in vehicles. Alternators have a major role in the Electrical Power Generation System (EPGS) of vehicles, and a defective alternator will lead to damage to the battery and other important electrical accessories. Therefore, fault detection and diagnosis of alternators can be implemented to quickly and accurately determine the health of an alternator during end-of-line testing, and prevent faulty components from leaving the manufacturer. The focus of this research is to develop a Model Based Fault Detection and Diagnosis (FDD) strategy for detecting alternator faults during end-of-line testing. The proposed solution uses the Extended Kalman Smooth Variable Structure Filter (EK-SVSF) to detect common alternator faults. A solution using the Dual Extended Kalman Filter (DEKF) is also discussed. The alternator faults were programmatically simulated on alternator measurements. The experimental results show that both the EK-SVSF and DEKF strategies were very effective in alternator modeling and in detecting open-diode faults, shorted-diode faults, and stator imbalance faults. / Thesis / Master of Applied Science (MASc)
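The residual idea at the heart of model-based FDD can be sketched as follows. This is a hypothetical illustration (simulated signals, made-up thresholds), not the thesis's EK-SVSF or DEKF implementation:

```python
import numpy as np

# Hypothetical sketch of residual-based fault detection: compare measured
# alternator output voltage against a healthy-model prediction and flag a
# fault when the mean residual exceeds a threshold. All signals and the
# threshold here are illustrative, not from the thesis.
def detect_fault(measured, predicted, threshold):
    residual = np.abs(measured - predicted)
    return residual.mean() > threshold

t = np.linspace(0.0, 1.0, 1000)
predicted = 14.0 + 0.2 * np.sin(2 * np.pi * 360 * t)   # healthy ripple model
rng = np.random.default_rng(1)
healthy = predicted + rng.normal(0.0, 0.02, t.size)    # sensor noise only
open_diode = predicted + 0.8 * np.abs(np.sin(2 * np.pi * 60 * t))  # injected fault

flag_healthy = detect_fault(healthy, predicted, threshold=0.1)   # False
flag_fault = detect_fault(open_diode, predicted, threshold=0.1)  # True
```

A Kalman-type filter such as the EK-SVSF refines this picture by producing the model prediction online and by using the innovation statistics, rather than a fixed threshold on raw residuals, to decide.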
245

Unscented Filter for OFDM Joint Frequency Offset and Channel Estimation

Iltis, Ronald A. 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / OFDM is a preferred physical layer for an increasing number of telemetry and LAN applications. However, joint estimation of the multipath channel and frequency offset in OFDM remains a challenging problem. The Unscented Kalman Filter (UKF) is presented to solve the offset/channel tracking problem. The advantages of the UKF are that it is less susceptible to divergence than the EKF, and does not require computation of a Jacobian matrix. A hybrid analysis/simulation approach is developed to rapidly evaluate UKF performance in terms of symbol-error rate and channel/offset error for the 802.11a OFDM format.
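The Jacobian-free property claimed for the UKF comes from the unscented transform, which can be sketched in its generic textbook form (not the paper's OFDM-specific offset/channel filter):

```python
import numpy as np

# Generic unscented transform: deterministic sigma points are pushed
# through the nonlinearity f, so no Jacobian is needed. Textbook form
# with a single scaling parameter kappa, not the paper's filter.
def unscented_transform(f, mean, cov, kappa=1.0):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])        # propagate sigma points
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, ys))
    return y_mean, y_cov

# Sanity check: for a linear map the transform is exact.
m, C = np.array([1.0, 2.0]), np.eye(2)
ym, yc = unscented_transform(lambda x: 2.0 * x, m, C)
```

In a full UKF this transform replaces the EKF's linearization in both the time and measurement updates, which is the source of the divergence robustness noted in the abstract.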
246

AN APPLICATION OF THE VIDEO MATCHED FILTERS IN PULSE TELEMETERING RECEIVER

Wentai, Feng, Biao, Li 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / It is well known that a pulse telemetering system, whose equipment is simple, is superior to a continuous one in utilizing signal power. In designing a pulse telemetering receiver, however, the frequency-shift problem is often encountered; the shift, often much wider than the signal bandwidth, is very unfavorable for improving receiver working sensitivity. The problem can be solved either by strictly limiting transmitter frequency stability or by adopting an AFC system in the receiver to track the carrier wave; the AFC method improves the receiver's performance, but the equipment is complicated. The emphasis of this paper is on the extent to which receiver working sensitivity is affected, and how to judge this effect, when a VF matched filter is adopted and the RF stage is wideband. The power density spectrum of white noise that has passed through a nonlinear system (the linear detector) is analysed theoretically, and the improved working sensitivity of the receiver with a video matched filter, along with its sensitivity difference relative to the optimal receiver, is derived. Measured working sensitivity data for two kinds of pulse receivers with different RF bandwidths are given, and the theoretical calculations conform well with these data, proving that adopting a video matched filter in a pulse receiver is an effective approach for compensating the drop in receiver working sensitivity caused by the increased RF bandwidth.
247

Optimization on H.264 De-blocking Filter

Waheed, Abdul-Mohammed January 2008 (has links)
H.264/AVC is the state-of-the-art video coding standard, which promises to achieve the same video quality at about half the bit rate of previous standards (H.263, MPEG-2). This tremendous achievement in compression and perceptual quality is due to the inclusion of various innovative tools. These tools are highly complex and data intensive and, as a result, pose a very heavy computational burden on processors. The de-blocking filter is one of them; it is the most time-consuming part of the H.264/AVC reference decoder. In this thesis, a performance analysis of the de-blocking filter is made on an Intel Pentium 4 processor, and accordingly various optimization techniques have been studied and implemented. For some techniques, a statistical analysis of video data is done and optimization is performed according to the results obtained; for other techniques, SIMD instructions have been used to achieve the optimization. Comparison of the techniques optimized using SIMD with the reference software has shown significant speedup, thus contributing to a real-time implementation of the de-blocking filter on a general-purpose platform. / The de-blocking filter is the most time-consuming part of the H.264 High Profile decoder. The de-block filtering process specified in the H.264/AVC standard is sequential and thus not computationally optimal. In this thesis, various optimization algorithms have been studied and implemented. When compared to the JM13.2 boundary strength algorithm, the Static and ICME algorithms are quite primitive; as a result, no performance gain is achieved, and in fact there is a decrease in performance. This dismal performance is due to various reasons, prominent among them increased memory access, unrolling of the loop to 4x4 boundaries, and early detection of intra blocks. As for the optimization algorithms of the edge filtering module, both algorithms (SIMD and the fast algorithm) showed significant improvement in performance when compared to the JM13.2 edge filtering algorithm.
This improvement is mainly due to the parallel filtering operation done in the edge filtering module. Therefore, by using SSE2 instructions, a large speedup can be achieved on general-purpose processors such as Intel's, while keeping conformance with the standard.
248

Bayesian inference methods for next generation DNA sequencing

Shen, Xiaohu, active 21st century 30 September 2014 (has links)
Recently developed next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. To provide a blueprint of a target genome, next-generation sequencing systems typically employ the so-called shotgun sequencing strategy and oversample the genome with a library of relatively short overlapping reads. The order of nucleotides in the short reads is determined by processing acquired noisy signals generated by the sequencing platforms, and the overlaps between the reads are exploited to assemble the target long genome. Next-generation sequencing utilizes massively parallel array-based technology to speed up the sequencing and reduce the cost. However, the accuracy and lengths of the short reads are yet to surpass those provided by the conventional, slower, and costlier Sanger sequencing method. In this thesis, we first focus on Illumina's sequencing-by-synthesis platform, which relies on reversible terminator chemistry, and describe the acquired signal by a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on an experimental data set obtained by sequencing the phiX174 bacteriophage using Illumina's Genome Analyzer II. The results show that the ParticleCall scheme is significantly more computationally efficient than the best-performing unsupervised base calling method currently available, while achieving the same accuracy. Having addressed the problem of base calling of short reads, we turn our attention to genome assembly. Assembly of a genome from acquired short reads is a computationally daunting task even in the scenario where a reference genome exists. Errors and gaps in the reference, and perfect repeat regions in the target, further render the assembly challenging and cause inaccuracies.
We formulate reference-guided assembly as an inference problem on a bipartite graph and solve it using a message-passing algorithm. The proposed algorithm can be interpreted as the classical belief propagation scheme under a certain prior. Unlike existing state-of-the-art methods, the proposed algorithm combines the information provided by the reads without needing to know the reliability of the short reads (the so-called quality scores). The relation of the message-passing algorithm to a provably convergent power iteration scheme is discussed. Results on both simulated and experimental data demonstrate that the proposed message-passing algorithm outperforms commonly used state-of-the-art tools, and it nearly achieves the performance of a genie-aided maximum a posteriori (MAP) scheme. We then consider the reference-free genome assembly problem, i.e., de novo assembly. Various methods for de novo assembly have been proposed in the literature, all of which are very sensitive to errors in the short reads. We develop a novel error-correction method that enables performance improvements in de novo assembly. The new method relies on a suffix array structure built on the short-read data. It incorporates a hypothesis testing procedure utilizing the sum of quality information as the test statistic to improve the accuracy of overlap detection. Finally, we consider an inference problem in gene regulatory networks. Gene regulatory networks are highly complex dynamical systems comprising biomolecular components which interact with each other and, through those interactions, determine gene expression levels, i.e., the rate of gene transcription. In this thesis, a particle filter with a Markov chain Monte Carlo move step is employed for the estimation of reaction rate constants in gene regulatory networks modeled by chemical Langevin equations. Simulation studies demonstrate that the proposed technique outperforms previously considered methods while being computationally more efficient.
The dynamic behavior of gene regulatory networks averaged over a large number of cells can be modeled by ordinary differential equations. For this scenario, we compute an approximation to the Cramer-Rao lower bound on the mean-square error of estimating reaction rates and demonstrate that, when the number of unknown parameters is small, the proposed particle filter can be nearly optimal. In summary, this thesis presents a set of Bayesian inference methods for base calling and sequence assembly in next-generation DNA sequencing. Experimental studies show the advantages of the proposed algorithms over traditional methods.
249

Real Time Human Tracking in Unconstrained Environments

Gao, Hongzhi January 2011 (has links)
The tabu search particle filter is proposed in this research, based on the integration of a modified tabu search metaheuristic optimization and the genetic particle filter. Experiments with this algorithm in real-time human tracking applications in unconstrained environments show that it is more robust, accurate, and faster than a number of other existing metaheuristic filters, including the evolution particle filter, particle swarm filter, simulated annealing filter, path relink filter, and scatter search filter. Quantitative evaluation illustrates that even with only ten particles in the system, the proposed tabu search particle filter has a success rate of 93.85%, whereas the success rates of the other metaheuristic filters ranged from 17.69% to 68.46% under the same conditions. The accuracy of the proposed algorithm (with ten particles in the tracking system) is 2.69 pixels on average, which is over 3.85 times better than the second-best metaheuristic filter and 18.13 times better than the average accuracy of all other filters. The proposed algorithm is also the fastest among all metaheuristic filters tested. It achieves approximately 50 frames per second, which is 1.5 times faster than the second-fastest algorithm and nineteen times faster than the average speed of all other metaheuristic filters. Furthermore, a unique colour sequence model is developed in this research based on a degenerate form of the hidden Markov model. Quantitative evaluations based on rigid object matching experiments illustrate that its successful matching rate is 5.73 times better than that of the widely used colour histogram. In terms of speed, the proposed algorithm achieves twice the successful matching rate in about three quarters of the processing time consumed by the colour histogram model.
Overall, these results suggest that the two proposed algorithms would be useful in many applications due to their efficiency, accuracy, and ability to robustly track people and coloured objects.
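The baseline that all of these metaheuristic variants build on is the bootstrap particle filter, which can be sketched as follows. This is a generic 1-D illustration only; the thesis's tabu search variant adds a metaheuristic move step that is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic bootstrap particle filter step: propagate, re-weight by the
# observation likelihood, resample. 1-D position tracked from noisy
# position observations; an illustrative baseline, not the thesis's
# tabu search variant.
def pf_step(particles, weights, z, proc_std=1.0, obs_std=1.0):
    # Propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, proc_std, size=particles.shape)
    # Re-weight by the Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling to combat weight degeneracy
    positions = (np.arange(len(particles)) + rng.random()) / len(particles)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, len(particles) - 1)   # guard against round-off
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 5.0, 500)
weights = np.full(500, 1.0 / 500)
for z in (2.0, 2.1, 1.9):                       # observations near x = 2
    particles, weights = pf_step(particles, weights, z)
estimate = particles.mean()                     # posterior mean, near 2
```

Metaheuristic variants such as the tabu search filter insert an optimization move between the propagate and re-weight stages to push particles toward high-likelihood regions, which is what allows good tracking with as few as ten particles.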
250

Development of a New Microporous Filter Method for the Concentration of Viruses from Water

Ikner, Luisa January 2010 (has links)
Waterborne enteric viruses are transmitted via the fecal-oral route and have been isolated from various types of water, ranging from sewage to tap water. Water matrices characterized by low levels of organic material (e.g., clean surface water and tap water) contain fewer viruses than sewage and wastewater effluents. A number of methods have been developed to concentrate, elute (recover), and re-concentrate viruses from water. The goal of this dissertation is two-fold. First, an extensive review of the literature is provided in Appendix A that focuses on method development in the three aforementioned areas. A review of this detail has not been conducted in over two decades, and as such will contribute to the fields of water quality and environmental virology. Second, a novel and inexpensive method for the concentration of viruses (MS2 coliphage, poliovirus 1, echovirus 1, Coxsackievirus B5, and adenovirus 2) is presented in Appendix B. The method uses a new electropositive filter (comprised of nanoalumina fibers) for the capture of viruses from 20-L volumes of dechlorinated tap water. Average filter retention efficiencies for each of the viruses were ≥ 99%. Viruses that are adsorbed to filters must then be recovered (eluted). A number of inorganic solutions were evaluated for this purpose, the most effective being a moderately alkaline (pH 9.3) glycine-buffered polyphosphate solution. Secondary reconcentration of the eluates was performed using an optimized ultrafiltration method (Centricon Plus-70, Millipore, Billerica, MA), achieving final concentrate volumes of 3.3 ± 0.3 mL. Total method efficiencies meeting the project recovery goal of ≥ 50% were obtained for each of the tested viruses except MS2 coliphage at high input titers (45 ± 15%) and adenovirus 2 (14 ± 4%). Appendix C provides the Standard Operating Procedures, sample calculations, and detailed data for the experiments conducted.
Appendix D details the steps taken toward optimizing the secondary concentration procedure in an effort to meet the 50% recovery goal.
