231

Magellan Recorder Data Recovery Algorithms

Scott, Chuck, Nussbaum, Howard, Shaffer, Scott 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes algorithms implemented by the Magellan High Rate Processor to recover radar data corrupted by the failure of an onboard tape recorder that dropped bits. For data with error correction coding, an algorithm was developed that decodes data in the presence of bit errors and missing bits. For the SAR data, the algorithm takes advantage of properties of SAR data to locate corrupted bits and reduce their effects on downstream processing. The algorithms rely on communication approaches, including an efficient tree search and the Viterbi algorithm, to maintain the required throughput rate.
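To make the decoding idea concrete, here is a minimal sketch, assuming a rate-1/2, constraint-length-3 convolutional code and hard-decision Viterbi decoding in which a dropped (missing) bit is marked as an erasure and simply contributes no branch metric. The generator polynomials, message, and corruption pattern are illustrative assumptions, not the Magellan High Rate Processor's actual implementation.

```python
# Hypothetical sketch: Viterbi decoding of a rate-1/2, constraint-length-3
# convolutional code (generators 7 and 5 octal). Dropped bits are represented
# as None (erasures) and add nothing to the branch metric.
G = [0b111, 0b101]          # generator polynomials
K = 3                       # constraint length -> 2**(K-1) = 4 states

def encode(bits):
    """Encode a list of bits; emits 2 coded bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.extend(bin(reg & g).count("1") % 2 for g in G)
        state = reg >> 1
    return out

def branch_metric(expected, received):
    # erased positions (None) are skipped; known positions add Hamming distance
    return sum(e != r for e, r in zip(expected, received) if r is not None)

def viterbi_decode(received, n_bits):
    n_states, INF = 2 ** (K - 1), float("inf")
    metrics = [0.0] + [INF] * (n_states - 1)     # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        sym = received[2 * t:2 * t + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                expected = [bin(reg & g).count("1") % 2 for g in G]
                nxt = reg >> 1
                m = metrics[state] + branch_metric(expected, sym)
                if m < new_metrics[nxt]:
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] = None              # a dropped bit (erasure)
coded[6] ^= 1                # a flipped bit
print("sent:   ", msg)
print("decoded:", viterbi_decode(coded, len(msg)))   # expected to match msg
```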
232

A repeatable procedure to determine a representative average rail profile

Regehr, Sean 17 November 2016 (has links)
The planning and specification of rail-grinding activities using measured rail profiles normally involves a comparison between the existing and desired rail profiles within a rail segment. In current practice, a somewhat subjective approach is used to select a measured profile – usually located near the midpoint of the segment – that represents the profiles throughout the rail segment. An automated procedure was developed to calculate a representative average (mean) rail profile for a rail segment using industry-standard rail profile data. The procedure was verified by comparing the calculated average to an expected profile. The procedure was then validated by comparing the calculated average profiles of 42 in-service rail segments (10 tangent and 32 curved segments) to the corresponding subjectively chosen median rail profiles for each segment. Overall, the validation results indicated that the coordinates comprising the mean and median profiles differed by less than one percent on average. As expected, stronger agreement was observed for tangent rail segments compared to curved rail segments. Thus, the validation demonstrated that the procedure produces results comparable to current practice while improving the objectivity and repeatability of the decisions that support rail-grinding activities. / February 2017
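As an illustration of the averaging step, the sketch below computes a point-wise mean profile from several measured profiles that are assumed to be pre-aligned and resampled at a common set of lateral offsets, then reports its difference from a single subjectively chosen profile. The toy profile shape and the range-normalised difference measure are assumptions for illustration, not the thesis procedure.

```python
# Minimal sketch: point-wise mean of aligned rail profiles (column 0 = lateral
# offset in mm, column 1 = vertical coordinate in mm).
import numpy as np

def representative_average(profiles):
    """profiles: array of shape (n_profiles, n_points, 2) -> mean profile."""
    return np.asarray(profiles, dtype=float).mean(axis=0)

def percent_difference(profile_a, profile_b):
    """Mean absolute vertical difference as a percentage of the vertical range."""
    a, b = profile_a[:, 1], profile_b[:, 1]
    return 100.0 * np.mean(np.abs(a - b)) / (b.max() - b.min())

# Toy data: three noisy copies of a crude parabolic rail-head shape.
x = np.linspace(-35, 35, 71)                 # lateral offset, mm
head = -0.01 * x ** 2                        # synthetic crown, mm
rng = np.random.default_rng(0)
profiles = [np.column_stack([x, head + rng.normal(0, 0.05, x.size)])
            for _ in range(3)]

avg = representative_average(profiles)
midpoint_profile = profiles[1]               # stand-in for the subjective pick
print(f"difference vs. midpoint profile: "
      f"{percent_difference(avg, midpoint_profile):.2f}%")
```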
233

Mining and Managing Neighbor-Based Patterns in Data Streams

Yang, Di 09 January 2012 (has links)
The current data-intensive world is continuously producing huge volumes of live streaming data through various kinds of electronic devices, such as sensor networks, smart phones, GPS and RFID systems. To understand these data sources and thus better leverage them to serve human society, the demands for mining complex patterns from these high speed data streams have significantly increased in a broad range of application domains, such as financial analysis, social network analysis, credit fraud detection, and moving object monitoring. In this dissertation, we present a framework to tackle the mining and management problem for the family of neighbor-based patterns in data streams, which covers a broad range of popular pattern types, including clusters, outliers, k-nearest neighbors and others. First, we study the problem of efficiently executing single neighbor-based pattern mining queries. We propose a general optimization principle for incremental pattern maintenance in data streams, called "Predicted Views". This general optimization principle exploits the "predictability" of sliding window semantics to eliminate both the computational and storage effort needed for handling the expiration of stream objects, which usually constitutes the most expensive operations for incremental pattern maintenance. Second, the problem of multiple query optimization for neighbor-based pattern mining queries is analyzed, which aims to efficiently execute a heavy workload of neighbor-based pattern mining queries using shared execution strategies. We present an integrated pattern maintenance strategy to represent and incrementally maintain the patterns identified by queries with different query parameters within a single compact structure. Our solution realizes fully shared execution of multiple queries with arbitrary parameter settings. Third, the problem of summarization and matching for neighbor-based patterns is examined. To solve this problem, we first propose a summarization format for each pattern type. Then, we present computation strategies, which efficiently summarize the neighbor-based patterns either during or after the online pattern extraction process. Lastly, to compare patterns extracted over different time horizons of the stream, we design an efficient matching mechanism to identify similar patterns in the stream history for any given pattern of interest to an analyst. Our comprehensive experimental studies, using both synthetic and real data from the domains of stock trades and moving object monitoring, demonstrate the superiority of our proposed strategies over alternate methods in both effectiveness and efficiency.
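The "Predicted Views" principle is easiest to see on the simplest member of the neighbor-based family, distance-based outliers: if each object's neighbor counts are bucketed by the slide at which those neighbors will expire, then window expiration never forces a fresh range query. The sketch below is a simplified illustration of that bookkeeping; the window, radius, and neighbor-count parameters, and the class and function names, are all assumptions, not the dissertation's framework.

```python
# Simplified sketch of expiration-free neighbor maintenance over a slide-based
# sliding window (distance-based outliers: too few neighbors => outlier).
from collections import defaultdict
import math

WINDOW_SLIDES = 4        # the window spans the 4 most recent slides
RADIUS = 1.0             # neighborhood radius
MIN_NEIGHBORS = 2        # fewer live neighbors than this => outlier

class Obj:
    def __init__(self, oid, point, arrival_slide):
        self.oid, self.point = oid, point
        self.expires = arrival_slide + WINDOW_SLIDES
        self.counts = defaultdict(int)   # neighbor counts keyed by their expiry slide

    def live_neighbors(self, current_slide):
        return sum(c for exp, c in self.counts.items() if exp > current_slide)

def process_slide(window, new_points, slide):
    # expired objects are dropped; nobody's counts need recomputing
    window[:] = [o for o in window if o.expires > slide]
    for i, p in enumerate(new_points):
        obj = Obj((slide, i), p, slide)
        for other in window:
            if math.dist(p, other.point) <= RADIUS:
                obj.counts[other.expires] += 1
                other.counts[obj.expires] += 1
        window.append(obj)
    return [o.oid for o in window if o.live_neighbors(slide) < MIN_NEIGHBORS]

window = []
stream = [[(0.0, 0.0), (0.2, 0.1)], [(0.1, -0.1), (5.0, 5.0)], [(0.0, 0.2)]]
for s, pts in enumerate(stream):
    print(f"slide {s}: outliers = {process_slide(window, pts, s)}")
```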
234

Change detection for activity recognition

Bashir, Sulaimon A. January 2017 (has links)
Activity recognition is concerned with identifying the physical state of a user at a particular point in time. The activity recognition task requires training a classification algorithm on processed sensor data from a representative population of users. The accuracy of the generated model often degrades when classifying new instances because of non-stationary sensor data and variations in user characteristics. Thus, there is a need to adapt the classification model to new user characteristics. However, existing approaches to model adaptation in activity recognition are blind: they continuously adapt a classification model at a regular interval without specific and precise detection of the indicator of the model's degrading performance. This approach can waste the system resources dedicated to continuous adaptation. This thesis addresses the problem of detecting changes in the accuracy of an activity recognition model. The thesis develops a classifier for activity recognition. The classifier uses three statistical summaries that can be generated from any dataset for similarity-based classification of new samples. The weighted ensemble combination of the classification decisions from each statistical summary results in better performance than three benchmarked classification algorithms. The thesis also presents change detection approaches that can detect changes in the accuracy of the underlying recognition model without having access to the ground-truth label of each activity being recognised. The first approach, called 'UDetect', computes change statistics from a window of classified data and employs a statistical process control method to detect variations between the classified data and the reference data of a class. Evaluation of the approach indicates consistent detection that correlates with the error rate of the model. The second approach is a distance-based change detection technique that relies on the developed statistical summaries for comparing newly classified samples and detects any drift from the original class of the activity. The implemented approach uses a distance function and a threshold parameter to detect the accuracy change in the classifier that is classifying new instances. Evaluation of the approach yields above 90% detection accuracy. Finally, a layered framework for activity recognition is proposed to make model adaptation in activity recognition informed, using the techniques developed in this thesis.
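A minimal sketch of the second, distance-based detector is given below, under the assumption that the stored summary for each class is a mean feature vector and that drift is flagged when the average distance of a window of newly classified samples from their predicted class's summary exceeds a threshold. The window size, distance function, threshold, and class names are illustrative, not the thesis's tuned values.

```python
# Sketch: distance-based change detection on classified (unlabelled) samples.
import numpy as np

class DistanceChangeDetector:
    def __init__(self, class_means, threshold=1.0, window=50):
        self.class_means = {c: np.asarray(m, float) for c, m in class_means.items()}
        self.threshold, self.window = threshold, window
        self.buffer = []                      # (predicted_class, feature_vector)

    def add(self, predicted_class, features):
        self.buffer.append((predicted_class, np.asarray(features, float)))
        if len(self.buffer) < self.window:
            return False
        # mean distance of the windowed samples to their predicted class summary
        dists = [np.linalg.norm(x - self.class_means[c]) for c, x in self.buffer]
        self.buffer.clear()
        return float(np.mean(dists)) > self.threshold    # True => drift suspected

# toy usage with assumed summaries for two activities
detector = DistanceChangeDetector({"walk": [1.0, 0.5], "sit": [0.1, 0.0]},
                                  threshold=1.0, window=5)
rng = np.random.default_rng(1)
for _ in range(5):          # samples far from the "walk" summary
    flagged = detector.add("walk", rng.normal([3.0, 3.0], 0.1))
print("drift detected:", flagged)
```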
235

Multiple sequence alignment pomocí genetických algoritmů / Multiple sequence alignment using genetic algorithms

Pátek, Zdeněk January 2012 (has links)
Title: Multiple sequence alignment using genetic algorithms Author: Zdeněk Pátek Department: Department of Software and Computer Science Education Supervisor: RNDr. František Mráz, CSc. Abstract: The thesis addresses the problem of multiple sequence alignment (MSA). It contains the specification of the proposed method MSAMS, which finds motifs in biological sequences, splits the sequences into blocks using the motifs, solves MSA on the blocks, and finally assembles the global alignment from the aligned blocks and motifs. Motif search and MSA are both solved using genetic algorithms. The thesis describes the implementation of the method, the configuration of its settings, benchmarking on the BAliBASE database, and a comparison to the ClustalW program. Experimental results showed that MSAMS can discover better alignments than ClustalW. Keywords: multiple sequence alignment, motif finding, genetic algorithms, ClustalW
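For readers unfamiliar with the general approach, the toy genetic algorithm below evolves one motif start position per sequence and scores an individual by the column-wise agreement of the selected substrings with their consensus. It only illustrates the encode/select/crossover/mutate loop; the sequences, parameters, and fitness function are invented for the example and are unrelated to MSAMS's actual configuration.

```python
# Toy GA motif search: an individual is a list of start positions, one per sequence.
import random

SEQS = ["ACGTAGGTACCA", "TTAGGTACGGAA", "CCCAGGTACGTT"]   # share the motif AGGTAC
MOTIF_LEN, POP, GENS = 6, 30, 60
random.seed(0)

def fitness(ind):
    cols = zip(*(s[p:p + MOTIF_LEN] for s, p in zip(SEQS, ind)))
    return sum(max(col.count(b) for b in "ACGT") for col in cols)

def random_ind():
    return [random.randrange(len(s) - MOTIF_LEN + 1) for s in SEQS]

def mutate(ind):
    i = random.randrange(len(ind))
    ind[i] = random.randrange(len(SEQS[i]) - MOTIF_LEN + 1)

population = [random_ind() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                 # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(SEQS))        # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best start positions:", best)
print("motif instances:", [s[p:p + MOTIF_LEN] for s, p in zip(SEQS, best)])
```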
236

On the Classification of Groups Generated by Automata with 4 States over a 2-Letter Alphabet

Caponi, Louis 24 March 2014 (has links)
The class of groups generated by automata has been a source of many counterexamples in group theory. At the same time it is connected to other branches of mathematics, such as analysis, holomorphic dynamics, combinatorics, etc. A question that naturally arises is how to classify these groups. The task of a complete classification and understanding seems too ambitious at the moment, but it is reasonable to concentrate on smaller subclasses of this class. One approach is to consider groups generated by small automata: automata with k states over a d-letter alphabet (so-called (k,d)-automata) with small values of k and d. Certain steps in this direction have already been made: all groups generated by (2,2)-automata have been classified, and groups generated by (3,2)-automata have been studied. In this work we study the class of groups generated by (4,2)-automata. More specifically, we partition all such automata into equivalence classes up to symmetry and minimal symmetry (symmetric and minimally symmetric automata naturally generate isomorphic groups) and completely classify all finite groups generated by automata in this class. We also list all classes generating abelian groups. Another important result of the project is the development of a database of (4,2)-automata and computational routines that represent a new effective tool for the search for (4,2)-automata generating groups with specific properties, which will hopefully lead to finding counterexamples to certain conjectures.
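As a concrete, much smaller example of how an automaton generates a group, the sketch below implements the classical binary adding machine: a 2-state automaton over a 2-letter alphabet whose active state generates the infinite cyclic group, and whose action on a word read least-significant bit first is addition of one. This is standard background for illustration only and is not taken from the thesis's (4,2) classification.

```python
# The binary adding machine: states "a" (active) and "e" (identity).
# transition[state][letter] -> next state; output[state][letter] -> output letter.
TRANSITION = {"a": {0: "e", 1: "a"}, "e": {0: "e", 1: "e"}}
OUTPUT     = {"a": {0: 1,   1: 0},   "e": {0: 0,   1: 1}}

def act(state, word):
    """Apply the automaton, started in `state`, to a tuple of letters."""
    out = []
    for letter in word:
        out.append(OUTPUT[state][letter])
        state = TRANSITION[state][letter]
    return tuple(out)

def to_int(word):                          # least-significant bit first
    return sum(bit << i for i, bit in enumerate(word))

w = (1, 1, 0, 1)                           # 1 + 2 + 8 = 11
print(to_int(w), "->", to_int(act("a", w)))   # adds one: 11 -> 12
```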
237

DEVELOPMENT OF AN ALGORITHM TO GUIDE A MULTI-POLE DIAGNOSTIC CATHETER FOR IDENTIFYING THE LOCATION OF ATRIAL FIBRILLATION SOURCES

Unknown Date (has links)
Atrial Fibrillation (AF) is a debilitating heart rhythm disorder affecting over 2.7 million people in the US and over 30 million people worldwide annually. It is strongly associated with stroke and several other risk factors, resulting in increased mortality and morbidity. Currently, the non-pharmacological therapy used to control AF is catheter ablation, in which the tissue surrounding the pulmonary veins (PVs) is cauterized (the PV isolation, or PVI, procedure) to block the ectopic triggers originating from the PVs from entering the atrium. However, the success rate of PVI, with or without other anatomy-based lesions, is only 50%-60%. A major reason for the suboptimal success rate is the failure to eliminate patient-specific non-PV sources present in the left atrium (LA), namely reentry sources (a.k.a. rotor sources) and focal sources (a.k.a. point sources). Several animal and human studies have shown that locating and ablating these sources significantly improves the long-term success rate of the ablation procedure. However, current technologies to locate these sources have limitations in resolution, additional/special hardware requirements, etc. In this dissertation, the goal is to develop an efficient algorithm to locate AF reentry and focal sources using electrograms recorded from a conventionally used high-resolution multi-pole diagnostic catheter. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
238

Neuro/fuzzy speed control of induction motors

Khiyo, Sargon, University of Western Sydney, College of Science, Technology and Environment, School of Engineering and Industrial Design January 2002 (has links)
The thesis involved the design, implementation and testing of a second-order neuro-fuzzy controller for the speed control of an AC induction motor, and a comparison of the neuro-fuzzy controller's performance with that of the PI algorithm. It was found experimentally that the operating temperature of the AC induction motor affected the ability of the PI controller to maintain the set speed. The linear PI algorithm approximation was observed to produce transient speed responses when sudden changes in load occurred. The neuro-fuzzy design was found to be quite involved in the initial design stages. However, after the initial design, it was a simple matter of fine-tuning the algorithm to optimize performance for any parameter variations of the motor due to temperature or to sudden changes in load. The neuro-fuzzy algorithm can be developed using one of two methods. The first method uses sensorless control by detailed modeling of the induction motor, where all varying parameters of the motor are modeled mathematically. This involves using differential equations and representing them in the form of system response block diagrams. When the overall plant transfer function is known, a fuzzy PI algorithm can be used to control the processes of the plant. The second method involves modeling the overall output response as a second-order system. Raw data can then be generated in a text file format, providing control data according to the modeled second-order system. Using the raw data, development software such as FuzzyTECH is used to perform supervised learning, so as to produce the knowledge base for the overall system. This method was used in this thesis and compared to the conventional PI algorithm. The neuro-fuzzy algorithm implemented on a PLC was found to provide better performance than the PI algorithm implemented on the same PLC. It also provided added flexibility for further fine-tuning and avoided the need for rigorous mathematical manipulation of linear equations. / Master of Engineering (Hons)
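As a point of reference for the comparison described above, here is a minimal discrete PI speed-control loop closed around a crude first-order motor model; the neuro-fuzzy controller would replace only the control-law block. The gains, sampling period, and motor parameters are illustrative assumptions, not values from the thesis or its PLC implementation.

```python
# Sketch: discrete PI speed control of a first-order motor model.
DT = 0.01                  # control period, s
KP, KI = 0.8, 4.0          # assumed proportional and integral gains

def pi_controller():
    integral = 0.0
    def control(setpoint, measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * DT
        return KP * error + KI * integral        # control effort (torque demand)
    return control

def motor_model(speed, torque, load=0.2, inertia=0.05, friction=0.1):
    # crude first-order dynamics: J*dw/dt = torque - friction*w - load
    return speed + DT * (torque - friction * speed - load) / inertia

controller = pi_controller()
speed, setpoint = 0.0, 100.0                 # rad/s
for step in range(600):                      # 6 s of simulated time
    torque = controller(setpoint, speed)
    speed = motor_model(speed, torque)
    if step == 300:
        setpoint = 60.0                      # sudden setpoint change mid-run
print(f"final speed: {speed:.1f} rad/s (setpoint {setpoint})")
```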
239

Techniques in Secure Chaos Communication.

Lau, Yuu Seng, lauje@rocketmail.com January 2006 (has links)
In today's climate of increased criminal attacks on the privacy of personal or confidential data over digital communication systems, a more secure physical communication link is required. Chaotic signals, which have bifurcation behavior (depending on some initial condition), can readily be exploited to enhance the security of communication systems. A chaotic generator produces disordered sequences that provide very good auto- and cross-correlation properties similar to those of random white noise. This is an important feature in multiple access environments. These sequences are used to scramble data in spread spectrum systems as they can produce low co-channel interference, hence improving system capacity and performance. The chaotic signal can be created from only a single mathematical relationship and is neither restricted in length nor repetitive/cyclic. On the other hand, with the progress in digital signal processing and digital hardware, there has been increased interest in using adaptive algorithms to improve the performance of digital systems. Adaptive algorithms provide a system with the ability to self-adjust its coefficients according to the signal condition, and can be used with linear or non-linear systems; hence, they might find application in chaos communication. Much of the literature has proposed the use of the LMS adaptive algorithm in the communication arena for a variety of applications such as (but not limited to) channel estimation, channel equalization, demodulation, de-noising, and beamforming. In this thesis, we conducted a study on the application of chaos theory in communication systems as well as the application of adaptive algorithms in chaos communication. The first part of the thesis tackled the application of chaos theory in communication. We examined different types of communication techniques utilizing chaos theory. In particular, we considered chaos shift keying (CSK) and a modified kind of logistic map. Then, we applied space-time processing and an eigen-beamforming technique to enhance the performance of chaos communication. Following on, we conducted a study on CSK and Chaos-CDMA in conjunction with multi-carrier modulation (MCM) techniques such as OFDM (FFT/IFFT) and wavelet-OFDM. In the second part of the thesis, we applied adaptivity to chaos communication. Initially, we presented a study of multi-user detection utilizing an adaptive algorithm in a chaotic CDMA multi-user environment, followed by a study of adaptive beamforming and modified weight-vector adaptive beamforming over CSK communication. Finally, a study of modified time-varying adaptive filtering is presented and a conventional adaptive filtering technique is applied in a chaotic signal environment. Twelve papers were published during the PhD candidature, including two journal papers and ten refereed conference papers.
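The correlation property the abstract leans on can be demonstrated in a few lines: logistic-map sequences started from slightly different initial conditions are essentially uncorrelated, which is what makes them candidates for spreading or scrambling codes. The map parameter, sequence length, and the mapping to +/-1 chips below are illustrative choices, not the modified map studied in the thesis.

```python
# Sketch: chaotic +/-1 chip sequences from the logistic map and their correlation.
import numpy as np

def logistic_chips(x0, n, r=3.99, skip=100):
    """Return n chips (+1/-1) from the logistic map x <- r*x*(1-x)."""
    x, chips = x0, []
    for i in range(n + skip):
        x = r * x * (1.0 - x)
        if i >= skip:                        # discard the initial transient
            chips.append(1.0 if x >= 0.5 else -1.0)
    return np.array(chips)

def normalized_correlation(a, b):
    return float(np.dot(a, b)) / len(a)

n = 4096
c1 = logistic_chips(0.123456, n)
c2 = logistic_chips(0.123457, n)             # tiny change in the initial condition
print("auto-correlation (zero lag):", normalized_correlation(c1, c1))        # 1.0
print("cross-correlation:          ", round(normalized_correlation(c1, c2), 3))
```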
240

Genetic detection with application of time series analysis

呂素慧 Unknown Date (has links)
This article investigates the detection and identification problems for regime changes in non-linear time series processes. We apply the concept of a genetic algorithm and the AIC criterion to test for regime changes. This approach differs from traditional detection methods. According to our statistical decision procedure, the moving-average mean and the genetic detection for the underlying time series are considered to decide the change points. Finally, an empirical application to the detection and identification of change points for the Taiwan Business Cycle is illustrated.
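A compact sketch of the AIC side of this procedure follows: a single-mean model is compared against a two-regime model split at a candidate change point, and the change point is accepted only when it lowers the AIC. A plain scan over candidate points stands in for the genetic search, and the piecewise-constant-mean model is an illustrative simplification rather than the article's actual specification.

```python
# Sketch: AIC comparison of a no-change model vs. a single change point in the mean.
import numpy as np

def aic_gaussian(residuals, n_params):
    n = len(residuals)
    rss = float(np.sum(residuals ** 2))
    return 2 * n_params + n * np.log(rss / n)

def detect_change_point(y, min_seg=5):
    y = np.asarray(y, float)
    no_change = aic_gaussian(y - y.mean(), n_params=2)      # mean + variance
    best_aic, best_t = no_change, None
    for t in range(min_seg, len(y) - min_seg):
        resid = np.concatenate([y[:t] - y[:t].mean(), y[t:] - y[t:].mean()])
        aic = aic_gaussian(resid, n_params=4)   # two means, variance, change point
        if aic < best_aic:
            best_aic, best_t = aic, t
    return best_t, best_aic, no_change

rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(0.0, 1.0, 60),   # regime 1
                         rng.normal(2.5, 1.0, 40)])  # regime 2 begins at t = 60
t, aic_change, aic_none = detect_change_point(series)
print(f"estimated change point: {t} (AIC {aic_change:.1f} vs no-change {aic_none:.1f})")
```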
