231 |
Intersection of algebraic plane curves: some results on the (monic) integer transfinite diameter. Hilmar, Jan. January 2008 (has links)
Part I discusses the problem of determining the set of intersection points, with corresponding multiplicities, of two algebraic plane curves. We derive an algorithm based on the Euclidean Algorithm for polynomials and show how to use it to find the intersection points of two given curves. We also show that an easy proof of Bézout’s Theorem follows. We then discuss how, for curves with rational coefficients, this algorithm can be modified to find the intersection points with coordinates expressed in terms of algebraic extensions of the rational numbers. Part II deals with the problem of determining the (monic) integer transfinite diameter of a given real interval. We show how this problem relates to the problem of determining the structure of the spectrum of normalised leading coefficients of polynomials with integer coefficients and all roots in the given interval. We then find dense regions of this spectrum for a number of intervals and discuss algorithms for finding discrete subsets of the spectrum for the interval [0,1]. This leads to an improvement in the known upper bound for the integer transfinite diameter. Finally, we discuss the connection between the infimum of the spectrum and the monic integer transfinite diameter.
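The Euclidean Algorithm for polynomials that underpins Part I can be sketched briefly. The following is an illustrative univariate version (not the thesis's own bivariate algorithm): the monic GCD of two polynomials over the rationals, whose roots are exactly the common roots of the inputs.

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Divide polynomial a by b (coefficient lists, highest degree first);
    return (quotient, remainder)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    n, m = len(a), len(b)
    if n < m:
        return [], a
    q = [Fraction(0)] * (n - m + 1)
    for i in range(n - m + 1):
        q[i] = a[i] / b[0]
        for j in range(m):
            a[i + j] -= q[i] * b[j]
    return q, a[n - m + 1:]

def poly_gcd(a, b):
    """Monic greatest common divisor via the Euclidean Algorithm."""
    while b:
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:   # strip leading zeros of the remainder
            r = r[1:]
        a, b = b, r
    return [c / Fraction(a[0]) for c in a]

# (x-1)(x-2) and (x-1)(x-3) share the single root x = 1:
print(poly_gcd([1, -3, 2], [1, -4, 3]))  # [Fraction(1, 1), Fraction(-1, 1)]
```

Exact rational arithmetic matters here: the remainder sequence over floating point degrades quickly, which is one reason the thesis works with algebraic extensions of the rationals.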
|
232 |
Magellan Recorder Data Recovery Algorithms. Scott, Chuck; Nussbaum, Howard; Shaffer, Scott. October 1993 (has links)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes algorithms implemented by the Magellan High Rate Processor to recover radar data corrupted by the failure of an onboard tape recorder that dropped bits. For data with error correction coding, an algorithm was developed that decodes data in the presence of bit errors and missing bits. For the SAR data, the algorithm takes advantage of properties of SAR data to locate corrupted bits and reduce their effects on downstream processing. The algorithms rely on communications techniques, including an efficient tree search and the Viterbi algorithm, to maintain the required throughput rate.
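The Viterbi algorithm mentioned above can be illustrated with a minimal log-domain implementation. This is a generic sketch, not the Magellan processor's actual decoder; the transition and emission probabilities below are hypothetical and model a channel whose bits rarely switch and are occasionally read incorrectly.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for `obs`,
    working in the log domain to avoid numerical underflow."""
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical channel: bits stay the same with prob. 0.9 and are read
# correctly 90% of the time, so an isolated flipped bit gets corrected.
states = ('0', '1')
trans = {'0': {'0': 0.9, '1': 0.1}, '1': {'0': 0.1, '1': 0.9}}
emit = {'0': {'0': 0.9, '1': 0.1}, '1': {'0': 0.1, '1': 0.9}}
start = {'0': 0.5, '1': 0.5}
print(''.join(viterbi('000100', states, start, trans, emit)))  # 000000
```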
|
233 |
A repeatable procedure to determine a representative average rail profileRegehr, Sean 17 November 2016 (has links)
The planning and specification of rail grinding activities using measured rail profiles normally involves a comparison between the existing and desired rail profiles within a rail segment. In current practice, a somewhat subjective approach is used to select a measured profile – usually located near the midpoint of the segment – that represents the profiles throughout the rail segment. An automated procedure was developed to calculate a representative average (mean) rail profile for a rail segment using industry-standard rail profile data. The procedure was verified by comparing the calculated average to an expected profile. The procedure was then validated by comparing the calculated average profiles of 42 in-service rail segments (10 tangents and 32 curved segments) to the corresponding subjectively chosen median rail profiles for each segment. Overall, the validation results indicated that the coordinates comprising the mean and median profiles differed by less than one percent on average. As expected, stronger agreement was observed for tangent rail segments compared to curved rail segments. Thus, the validation demonstrated that the procedure produces comparable results to current practice while improving the objectivity and repeatability of the decisions that support rail grinding activities. / February 2017
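The core of such a procedure can be sketched as a pointwise mean over aligned profile coordinates. This is an illustrative simplification; the thesis's procedure works with industry-standard rail profile data and must handle profile alignment, which is assumed done here.

```python
def average_profile(profiles):
    """Pointwise mean of aligned rail profiles.
    Each profile is a list of (x, y) points; all profiles are assumed
    to be pre-aligned and sampled at the same number of points."""
    n = len(profiles)
    length = len(profiles[0])
    return [
        (sum(p[i][0] for p in profiles) / n, sum(p[i][1] for p in profiles) / n)
        for i in range(length)
    ]

# Two hypothetical three-point profiles measured along a segment:
first = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
second = [(0.0, 0.2), (1.0, 1.8), (2.0, 1.2)]
print(average_profile([first, second]))
# [(0.0, 0.1), (1.0, 1.9), (2.0, 1.1)]
```

Unlike picking the midpoint profile, every measured profile in the segment contributes to the result, which is what makes the outcome repeatable.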
|
234 |
Mining and Managing Neighbor-Based Patterns in Data StreamsYang, Di 09 January 2012 (has links)
The current data-intensive world is continuously producing huge volumes of live streaming data through various kinds of electronic devices, such as sensor networks, smart phones, GPS and RFID systems. To understand these data sources and thus better leverage them to serve human society, the demands for mining complex patterns from these high-speed data streams have significantly increased in a broad range of application domains, such as financial analysis, social network analysis, credit fraud detection, and moving object monitoring. In this dissertation, we present a framework to tackle the mining and management problem for the family of neighbor-based patterns in data streams, which covers a broad range of popular pattern types, including clusters, outliers, k-nearest neighbors and others. First, we study the problem of efficiently executing single neighbor-based pattern mining queries. We propose a general optimization principle for incremental pattern maintenance in data streams, called "Predicted Views". This general optimization principle exploits the "predictability" of sliding window semantics to eliminate both the computational and storage effort needed for handling the expiration of stream objects, which is usually the most expensive operation in incremental pattern maintenance. Second, the problem of multiple query optimization for neighbor-based pattern mining queries is analyzed, which aims to efficiently execute a heavy workload of neighbor-based pattern mining queries using shared execution strategies. We present an integrated pattern maintenance strategy to represent and incrementally maintain the patterns identified by queries with different query parameters within a single compact structure. Our solution realizes fully shared execution of multiple queries with arbitrary parameter settings. Third, the problem of summarization and matching for neighbor-based patterns is examined. 
To solve this problem, we first propose a summarization format for each pattern type. Then, we present computation strategies that efficiently summarize the neighbor-based patterns either during or after the online pattern extraction process. Lastly, to compare patterns extracted over different time horizons of the stream, we design an efficient matching mechanism to identify similar patterns in the stream history for any given pattern of interest to an analyst. Our comprehensive experimental studies, using both synthetic and real data from the domains of stock trading and moving object monitoring, demonstrate the superiority of our proposed strategies over alternative methods in both effectiveness and efficiency.
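As a toy illustration of a neighbor-based pattern over sliding windows, the following flags a distance-based outlier when an arriving point has too few neighbors within a radius. This is the naive per-window re-count, not the dissertation's "Predicted Views" optimization; the parameters are hypothetical.

```python
from collections import deque

def neighbor_outliers(stream, window_size, radius, min_neighbors):
    """For each arriving value, flag it as an outlier if fewer than
    `min_neighbors` other points in the current window lie within `radius`."""
    window = deque(maxlen=window_size)  # oldest points expire automatically
    flags = []
    for x in stream:
        window.append(x)
        neighbors = sum(1 for q in window if abs(q - x) <= radius) - 1  # exclude x itself
        flags.append(neighbors < min_neighbors)
    return flags

readings = [1.0, 1.1, 1.2, 9.0, 1.3]
print(neighbor_outliers(readings, window_size=5, radius=0.5, min_neighbors=1))
# [True, False, False, True, False]  (the first point has no neighbors yet)
```

Each new arrival costs a scan of the whole window here; exploiting the predictability of which points will expire, as the dissertation does, is what removes that per-arrival cost.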
|
235 |
Change detection for activity recognitionBashir, Sulaimon A. January 2017 (has links)
Activity recognition is concerned with identifying the physical state of a user at a particular point in time. The activity recognition task requires training a classification algorithm on processed sensor data from a representative population of users. The accuracy of the generated model often degrades when classifying new instances due to non-stationary sensor data and variations in user characteristics. Thus, there is a need to adapt the classification model to new user characteristics. However, existing approaches to model adaptation in activity recognition are blind: they continuously adapt a classification model at a regular interval without specific and precise detection of the indicators of the model's degrading performance. This approach can waste the system resources dedicated to continuous adaptation. This thesis addresses the problem of detecting changes in the accuracy of an activity recognition model. The thesis develops a classifier for activity recognition. The classifier uses three statistical summaries, which can be generated from any dataset, for similarity-based classification of new samples. A weighted ensemble combination of the classification decisions from each statistical summary results in better performance than three existing benchmark classification algorithms. The thesis also presents change detection approaches that can detect changes in the accuracy of the underlying recognition model without access to the ground-truth label of each activity being recognised. The first approach, called `UDetect', computes change statistics from a window of classified data and employs a statistical process control method to detect variations between the classified data and the reference data of a class. Evaluation of the approach indicates consistent detection that correlates with the error rate of the model. 
The second approach is a distance-based change detection technique that relies on the developed statistical summaries to compare newly classified samples and detect any drift from the original class of the activity. The implemented approach uses a distance function and a threshold parameter to detect accuracy changes in the classifier as it classifies new instances. Evaluation of the approach yields above 90% detection accuracy. Finally, a layered framework for activity recognition is proposed to make model adaptation in activity recognition informed, using the techniques developed in this thesis.
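The distance-plus-threshold idea can be sketched by comparing class centroids. This is a hypothetical simplification of such an approach, using Euclidean distance on assumed two-dimensional feature vectors; the thesis's summaries and distance function are richer.

```python
def centroid(samples):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[d] for s in samples) / n for d in range(len(samples[0]))]

def drift_detected(reference, new_window, threshold):
    """Report drift when the centroid of newly classified samples has
    moved more than `threshold` away from the reference class centroid."""
    ref_c, new_c = centroid(reference), centroid(new_window)
    dist = sum((a - b) ** 2 for a, b in zip(ref_c, new_c)) ** 0.5
    return dist > threshold

# Hypothetical feature vectors for a "walking" class:
walking_ref = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0)]
print(drift_detected(walking_ref, [(0.1, 0.1), (0.2, 0.2)], threshold=1.0))  # False
print(drift_detected(walking_ref, [(5.0, 5.0), (5.2, 4.8)], threshold=1.0))  # True
```

The appeal of this style of detector is exactly what the abstract highlights: it needs no ground-truth labels for the new samples, only their classified feature vectors.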
|
236 |
Multiple sequence alignment pomocí genetických algoritmů / Multiple sequence alignment using genetic algorithms. Pátek, Zdeněk. January 2012 (has links)
Title: Multiple sequence alignment using genetic algorithms Author: Zdeněk Pátek Department: Department of Software and Computer Science Education Supervisor: RNDr. František Mráz, CSc. Abstract: The thesis addresses the problem of multiple sequence alignment (MSA). It contains the specification of the proposed method MSAMS, which makes it possible to find motifs in biological sequences, to split sequences into blocks using the motifs, to solve MSA on the blocks, and finally to assemble the global alignment from the aligned blocks and motifs. Motif search and MSA are both solved using genetic algorithms. The thesis describes the implementation of the method, the configuration of its settings, benchmarking on the BAliBASE database, and a comparison to the ClustalW program. Experimental results showed that MSAMS can discover better alignments than ClustalW. Keywords: multiple sequence alignment, motif finding, genetic algorithms, ClustalW
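The genetic-algorithm machinery that MSAMS builds on can be illustrated on a toy fitness function (maximizing the number of ones in a bitstring). This is a generic sketch with assumed operators (elitist truncation selection, one-point crossover, bit-flip mutation), not the thesis's actual encoding of motifs or alignments.

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, generations=100,
                      mutation_rate=0.02, seed=0):
    """Evolve bitstrings toward higher fitness with elitist truncation
    selection, one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm(sum, length=20)  # "OneMax": fitness = number of ones
print(sum(best))
```

In MSAMS the same loop structure applies, but an individual encodes a candidate alignment (or motif placement) and the fitness is an alignment score rather than a bit count.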
|
237 |
On the Classification of Groups Generated by Automata with 4 States over a 2-Letter Alphabet. Caponi, Louis. 24 March 2014 (has links)
The class of groups generated by automata has been a source of many counterexamples in group theory. At the same time, it is connected to other branches of mathematics, such as analysis, holomorphic dynamics, and combinatorics. A question that naturally arises is how to classify these groups. The task of a complete classification and understanding seems too ambitious at the moment, but it is reasonable to concentrate on smaller subclasses of this class. One approach is to consider groups generated by small automata: automata with k states over a d-letter alphabet (so-called (k,d)-automata) for small values of k and d. Certain steps in this direction have already been made: all groups generated by (2,2)-automata have been classified, and groups generated by (3,2)-automata have been studied. In this work we study the class of groups generated by (4,2)-automata. More specifically, we partition all such automata into equivalence classes up to symmetry and minimal symmetry (symmetric and minimally symmetric automata naturally generate isomorphic groups) and completely classify all finite groups generated by automata in this class. We also list all classes generating abelian groups. Another important result of the project is the development of a database of (4,2)-automata and computational routines that represent a new, effective tool for the search for (4,2)-automata generating groups with specific properties, which will hopefully lead to finding counterexamples to certain conjectures.
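The objects being classified can be made concrete with the standard "adding machine" (binary odometer), a 2-state automaton over a 2-letter alphabet whose generator acts on binary words as addition of 1, least significant bit first. This is a classical textbook example, not drawn from the thesis's (4,2) classification.

```python
def apply_state(automaton, state, word):
    """Act on a word letter by letter: at each step the current state
    emits an output letter and moves to its next state."""
    out = []
    for letter in word:
        state, output = automaton[state][letter]
        out.append(output)
    return out

# The adding machine: state 'a' adds 1 with carry, state 'e' is the identity.
adder = {
    'a': {0: ('e', 1), 1: ('a', 0)},  # 0 -> 1, done; 1 -> 0, carry on
    'e': {0: ('e', 0), 1: ('e', 1)},  # copy the rest unchanged
}

print(apply_state(adder, 'a', [0, 0, 0]))  # [1, 0, 0]  (0 + 1 = 1)
print(apply_state(adder, 'a', [1, 1, 0]))  # [0, 0, 1]  (3 + 1 = 4)
```

Iterating the generator 'a' never returns a word to itself in finitely many steps for all lengths at once, which is why this (2,2)-automaton generates the infinite cyclic group; the thesis asks the analogous questions for the far larger (4,2) class.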
|
238 |
DEVELOPMENT OF AN ALGORITHM TO GUIDE A MULTI-POLE DIAGNOSTIC CATHETER FOR IDENTIFYING THE LOCATION OF ATRIAL FIBRILLATION SOURCES. Unknown Date (has links)
Atrial fibrillation (AF) is a debilitating heart rhythm disorder affecting over 2.7 million people in the US and over 30 million people worldwide annually. It is strongly associated with stroke and several other risk factors, resulting in increased mortality and morbidity. Currently, the non-pharmacological therapy used to control AF is catheter ablation, in which the tissue surrounding the pulmonary veins (PVs) is cauterized (the PV isolation, or PVI, procedure) to block ectopic triggers originating in the PVs from entering the atrium. However, the success rate of PVI, with or without other anatomy-based lesions, is only 50%-60%.
A major reason for the suboptimal success rate is the failure to eliminate patient-specific non-PV sources present in the left atrium (LA), namely reentry sources (a.k.a. rotor sources) and focal sources (a.k.a. point sources). Several animal and human studies have shown that locating and ablating these sources significantly improves the long-term success rate of the ablation procedure. However, current technologies to locate these sources have limitations in resolution, additional or special hardware requirements, etc. In this dissertation, the goal is to develop an efficient algorithm to locate AF reentry and focal sources using electrograms recorded from a conventionally used high-resolution multi-pole diagnostic catheter. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
|
239 |
Neuro/fuzzy speed control of induction motors. Khiyo, Sargon. University of Western Sydney, College of Science, Technology and Environment, School of Engineering and Industrial Design. January 2002 (has links)
The thesis involved the design, implementation and testing of a second order neuro-fuzzy controller for the speed control of an AC induction motor, and a comparison of the neuro-fuzzy controller's performance with that of the PI algorithm. It was found experimentally that the operating temperature of the AC induction motor affected the ability of the PI controller to maintain the set speed. The linear PI algorithm approximation was observed to produce transient speed responses when sudden changes in load occurred. The neuro-fuzzy design was found to be quite involved in the initial design stages. However, after the initial design, it was a simple matter of fine-tuning the algorithm to optimise performance for any parameter variations of the motor due to temperature or to sudden changes in load. The neuro-fuzzy algorithm can be developed using one of two methods. The first method utilises sensorless control by detailed modelling of the induction motor, in which all varying parameters of the motor are modelled mathematically. This involves using differential equations and representing them in the form of system response block diagrams. When the overall plant transfer function is known, a fuzzy PI algorithm can be utilised to control the processes of the plant. The second method involves modelling the overall output response as a second order system. Raw data can then be generated in a text file format, providing control data according to the modelled second order system. Using the raw data, development software such as FuzzyTECH is utilised to perform supervised learning so as to produce the knowledge base for the overall system. This method was utilised in this thesis and compared to the conventional PI algorithm. The neuro-fuzzy algorithm implemented on a PLC was found to provide better performance than the PI algorithm implemented on the same PLC. 
It also provided added flexibility for further fine-tuning and avoided the need for rigorous mathematical manipulation of linear equations. / Master of Engineering (Hons)
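For reference, the PI algorithm used as the baseline can be sketched in discrete form. This is a generic textbook sketch driving an assumed first-order motor model, not the PLC implementation from the thesis; gains and plant dynamics are illustrative.

```python
class PIController:
    """Discrete PI controller: u = Kp * e + Ki * integral(e)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt   # accumulate error over time
        return self.kp * error + self.ki * self.integral

# Assumed first-order plant: d(speed)/dt = u - speed (illustrative only).
controller = PIController(kp=2.0, ki=1.0, dt=0.1)
speed = 0.0
for _ in range(500):
    u = controller.update(setpoint=100.0, measured=speed)
    speed += (u - speed) * 0.1
print(round(speed, 1))  # 100.0
```

The integral term is what drives the steady-state error to zero; the thesis's observation is that fixed gains like these cannot track parameter drift from motor temperature, which is where the neuro-fuzzy controller gains its advantage.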
|
240 |
Techniques in Secure Chaos Communication. Lau, Yuu Seng, lauje@rocketmail.com. January 2006 (has links)
In today's climate of increased criminal attacks on the privacy of personal or confidential data over digital communication systems, a more secure physical communication link is required. Chaotic signals, which have bifurcation behaviour (depending on some initial condition), can readily be exploited to enhance the security of communication systems. A chaotic generator produces disordered sequences that provide very good auto- and cross-correlation properties, similar to those of random white noise. This is an important feature in multiple access environments. These sequences are used to scramble data in spread spectrum systems as they produce low co-channel interference, hence improving system capacity and performance. The chaotic signal can be created from only a single mathematical relationship and is neither restricted in length nor repetitive/cyclic. On the other hand, with the progress in digital signal processing and digital hardware, there has been increased interest in using adaptive algorithms to improve the performance of digital systems. Adaptive algorithms provide a system with the ability to self-adjust its coefficients according to the signal condition, and can be used with linear or non-linear systems; hence, they might find application in chaos communication. A lot of literature has proposed the use of the LMS adaptive algorithm in the communication arena for a variety of applications, such as (but not limited to) channel estimation, channel equalization, demodulation, de-noising, and beamforming. In this thesis, we conducted a study on the application of chaos theory in communication systems as well as the application of adaptive algorithms in chaos communication. The First Part of the thesis tackled the application of chaos theory in communication. We examined different types of communication techniques utilizing chaos theory. 
In particular, we considered chaos shift keying (CSK) and a modified kind of logistic map. Then, we applied space-time processing and an eigen-beamforming technique to enhance the performance of chaos communication. Following on, we conducted a study on CSK and Chaos-CDMA in conjunction with multi-carrier modulation (MCM) techniques such as OFDM (FFT/IFFT) and wavelet-OFDM. In the Second Part of the thesis, we applied adaptivity to chaos communication. Initially, we presented a study of multi-user detection utilizing an adaptive algorithm in a chaotic CDMA multi-user environment, followed by a study of adaptive beamforming and modified weight-vector adaptive beamforming over CSK communication. Finally, a study of modified time-varying adaptive filtering is presented, and a conventional adaptive filtering technique is applied in a chaotic signal environment. Twelve papers were published during the PhD candidature, including two journal papers and ten refereed conference papers.
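The chaotic spreading idea behind CSK can be sketched with the logistic map and a coherent correlator. This is an illustrative toy, assuming a noise-free channel and a receiver that shares the transmitter's map, parameter and initial condition; real CSK receivers are considerably more involved.

```python
def logistic_map(x0, n, r=3.99):
    """Generate n samples of the chaotic logistic map x -> r*x*(1-x)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def csk_modulate(bits, chaos, spreading):
    """Antipodal CSK: send the chaotic chip for bit 1, its negative for bit 0."""
    signal = []
    for i, b in enumerate(bits):
        chip = chaos[i * spreading:(i + 1) * spreading]
        signal.extend(c if b else -c for c in chip)
    return signal

def csk_demodulate(signal, chaos, spreading):
    """Correlate each received chip with the shared reference sequence."""
    bits = []
    for i in range(len(signal) // spreading):
        seg = signal[i * spreading:(i + 1) * spreading]
        ref = chaos[i * spreading:(i + 1) * spreading]
        bits.append(1 if sum(s * c for s, c in zip(seg, ref)) > 0 else 0)
    return bits

bits = [1, 0, 1, 1, 0]
chaos = logistic_map(0.3, n=len(bits) * 50)
tx = csk_modulate(bits, chaos, spreading=50)
print(csk_demodulate(tx, chaos, spreading=50))  # [1, 0, 1, 1, 0]
```

The sensitivity to the initial condition x0 is what gives the scheme its security flavour: an eavesdropper who does not know x0 cannot regenerate the reference sequence needed for the correlation.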
|