681. Congestion Control for Adaptive Satellite Communication Systems with Intelligent Systems. Vallamsundar, Banupriya, January 2007.
With the advent of life-critical and real-time services such as remote operations over satellite and e-health, providing a guaranteed minimum level of service at every ground terminal of the satellite communication system has gained utmost priority. Ground terminals and the hub are not equipped with the intelligence required to predict and react to inclement and dynamic weather conditions on their own. The focus of this thesis is to develop intelligent algorithms that aid in adaptive management of the quality of service at the ground-terminal and gateway levels. This is done so that both the ground terminal and the gateway adapt to changing weather conditions and attempt to maintain a steady throughput level and the Quality of Service (QoS) requirements on queue delay, jitter, and probability of packet loss.
The existing satellite system employs the First-In-First-Out algorithm to control congestion in its network. This mechanism is not equipped to contend with changing link capacities, a common consequence of bad weather and faults, or to provide customers with different levels of prioritized service that satisfy QoS requirements. This research proposes to use the reported strength of fuzzy logic in controlling highly non-linear and complex systems such as the satellite communication network. The proposed fuzzy-based model, when integrated into the satellite gateway, provides the ground terminals with the robustness needed to cope with varying traffic levels and the dynamic impacts of weather.
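As a hedged illustration of the kind of fuzzy inference the abstract describes, the sketch below maps a rain-degraded link capacity to a traffic admission level. The membership functions, rule outputs, and thresholds are invented for illustration; they are not taken from the thesis.

```python
# Toy fuzzy controller: measured link capacity (as a fraction of the
# clear-sky capacity) -> traffic admission level. All shapes and rule
# weights below are hypothetical, not the thesis's design.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def admission_level(capacity_frac):
    """Fuzzy rules: low capacity -> throttle, high capacity -> admit fully."""
    low = tri(capacity_frac, -0.01, 0.0, 0.5)
    med = tri(capacity_frac, 0.2, 0.5, 0.8)
    high = tri(capacity_frac, 0.5, 1.0, 1.01)
    # Weighted-average (centroid-style) defuzzification over rule
    # consequents: throttle=0.2, moderate=0.6, full=1.0.
    num = low * 0.2 + med * 0.6 + high * 1.0
    den = low + med + high
    return num / den if den else 0.2

print(round(admission_level(0.9), 2))  # clear sky: 1.0 (full admission)
print(round(admission_level(0.3), 2))  # rain fade: 0.38 (throttled)
```

A real gateway controller would add inputs such as queue delay and measured loss, but the rule-evaluation and defuzzification pattern is the same.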
682. Error Detection in Number-Theoretic and Algebraic Algorithms. Vasiga, Troy Michael John, January 2008.
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms.
Consider an algorithm A that implements a function f, such that f has range O and algorithm A has range O', where O⊆O'. That is, the algorithm may produce results that are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories:
1. Correct and feasible -- the algorithm computes the correct result,
2. Incorrect and feasible -- the algorithm computes an incorrect result and this output is in O,
3. Incorrect and infeasible -- the algorithm computes an incorrect result and output is in O'\O.
Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order and quadratic residues.
Moreover, we show that there is typically an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
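One of the primality algorithms named above, the Lucas-Lehmer test for Mersenne numbers M_p = 2^p - 1, can be sketched in a few lines; this is the standard textbook form, not the thesis's instrumented version. A single flipped bit in the running residue s would propagate through every later squaring, which is what makes such long-running computations sensitive to hardware errors.

```python
# Standard Lucas-Lehmer test: M_p = 2**p - 1 is prime iff s_{p-2} == 0,
# where s_0 = 4 and s_{k+1} = s_k^2 - 2 (mod M_p).

def lucas_lehmer(p):
    """Return True iff M_p = 2**p - 1 is prime, for an odd prime p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # → [3, 5, 7, 13]
```

M_11 = 2047 = 23 × 89 is correctly rejected; the output in O'\O that the classification above worries about would be, e.g., reporting a composite M_p as prime after a corrupted squaring.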
683. Laser-initiated Coulomb explosion imaging of small molecules. Brichta, Jean-Paul Otto, January 2008.
Momentum vectors of fragment ions produced by the Coulomb explosion of CO2^z+ (z = 3-6) and CS2^z+ (z = 3-13) in an intense laser field (~50 fs, 1 × 10^15 W/cm^2) are determined by the triple-coincidence imaging technique. The molecular structure from symmetric and asymmetric explosion channels is reconstructed from the measured momentum vectors using a novel simplex algorithm that can be extended to study larger molecules. Physical parameters such as bend angle and bond lengths are extracted from the data and are qualitatively described using an enhanced-ionization model that predicts the laser intensity required for ionization as a function of bond length using classical, over-the-barrier arguments.
As a way of going beyond the classical model, molecular ionization is examined using a quantum-mechanical, wave-function-modified ADK method. The ADK model is used to calculate the ionization rates of H2, N2, and CO2 as a function of the initial vibrational level of the molecules. A strong increase in the ionization rate with vibrational level is found for H2, while N2 and CO2 show a lesser increase. The prospects for using ionization rates as a diagnostic for vibrational level population are assessed.
684. Intelligent Scheduling of Medical Procedures. Sui, Yang, January 2009.
In the Canadian universal healthcare system, public access to care is not limited by monetary or socioeconomic factors. Rather, waiting time is the dominant factor limiting public access to healthcare. Excessive waiting lowers quality of life during the wait and allows a patient's condition to worsen, which can reduce the effectiveness of the planned operation. Excessive waiting has also been shown to carry an economic cost.
At the core of the wait-time problem is a resource scheduling and management issue. The scheduling of medical procedures is a complex and difficult task. The goal of the research in this thesis is to develop the foundation models and algorithms for a resource optimization system. Such a system will help healthcare administrators intelligently schedule procedures to optimize resource utilization, identify bottlenecks, and reduce patient wait times.
This thesis develops a novel framework, the MPSP model, to model medical procedures. The MPSP model is designed to be general and versatile enough to model a variety of different procedures. The specific procedure modeled in detail in this thesis is the haemodialysis procedure. Solving the MPSP model exactly to obtain guaranteed optimal solutions is computationally expensive and not practical for real-time scheduling. A fast, high-quality evolutionary heuristic, gMASH, is developed to quickly solve large problems. The MPSP model and the gMASH heuristic form a foundation for an intelligent medical-procedure scheduling and optimization system.
685. Implementation of the Apriori algorithm for effective item set mining in VigiBase™: Project report in Teknisk Fysik 15 hp. Olofsson, Niklas, January 2010.
No description available.
686. On Generating Complex Numbers for FFT and NCO Using the CORDIC Algorithm / Att generera komplexa tal för FFT och NCO med CORDIC-algoritmen. Andersson, Anton, January 2008.
This report documents the thesis work carried out by Anton Andersson for Coresonic AB. The task was to develop an accelerator that could generate complex numbers suitable for fast Fourier transforms (FFT) and for tuning the phase of complex signals (NCO). Of the many ways to achieve this, the CORDIC algorithm was chosen. It is very well suited, since the basic implementation allows rotation of 2D vectors using only shift and add operations. Error bounds and a proof of convergence are derived carefully.
The accelerator was implemented in VHDL in such a way that all critical parameters were easy to change. Performance measures were extracted by simulating realistic test cases and comparing the output with reference data precomputed at high precision. Hardware costs were estimated by synthesizing a set of different configurations; plotting performance against cost makes it possible to choose an optimal configuration. Maximum errors extracted from the simulations seemed rather large for some configurations, so the error distributions were plotted in histograms, revealing that the typical error is often much smaller than the largest one. Even after troubleshooting, the errors still seem somewhat larger than what other implementations of CORDIC achieve. However, the precision was concluded to be sufficient for the targeted applications.
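The rotation-mode CORDIC iteration the abstract refers to can be sketched as follows. For clarity this uses floating point with explicit powers of two; a hardware version like the one in the thesis would use fixed-point shifts and adds, and the iteration count here (24) is an illustrative choice, not the thesis's configuration.

```python
# Rotation-mode CORDIC: rotate (1, 0) by a target angle using only
# "shift" (multiply by 2^-i) and add operations, then divide once by the
# constant gain K = prod sqrt(1 + 2^-2i) to normalize.
import math

ITER = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]
GAIN = 1.0
for i in range(ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_rotate(angle):
    """Return (cos(angle), sin(angle)) for |angle| <= pi/2."""
    x, y, z = 1.0, 0.0, angle
    for i in range(ITER):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x / GAIN, y / GAIN

c, s = cordic_rotate(math.pi / 6)
print(round(c, 6), round(s, 6))  # ≈ cos(30°), sin(30°)
```

Each iteration cuts the residual angle by roughly a factor of two, so n iterations give about n bits of angular precision, which is the error-bound structure the report analyzes.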
687. Evolving Cuckoo Search: From single-objective to multi-objective. Lidberg, Simon, January 2011.
This thesis aims to produce a novel multi-objective algorithm based on Cuckoo Search by Dr. Xin-She Yang. Cuckoo Search is a promising nature-inspired meta-heuristic optimization algorithm which, at present, can only solve single-objective optimization problems. After an introduction, a number of theoretical points are presented as a basis for deciding which algorithms to hybridize Cuckoo Search with. These are then reviewed in detail and verified against current benchmark algorithms to evaluate their efficiency. To test the proposed algorithm in a new setting, a real-world combinatorial problem is used. The proposed algorithm is then used as the optimization engine for a simulation-based system and compared against a current implementation.
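For orientation, a minimal single-objective Cuckoo Search in the spirit of Yang's algorithm is sketched below: Lévy-flight steps generate candidate nests and a fraction pa of the worst nests is abandoned each generation. The parameter values (step scaling 0.1, pa = 0.25, population 15) are common illustrative defaults, not the settings used in the thesis.

```python
# Toy single-objective Cuckoo Search with Levy flights and abandonment.
import math, random

def levy_step(beta=1.5):
    """Mantegna's method for a Levy-stable step length (can be negative)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=300):
    lo, hi = bounds
    clip = lambda v: [min(hi, max(lo, vi)) for vi in v]
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    for _ in range(iters):
        best = min(nests, key=f)
        for i, nest in enumerate(nests):
            # Levy flight with step size proportional to distance from best.
            cand = clip([xi + 0.1 * levy_step() * (xi - bi)
                         for xi, bi in zip(nest, best)])
            if f(cand) < f(nest):          # greedy replacement
                nests[i] = cand
        nests.sort(key=f)                  # abandon the worst fraction pa
        k = max(1, int(pa * n_nests))
        nests[-k:] = [[random.uniform(lo, hi) for _ in range(dim)]
                      for _ in range(k)]
    return min(nests, key=f)

sphere = lambda x: sum(xi * xi for xi in x)
random.seed(11)
best = cuckoo_search(sphere, dim=3, bounds=(-5.0, 5.0))
print(f"best objective: {sphere(best):.4f}")
```

A multi-objective extension of the kind the thesis develops would replace the scalar comparison `f(cand) < f(nest)` with Pareto dominance and an archive of non-dominated nests.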
688. Classification of busses and lorries in an automatic road toll system / Klassificering av bussar och lastbilar i ett automatiskt vägtullsystem. Jarl, Adam, January 2003.
An automatic road toll system enables passing vehicles to change lanes, and no stop is needed for payment. Because personal cars, busses, lorries (trucks) and other vehicles have different weights, they affect the road in different ways. It is of interest to categorize vehicles into classes depending on their weight so that the right fee can be charged. An automatic road toll system developed by Combitech Traffic Systems AB (now Kapsch TrafficCom AB), Jönköping, Sweden, classifies vehicles with the help of a so-called height image. This is a three-dimensional image produced from two photographs of a vehicle. The photographs display the same view but are taken by cameras mounted a small distance apart; this spacing makes it possible to create a height image. The existing classification uses only length, width and height to divide vehicles into classes, so vehicles of the same dimensions belong to the same class regardless of their weight. An important example is busses and lorries, which often have the same dimensions, but lorries often have greater weight and should therefore be charged a larger fee. This work describes methods for separating busses from lorries with the help of height images. The methods search for variations in width and height, and for other features specific to busses and lorries respectively.
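The height image described above rests on standard stereo triangulation: a point's disparity between the two offset photographs determines its distance from the cameras. The abstract does not give the actual processing pipeline, so the sketch below is a generic illustration with invented focal length, baseline, and mounting height, assuming downward-looking cameras on a gantry above the road.

```python
# Hypothetical height-from-disparity computation for one pixel.
# Stereo depth: Z = f * B / d  (f: focal length in pixels, B: baseline
# in metres, d: disparity in pixels). For cameras looking down from a
# gantry, point height above the road = camera height - Z.

def height_from_disparity(focal_px, baseline_m, cam_height_m, disparity_px):
    """Return estimated height above the road for one matched pixel pair."""
    if disparity_px <= 0:
        return 0.0            # no parallax: point at road level (or a mismatch)
    depth = focal_px * baseline_m / disparity_px
    return max(0.0, cam_height_m - depth)

# Example: cameras 6 m above the road, 0.5 m apart, f = 1000 px.
# A roof point with 125 px disparity lies 4 m below the cameras,
# i.e. about 2 m above the road.
print(height_from_disparity(1000, 0.5, 6.0, 125))  # → 2.0
```

Applying this per pixel yields the height image from which width and height profiles are then extracted.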
689. Investigation of an optimal utilization of Ultra-wide band measurements for position purposes. Siripi, Vishnu Vardhan, January 2006.
Ultra-wideband (UWB) communication systems are systems whose bandwidth is many times greater than that of "narrowband" systems (signals that occupy only a small portion of the radio spectrum). Because of its extremely large bandwidth, immunity to multipath fading, and penetration through concrete blocks and other obstacles, UWB can be used indoors for high-data-rate communications, or at very low data rates over substantial link distances. UWB can also be used for short-distance ranging, with applications including asset location in a warehouse, position location for wireless sensor networks, and collision avoidance. In order to verify analytical and simulation results against real-world measurements, the need for experimental UWB systems arises. The Institute of Communications Engineering [IANT] has developed a low-cost experimental UWB positioning system to test UWB-based positioning concepts. The mobile devices use the avalanche effect of transistors for simple generation of bi-phase pulses and are TDMA multi-user capable. The receiver is implemented in software and employs coherent cross-correlation with peak detection to localize the mobile unit via Time-Difference-Of-Arrival (TDOA) algorithms. Since the power of a proposed UWB system's signal is spread over a very wide bandwidth, existing narrowband systems whose allocated frequencies fall within the UWB spectrum may cause interference. The goal of the filters discussed in this project is to cancel or suppress the interference while not distorting the desired signal. To investigate the interference, we develop an algorithm to estimate the interference tones. In this thesis, the interference is assumed to be narrowband interference (NBI) modeled as sinusoidal tones with unknown amplitude, frequency and phase. If the interference tones were known, they could be removed with a simple notch filter; here, an adaptive filter is chosen so that it can track the interference tones automatically and cancel them.
In this thesis I tested two adaptive filtering techniques for interference cancellation, namely the LMS algorithm and the Adaptive Noise Cancellation (ANC) technique, and compared the performance of the two filters.
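The LMS approach can be illustrated with a self-tuning line enhancer: a wideband signal (here white noise) corrupted by a sinusoidal tone, and an adaptive FIR predictor that learns the tone from delayed past samples and subtracts it. The filter length, step size, delay, and tone frequency below are illustrative choices, not the thesis's parameters.

```python
# LMS narrowband-interference canceller (adaptive line enhancer sketch).
# Only the sinusoid is predictable from samples `delay` steps back, so
# the predictor converges to the tone and the residual to the signal.
import math, random

random.seed(0)
N, L, mu, delay = 4000, 16, 0.01, 8
signal = [random.gauss(0, 0.1) for _ in range(N)]            # desired wideband part
tone = [math.sin(2 * math.pi * 0.12 * n) for n in range(N)]  # NBI tone, power ~0.5
x = [s + t for s, t in zip(signal, tone)]

w = [0.0] * L
out = []
for n in range(N):
    past = [x[n - delay - k] if n - delay - k >= 0 else 0.0 for k in range(L)]
    y = sum(wk * pk for wk, pk in zip(w, past))   # predicted interference
    e = x[n] - y                                  # residual = cleaned sample
    w = [wk + mu * e * pk for wk, pk in zip(w, past)]  # LMS weight update
    out.append(e)

# Residual power over the last quarter, after the filter has adapted.
tail = slice(3 * N // 4, N)
res = sum(e * e for e in out[tail]) / (N // 4)
print(res)  # small: far below the ~0.5 tone power
```

An ANC variant of the kind also tested in the thesis would instead use a separate reference input correlated with the interference; the weight update is the same.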
690. Extended Information Matrices for Optimal Designs when the Observations are Correlated. Pazman, Andrej; Müller, Werner, January 1996.
Regression models with correlated errors lead to nonadditivity of the information matrix. This makes the usual approach to design optimization (approximation by a continuous design, application of an equivalence theorem, numerical calculation by a gradient algorithm) impossible. A method is presented that allows the construction of a gradient algorithm by altering the information matrices through the addition of supplementary noise. A heuristic is formulated to circumvent the nonconvexity problem, and the method is applied to typical examples from the literature. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
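The nonadditivity referred to above can be stated concretely (in standard optimal-design notation, not necessarily the paper's own). For a linear model with uncorrelated errors the information matrix is a sum of one-point contributions, which is what the continuous-design approximation and equivalence theorems exploit; with a non-diagonal error covariance matrix C, each design point's contribution depends on all the others:

```latex
\underbrace{M(\xi) \;=\; \sum_{i=1}^{n} f(x_i)\, f(x_i)^{\top}}_{\text{uncorrelated errors: additive}}
\qquad\text{vs.}\qquad
\underbrace{M \;=\; X^{\top} C^{-1} X}_{\text{correlated errors: nonadditive}},
\qquad X = \bigl(f(x_1), \dots, f(x_n)\bigr)^{\top}.
```

Because the right-hand form cannot be decomposed point by point, replacing a design point changes the whole matrix, which is why a direct gradient algorithm over continuous designs breaks down.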