11

Comparative Analysis of Tag Estimation Algorithms on RFID EPC Gen-2 Performance

Ferdous, Arundhoti 28 June 2017 (has links)
In a passive radio-frequency identification (RFID) system, the reader communicates with the tags using the EPC Global UHF Class 1 Generation 2 (EPC Gen-2) protocol with dynamic framed slotted ALOHA. Due to the unique challenges presented by a low-power, random link, the channel efficiency of even the most modern passive RFID system is less than 40%. Hence, a variety of methods have been proposed to estimate the number of tags in the environment and set the optimal frame size. Some of the algorithms in the literature even claim system efficiency beyond 90%. However, these algorithms require fundamental changes to the underlying protocol framework, which makes them ineligible for use with current hardware running on the EPC Gen-2 platform, and such an infrastructure change would cost the industry billions of dollars. Though numerous tag estimation algorithms have been proposed in the literature, none has had its performance analyzed thoroughly when incorporated with the industry-standard EPC Gen-2. In this study, we focus on algorithms that can be utilized on today's hardware with minimal modifications. EPC Gen-2 already provides a dynamic platform for adjusting frame sizes based on knowledge of collision slots in a given frame. We choose popular probabilistic tag estimation algorithms from the literature, such as Dynamic Frame Slotted ALOHA (DFSA)-I and DFSA-II, and rule-based algorithms such as the two conditional tag estimation (2CTE) method, and incorporate them with EPC Gen-2 using different strategies to see whether they can significantly improve channel efficiency and adaptability. The results from each algorithm are evaluated and compared with the performance of pure EPC Gen-2. It is important to note that while these algorithms are integrated with EPC Gen-2 to modify the frame size, the protocol is not altered in any substantial way. We also keep the maximum system efficiency of any MAC-layer protocol using DFSA as the upper bound, to allow an impartial comparison between the algorithms. Finally, we present a novel and comprehensive analysis of the probabilistic tag estimation algorithms (DFSA-I and DFSA-II) in terms of statistically significant correlations between channel efficiency, algorithm estimation accuracy and algorithm utilization rate, as the existing literature looks only at channel efficiency with no auxiliary analysis. We use a scalable and flexible simulation framework and have created a lightweight, verifiable Gen-2 simulation tool to measure these performance parameters, since it is very difficult, if not impossible, to calculate system performance analytically. This framework can easily be used to test and compare further algorithms in the literature against Gen-2 and other DFSA-based approaches.
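A rough sketch (not taken from the thesis) of the collision-driven frame-size adaptation that DFSA-style estimators build on: the factor 2.39 is Schoute's classical tags-per-collision estimate, and the simple framed-slotted-ALOHA loop below stands in for the Gen-2 Q-algorithm; the tag count and initial frame size are made-up values.

import numpy as np

rng = np.random.default_rng(0)

def run_frame(n_tags, frame_size, rng):
    """Simulate one framed-slotted-ALOHA frame; return (singleton, collision, empty) slot counts."""
    slots = rng.integers(0, frame_size, size=n_tags)
    counts = np.bincount(slots, minlength=frame_size)
    return int(np.sum(counts == 1)), int(np.sum(counts >= 2)), int(np.sum(counts == 0))

def schoute_estimate(collisions):
    # Schoute's classical estimate: each collision slot holds about 2.39 tags on average.
    return 2.39 * collisions

n_unread = 200        # tags still to be identified (hypothetical scenario)
frame_size = 64       # initial frame size (2**Q with Q = 6)
total_slots = 0
total_reads = 0

while n_unread > 0:
    singles, collisions, _ = run_frame(n_unread, frame_size, rng)
    total_slots += frame_size
    total_reads += singles
    n_unread -= singles                      # singleton slots are read successfully
    est_backlog = schoute_estimate(collisions)
    # The DFSA optimum sets the next frame size near the estimated backlog, rounded to a
    # power of two to stay compatible with the Gen-2 Q parameter.
    frame_size = max(4, 2 ** int(round(np.log2(max(est_backlog, 1)))))

print(f"identified {total_reads} tags in {total_slots} slots "
      f"(efficiency ~ {total_reads / total_slots:.2f})")

Under the usual assumptions, a loop of this kind settles near the well-known efficiency ceiling of framed slotted ALOHA (roughly 36%), which is why the abstract treats the DFSA optimum as the upper bound for any Gen-2-compatible scheme.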
12

Enhancement of precise underwater object localization

Kaveripakum, S., Chinthaginjala, R., Anbazhagan, R., Alibakhshikenari, M., Virdee, B., Khan, S., Pau, G., See, C.H., Dayoub, I., Livreri, P., Abd-Alhameed, Raed 24 July 2023 (has links)
Underwater communication applications extensively use localization services for object identification. Because of their significant impact on ocean exploration and monitoring, underwater wireless sensor networks (UWSN) are becoming increasingly popular, and acoustic communications have largely overtaken radio frequency (RF) broadcasts as the dominant means of communication. The two most frequently employed localization methods are those that estimate the angle of arrival (AOA) and the time difference of arrival (TDoA). The military and civilian sectors rely heavily on UWSN for object identification in the underwater environment. As a result, there is a need in UWSN for an accurate localization technique that accounts for the dynamic nature of the underwater environment. Time and position data are the two key parameters for accurately defining the position of an object. Moreover, due to climate change there is now a need to constrain energy consumption by UWSN to limit carbon emissions and meet the net-zero target by 2050. To meet these challenges, we have developed an efficient localization algorithm that determines an object's position from the angle and distance of arrival of beacon signals. We have considered factors such as sensor nodes not being time-synchronized with each other and the fact that the speed of sound varies in water. Our simulation results show that the proposed approach can achieve high localization accuracy while accounting for temporal synchronization inaccuracies. When compared with existing localization approaches in terms of mean estimation error (MEE) and energy consumption, the proposed approach outperforms them. The MEE is shown to vary between 84.2154 m and 93.8275 m for four trials, 61.2256 m and 92.7956 m for eight trials, and 42.6584 m and 119.5228 m for twelve trials. Comparatively, the distance-based measurements show higher accuracy than the angle-based measurements.
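As a hedged illustration of localization from the angle and distance of arrival of beacon signals, the sketch below (not the authors' algorithm) forms one position fix per beacon from a noisy azimuth/elevation pair and a time of flight, then averages the fixes; the beacon layout, noise levels and the mismatch between the assumed and true sound speed are invented for the example.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical beacon (anchor) positions in metres; depth is the negative z coordinate.
beacons = np.array([[0.0, 0.0, 0.0],
                    [400.0, 0.0, -15.0],
                    [0.0, 400.0, -30.0]])
target = np.array([180.0, 250.0, -80.0])

TRUE_SOUND_SPEED = 1480.0     # actual local speed of sound (m/s)
ASSUMED_SOUND_SPEED = 1500.0  # nominal value used by the receiver

def angles_and_tof(beacon, target, rng):
    """Noisy azimuth/elevation of arrival and time of flight from one beacon."""
    v = target - beacon
    dist = np.linalg.norm(v)
    az = np.arctan2(v[1], v[0]) + rng.normal(0, np.deg2rad(1.0))
    el = np.arcsin(v[2] / dist) + rng.normal(0, np.deg2rad(1.0))
    tof = dist / TRUE_SOUND_SPEED + rng.normal(0, 1e-4)
    return az, el, tof

estimates = []
for b in beacons:
    az, el, tof = angles_and_tof(b, target, rng)
    d = tof * ASSUMED_SOUND_SPEED             # range from time of flight (biased by the speed mismatch)
    direction = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
    estimates.append(b + d * direction)       # each beacon gives one position fix

est = np.mean(estimates, axis=0)              # fuse the per-beacon fixes
print("estimated position:", np.round(est, 1))
print("mean estimation error (m):", round(float(np.linalg.norm(est - target)), 2))

The deliberate 20 m/s mismatch between the assumed and actual sound speed appears directly as a range bias, which is one of the error sources the paper states its algorithm accounts for.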
13

Portfolio management using computational intelligence approaches: forecasting and optimising the stock returns and stock volatilities with fuzzy logic, neural network and evolutionary algorithms

Skolpadungket, Prisadarng January 2013 (has links)
Portfolio optimisation has a number of constraints resulting from practical matters and regulations. The closed-form mathematical solution of portfolio optimisation problems usually cannot include these constraints, and exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by estimation error caused by the inability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the problem with two objectives subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolios that are evenly distributed along the efficient front, while MOGA is more time-efficient. An Evolutionary Artificial Neural Network (EANN) is proposed; it automatically evolves the ANN's initial values and structure (hidden nodes and layers). The EANN gives better performance in stock return forecasts than Ordinary Least Squares estimation and Back-Propagation and Elman recurrent ANNs. Adaptation algorithms for selecting a pair of forecasting models, based on fuzzy-logic-like rules, are proposed to select the best models for a given economic scenario. Their predictive performance is better than that of the compared forecasting models. MOGA and SPEA2 are modified to include a third objective to handle model risk, and their performance is evaluated and tested. The results show that they perform better than the versions without the third objective.
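To make the two-objective setting concrete, here is a minimal sketch (not the thesis's MOGA, VEGA or SPEA2 implementations) that evaluates return and variance for randomly generated portfolios satisfying a cardinality and floor constraint and keeps the non-dominated ones; the asset statistics, the 5% floor and the five-asset cardinality limit are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(2)

n_assets, k_max = 20, 5                      # cardinality constraint: exactly 5 assets held
mu = rng.normal(0.08, 0.04, n_assets)        # synthetic expected annual returns
A = rng.normal(size=(n_assets, n_assets))
cov = (A @ A.T) / n_assets * 0.02            # synthetic positive-definite covariance matrix

def random_portfolio():
    # Feasible portfolio: k_max assets, a 5% floor per selected asset, weights summing to 1.
    idx = rng.choice(n_assets, size=k_max, replace=False)
    raw = rng.random(k_max)
    w = np.zeros(n_assets)
    w[idx] = 0.05 + (1.0 - 0.05 * k_max) * raw / raw.sum()
    return w

def objectives(w):
    # Two objectives: expected return (to maximise) and portfolio variance (to minimise).
    return float(mu @ w), float(w @ cov @ w)

# Naive stand-in for a multi-objective GA: sample feasible portfolios and keep the
# non-dominated ones, giving a rough approximation of the efficient front.
scores = [objectives(random_portfolio()) for _ in range(1000)]
front = [s for s in scores
         if not any(o[0] >= s[0] and o[1] <= s[1] and o != s for o in scores)]
print(f"{len(front)} non-dominated portfolios out of {len(scores)} sampled")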
14

Mean-Variance Portfolio Optimization: Challenging the role of traditional covariance estimation / Effektiv portföljförvaltning: en utvärdering av metoder for kovariansskattning

MARAKBI, ZAKARIA January 2016 (has links)
Ever since its introduction in 1952, the Mean-Variance (MV) portfolio selection theory has remained a centerpiece within the realm of efficient asset allocation. However, in scientific circles, the theory has stirred controversy. A strand of criticism has emerged that points to the phenomenon that Mean-Variance Optimization suffers from the severe drawback of estimation errors contained in the expected return vector and the covariance matrix, resulting in portfolios that may significantly deviate from the true optimal portfolio. While a substantial amount of effort has been devoted to estimating the expected return vector in this context, much less is written about the covariance matrix input. In recent times, however, research that points to the importance of the covariance matrix in MV optimization has emerged. As a result, there has been growing interest in whether MV optimization can be enhanced by improving the estimate of the covariance matrix. Hence, this thesis was set forth with the purpose of investigating whether financial practitioners and institutions can allocate portfolios of assets more efficiently by changing the covariance matrix input in mean-variance optimization. In pursuit of this purpose, an out-of-sample analysis of MV-optimized portfolios was performed, in which the performance of five prominent covariance matrix estimators was compared, holding all else equal in the MV optimization. The optimization was performed under realistic investment constraints, taking incurred transaction costs into account, for an investment universe ranging from equities to bonds. The empirical findings in this study suggest one dominant estimator: the covariance matrix estimator implied by the Gerber Statistic (GS). Specifically, by using this covariance matrix estimator in lieu of the traditional sample covariance matrix, the MV optimization rendered more efficient portfolios in terms of higher Sharpe ratios, higher risk-adjusted returns and lower maximum drawdowns. The outperformance was most pronounced during recessionary times. This suggests that an investor who employs traditional MVO in quantitative asset allocation can improve their asset-picking ability by switching to the, in theory, more robust GS covariance matrix estimator in times of volatile financial markets.
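The following sketch only illustrates the mechanics of swapping the covariance input in a minimum-variance allocation; it uses a simple shrinkage-toward-diagonal estimator as the alternative rather than the Gerber Statistic, and the return data, shrinkage intensity and asset universe are synthetic.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily returns for 8 assets (a stand-in for the equity/bond universe in the thesis).
n_obs, n_assets = 120, 8
true_cov = 0.0001 * (np.full((n_assets, n_assets), 0.3) + 0.7 * np.eye(n_assets))
returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)

def sample_cov(r):
    return np.cov(r, rowvar=False)

def shrunk_cov(r, delta=0.3):
    """Simple shrinkage of the sample covariance toward its diagonal (not the Gerber Statistic)."""
    s = np.cov(r, rowvar=False)
    return (1 - delta) * s + delta * np.diag(np.diag(s))

def min_variance_weights(cov):
    # Fully invested minimum-variance portfolio: w ∝ cov^{-1} 1, rescaled to sum to one.
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

for name, est in [("sample covariance", sample_cov(returns)),
                  ("shrunk covariance", shrunk_cov(returns))]:
    w = min_variance_weights(est)
    realised_vol = np.sqrt(w @ true_cov @ w * 252)
    print(f"{name:18s}: annualised volatility of the MV portfolio = {realised_vol:.4f}")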
15

Aspects of Electrical Bioimpedance Spectrum Estimation

Abtahi, Farhad January 2014 (has links)
Electrical bioimpedance spectroscopy (EBIS) has been used to assess the status or composition of various types of tissue, and examples of EBIS include body composition analysis (BCA) and tissue characterisation for skin cancer detection. EBIS is a non-invasive method that has the potential to provide a large amount of information for diagnosis or monitoring purposes, such as the monitoring of pulmonary oedema, i.e., fluid accumulation in the lungs. However, in many cases, systems based on EBIS have not become generally accepted in clinical practice. Possible reasons behind the low acceptance of EBIS include inaccurate models; artefacts, such as those from movements; measurement errors; and estimation errors. Previous thoracic EBIS measurements aimed at pulmonary oedema have shown some uncertainties in their results, making it difficult to produce trustworthy monitoring methods. The current research hypothesis is that these uncertainties mostly originate from estimation errors. In particular, time-varying behaviours of the thorax, e.g., respiratory and cardiac activity, can cause estimation errors, which make it difficult to detect the slowly varying behaviour of this system, i.e., pulmonary oedema. The aim of this thesis is to investigate potential sources of estimation error in transthoracic impedance spectroscopy (TIS) for pulmonary oedema detection and to propose methods to prevent or compensate for these errors. This work is mainly focused on two aspects of impedance spectrum estimation: first, the problems associated with the delay between estimations of spectrum samples in the frequency-sweep technique, and second, the influence of undersampling (a result of impedance estimation times) when estimating an EBIS spectrum. The delay between frequency sweeps can produce large errors when analysing EBIS spectra, but its effect decreases with averaging or low-pass filtering, which is a common and simple method for monitoring the time-invariant behaviour of a system. The results show the importance of the undersampling effect as the main estimation error that can cause uncertainty in TIS measurements. The best time to deal with this error is during the design process, when the system can be designed either to avoid the error or to allow compensation for it during analysis. A case study of monitoring pulmonary oedema is used to assess the effect of these two estimation errors. However, the results can be generalised to any case of identifying the slowly varying behaviour of physiological systems that also display higher-frequency variations. Finally, some suggestions for designing an EBIS measurement system and analysis methods to avoid or compensate for these estimation errors are discussed.
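A toy simulation of the frequency-sweep delay problem discussed above: a single-dispersion Cole impedance whose low-frequency resistance is modulated by respiration is sampled one frequency at a time, and the resulting spectrum is compared with an instantaneous one; the Cole parameters, dwell time and 5% respiratory modulation are illustrative assumptions, not measured values.

import numpy as np

def cole(freq_hz, r_inf=40.0, r0=75.0, tau=1e-6, alpha=0.8):
    """Single-dispersion Cole impedance model (parameters are illustrative only)."""
    w = 2 * np.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

freqs = np.logspace(3, 6, 50)          # 1 kHz .. 1 MHz sweep
resp_rate_hz = 0.25                    # about 15 breaths per minute
dwell_s = 0.05                         # time spent estimating each frequency point

def respiration_modulated_r0(t):
    # Respiration is assumed to modulate the low-frequency resistance by a few percent.
    return 75.0 * (1 + 0.05 * np.sin(2 * np.pi * resp_rate_hz * t))

# "Instantaneous" spectrum: every frequency measured at the same moment (t = 0).
z_instant = cole(freqs, r0=respiration_modulated_r0(0.0))

# Frequency-sweep spectrum: each point is measured dwell_s later than the previous one,
# so the underlying impedance drifts between points.
times = np.arange(len(freqs)) * dwell_s
z_sweep = np.array([cole(f, r0=respiration_modulated_r0(t)) for f, t in zip(freqs, times)])

err = np.abs(z_sweep - z_instant) / np.abs(z_instant)
print(f"sweep duration: {times[-1]:.2f} s, max relative spectrum error: {100 * err.max():.1f} %")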
16

Portfolio management using computational intelligence approaches. Forecasting and Optimising the Stock Returns and Stock Volatilities with Fuzzy Logic, Neural Network and Evolutionary Algorithms.

Skolpadungket, Prisadarng January 2013 (has links)
Portfolio optimisation has a number of constraints resulting from practical matters and regulations. The closed-form mathematical solution of portfolio optimisation problems usually cannot include these constraints, and exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by estimation error caused by the inability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the problem with two objectives subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolios that are evenly distributed along the efficient front, while MOGA is more time-efficient. An Evolutionary Artificial Neural Network (EANN) is proposed; it automatically evolves the ANN's initial values and structure (hidden nodes and layers). The EANN gives better performance in stock return forecasts than Ordinary Least Squares estimation and Back-Propagation and Elman recurrent ANNs. Adaptation algorithms for selecting a pair of forecasting models, based on fuzzy-logic-like rules, are proposed to select the best models for a given economic scenario. Their predictive performance is better than that of the compared forecasting models. MOGA and SPEA2 are modified to include a third objective to handle model risk, and their performance is evaluated and tested. The results show that they perform better than the versions without the third objective.
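Complementing the portfolio sketch given for the earlier record of this thesis, the toy example below illustrates the evolutionary idea behind the EANN: candidate networks are encoded by a hidden-layer size plus initial weights, scored by one-step-ahead forecast error on a synthetic return series, and the best candidate is mutated each generation; the data, network sizes and mutation scheme are invented and much simpler than the thesis's method.

import numpy as np

rng = np.random.default_rng(4)

# Synthetic "stock return" series with mild autocorrelation (a stand-in for real data).
n = 300
r = np.zeros(n)
for t in range(1, n):
    r[t] = 0.3 * r[t - 1] + rng.normal(0, 0.01)
x, y = r[:-1].reshape(-1, 1), r[1:]          # predict the next return from the previous one

def forecast_mse(hidden, w_in, b_in, w_out, b_out):
    h = np.tanh(x @ w_in + b_in)             # one hidden layer, tanh activation
    pred = h @ w_out + b_out
    return float(np.mean((pred - y) ** 2))

def random_net(hidden):
    return (hidden, rng.normal(0, 0.5, (1, hidden)), rng.normal(0, 0.5, hidden),
            rng.normal(0, 0.5, hidden), 0.0)

# Evolve both structure (hidden size) and initial weights by mutating the best candidate.
population = [random_net(rng.integers(2, 9)) for _ in range(20)]
for gen in range(30):
    population.sort(key=lambda net: forecast_mse(*net))
    best = population[0]
    children = []
    for _ in range(len(population) - 1):
        hidden = max(2, best[0] + rng.integers(-1, 2))       # mutate the structure
        if hidden == best[0]:
            child = (hidden, best[1] + rng.normal(0, 0.05, best[1].shape),
                     best[2] + rng.normal(0, 0.05, best[2].shape),
                     best[3] + rng.normal(0, 0.05, best[3].shape), best[4])
        else:
            child = random_net(hidden)                        # new structure, fresh weights
        children.append(child)
    population = [best] + children

print(f"best hidden size: {population[0][0]}, forecast MSE: {forecast_mse(*population[0]):.6f}")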
17

Improving Survey Methodology Through Matrix Sampling Design, Integrating Statistical Review Into Data Collection, and Synthetic Estimation Evaluation

Seiss, Mark Thomas 13 May 2014 (has links)
The research presented in this dissertation touches on all aspects of survey methodology, from questionnaire design to final estimation. We first approach the questionnaire development stage by proposing a method of developing matrix sampling designs, in which a subset of questions is administered to a respondent in such a way that the administered questions are predictive of the omitted questions. The proposed methodology compares favorably to previous methods when applied to data collected from a household survey conducted in the Nampula province of Mozambique. We approach the data collection stage by proposing a structured procedure for implementing small-scale surveys in such a way that non-sampling error attributable to data collection is minimized. This procedure requires the inclusion of the statistician in the data editing process during data collection. We implemented the structured procedure during the collection of household survey data in Maputo, the capital of Mozambique, and found indications that the resulting data are of higher quality than data collected with no editing. Finally, we approach the estimation phase of sample surveys by proposing a model-based approach to estimating the mean squared error associated with synthetic (indirect) estimates. Previous methodology aggregates estimates for stability, while our proposed methodology allows area-specific estimates. We applied the proposed mean squared error estimation methodology, along with methods found during the literature review, to simulated data and to estimates from the 2010 Census Coverage Measurement (CCM). We found that our proposed methodology compares favorably to previous methods while allowing for area-specific estimates. / Ph. D.
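A toy contrast between direct and synthetic (indirect) small-area estimates, the setting of the dissertation's final contribution; the coverage rates, area sizes and the simple national-rate synthetic estimator are illustrative, and the dissertation's model-based MSE estimator is not reproduced here.

import numpy as np

rng = np.random.default_rng(5)

# Hypothetical small areas: true coverage rates, population sizes, and small samples per area.
n_areas = 12
true_rate = rng.uniform(0.85, 0.99, n_areas)      # e.g., census coverage rate per area
sample_n = rng.integers(30, 80, n_areas)          # small samples make direct estimates noisy

hits = rng.binomial(sample_n, true_rate)
direct = hits / sample_n                          # direct (area-specific) estimate

# Synthetic estimator: borrow strength by applying the overall rate to every area.
overall_rate = hits.sum() / sample_n.sum()
synthetic = np.full(n_areas, overall_rate)

for name, est in [("direct", direct), ("synthetic", synthetic)]:
    mse = np.mean((est - true_rate) ** 2)
    print(f"{name:9s} estimator: empirical MSE against the true rates = {mse:.6f}")

The direct estimator is unbiased but variable, while the synthetic estimator is stable but biased toward the overall rate; quantifying that trade-off area by area is exactly what an area-specific MSE estimate is for.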
18

Performance evaluation and enhancement for AF two-way relaying in the presence of channel estimation error

Wang, Chenyuan 30 April 2012 (has links)
Cooperative relaying is a promising diversity-achieving technique that provides reliable transmission, high throughput and extensive coverage for wireless networks in a variety of applications. Two-way relaying is a spectrally efficient protocol, providing one solution to overcome the half-duplex loss in one-way relay channels. Moreover, incorporating multiple-input multiple-output (MIMO) technology can further improve the spectral efficiency and diversity gain. A great deal of related work has been performed on the two-way relay network (TWRN), but most of it assumes perfect channel state information (CSI). In a realistic scenario, however, the channel must be estimated and estimation error exists. In this thesis, we explicitly take the CSI error into account and investigate its impact on the performance of amplify-and-forward (AF) TWRNs in which either multiple distributed single-antenna relays or a single multiple-antenna relay station is exploited. For the distributed relay network, we consider imperfect self-interference cancellation at both sources, which exchange information with the help of multiple relays, and maximal ratio combining (MRC) is then applied to improve the decision statistics under imperfect signal detection. The degradation of system performance in terms of outage probability and average bit-error rate (BER) is analyzed, along with its asymptotic trend. To further improve the spectral efficiency while maintaining spatial diversity, we utilize maximum-minimum (Max-Min) relay selection (RS) and examine the impact of imperfect CSI on this single-RS scheme. To mitigate the negative effect of imperfect CSI, we resort to adaptive power allocation (PA) by minimizing either the outage probability or the average BER, which can be cast as a geometric programming (GP) problem. Numerical results verify the correctness of our analysis and show that the adaptive PA scheme outperforms the equal PA scheme under the aggregated effect of imperfect CSI. When a single MIMO relay is employed, the problem of robust MIMO relay design is addressed by considering the fact that only imperfect CSI is available. We design the MIMO relay based upon the CSI estimates, incorporating the estimation errors to attain a robust design under the worst-case philosophy. The optimization problem corresponding to the robust MIMO relay design is shown to be nonconvex. This motivates the pursuit of semidefinite relaxation (SDR) coupled with a randomization technique to obtain computationally efficient, high-quality approximate solutions. Numerical simulations compare the proposed MIMO relay with an existing nonrobust method and thereby validate its robustness against channel uncertainty. / Graduate
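A hedged Monte-Carlo sketch of the self-interference-cancellation issue described above: BPSK two-way AF relaying in Rayleigh fading where source A cancels its own signal using a noisy channel estimate, so residual interference grows with the estimation-error variance; the signal model, AF gain and the assumption that the composite channel is known for detection are simplifications, not the thesis's exact formulation.

import numpy as np

rng = np.random.default_rng(6)

def simulate_ber(snr_db, est_err_var, n_trials=200_000):
    """BPSK BER at source A after two-way AF relaying with imperfect self-interference cancellation."""
    snr = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1 / snr)
    # Reciprocal Rayleigh fading channels: source A <-> relay (h1) and source B <-> relay (h2).
    h1 = (rng.normal(size=n_trials) + 1j * rng.normal(size=n_trials)) / np.sqrt(2)
    h2 = (rng.normal(size=n_trials) + 1j * rng.normal(size=n_trials)) / np.sqrt(2)
    e = np.sqrt(est_err_var / 2) * (rng.normal(size=n_trials) + 1j * rng.normal(size=n_trials))
    h1_hat = h1 + e                                  # imperfect CSI at source A

    xa = 2 * rng.integers(0, 2, n_trials) - 1        # BPSK symbols from A and B
    xb = 2 * rng.integers(0, 2, n_trials) - 1

    # Phase 1: both sources transmit simultaneously; the relay receives the superposition.
    nr = noise_std / np.sqrt(2) * (rng.normal(size=n_trials) + 1j * rng.normal(size=n_trials))
    yr = h1 * xa + h2 * xb + nr
    g = 1 / np.sqrt(np.abs(h1) ** 2 + np.abs(h2) ** 2 + noise_std ** 2)   # AF normalisation gain

    # Phase 2: the relay broadcasts; source A subtracts its own (self-interference) term using
    # its channel *estimate*, so cancellation is imperfect whenever est_err_var > 0.
    na = noise_std / np.sqrt(2) * (rng.normal(size=n_trials) + 1j * rng.normal(size=n_trials))
    ya = h1 * g * yr + na
    ya_clean = ya - g * h1_hat * h1_hat * xa
    # Detection assumes the composite channel h1*g*h2 is known; only the cancellation uses the estimate.
    xb_hat = np.sign(np.real(np.conj(h1 * g * h2) * ya_clean))
    return float(np.mean(xb_hat != xb))

for err_var in (0.0, 0.01, 0.05):
    print(f"estimation error variance {err_var:.2f}: BER ~ {simulate_ber(20, err_var):.4f}")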
19

Positioning in wireless networks: non-cooperative and cooperative algorithms

Destino, G. (Giuseppe) 06 November 2012 (has links)
In the last few years, location awareness has emerged as a key technology for the future development of mobile, ad hoc and sensor networks. Thanks to location information, several network optimization strategies as well as services can be developed. However, the problem of determining an accurate location, i.e. positioning, is still a challenge, and robust algorithms are yet to be developed. In this thesis, we focus on the development of distance-based non-cooperative and cooperative algorithms, derived within a non-parametric, non-Bayesian framework, specifically a Weighted Least Squares (WLS) optimization. From a theoretical perspective, we study the WLS problem and establish its optimality through the relationship with a Maximum Likelihood (ML) estimator. We investigate the fundamental limits and derive the consistency conditions by creating a connection between Euclidean geometry and inference theory. Furthermore, we derive the closed-form expression of a distance-model-based Cramér-Rao Lower Bound (CRLB), as well as the formulas that characterize information coupling in the Fisher information matrix. Non-cooperative positioning is addressed as follows. We propose a novel framework, namely Distance Contraction, to develop robust non-cooperative positioning techniques. We prove that distance contraction can mitigate the global-minimum problem and that structured distance contraction yields nearly optimal performance in severe channel conditions. Based on these results, we show how classic algorithms such as the Weighted Centroid (WC) and the Non-Linear Least Squares (NLS) can be modified to cope with biased ranging. For cooperative positioning, we derive a novel, low-complexity and nearly optimal global optimization algorithm, namely the Range-Global Distance Continuation method, for use in centralized and distributed positioning schemes. We propose an effective weighting strategy to cope with biased measurements, which consists of a dispersion weight that captures the effect of noise while maximizing the diversity of the information, and a geometric penalty weight that penalizes the assumption of bias-free measurements. Finally, we show the results of a positioning test in which we employ the proposed algorithms using commercial Ultra-Wideband (UWB) devices.
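A small sketch of the weighted least-squares formulation at the core of the thesis, showing the effect of per-link weights (here simply the inverse noise variances) in a Gauss-Newton solver; the anchor layout, noise levels and weighting rule are invented and do not reproduce the dispersion and penalty weights proposed in the thesis.

import numpy as np

rng = np.random.default_rng(7)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0], [5.0, 12.0]])
true_pos = np.array([3.0, 6.0])
noise_std = np.array([0.05, 0.05, 0.05, 1.0, 1.0])      # two links are much noisier (assumed)

true_d = np.linalg.norm(anchors - true_pos, axis=1)
meas_d = true_d + rng.normal(0, noise_std)

def wls_localize(anchors, dists, weights, x0, iters=30):
    """Minimise sum_i w_i * (||x - a_i|| - d_i)^2 with weighted Gauss-Newton steps."""
    x = x0.astype(float)
    sw = np.sqrt(weights)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        r = (d - dists) * sw                       # weighted residuals
        J = sw[:, None] * (x - anchors) / d[:, None]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

x0 = anchors.mean(axis=0)
ls_est = wls_localize(anchors, meas_d, np.ones(len(anchors)), x0)       # all links trusted equally
wls_est = wls_localize(anchors, meas_d, 1.0 / noise_std ** 2, x0)       # noisy links down-weighted

for name, est in [("ordinary LS", ls_est), ("weighted LS", wls_est)]:
    print(f"{name:12s}: position error = {np.linalg.norm(est - true_pos):.3f} m")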
20

Designing Random Sample Synopses with Outliers

Lehner, Wolfgang, Rosch, Philip, Gemulla, Rainer 12 August 2022 (has links)
Random sampling is one of the most widely used means to build synopses of large datasets because random samples can be used for a wide range of analytical tasks. Unfortunately, the quality of the estimates derived from a sample is negatively affected by the presence of 'outliers' in the data. In this paper, we show how to circumvent this shortcoming by constructing outlier-aware sample synopses. Our approach extends the well-known outlier indexing scheme to multiple aggregation columns.
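A minimal sketch of the general outlier-aware synopsis idea: values flagged as outliers are kept exactly, the remainder is sampled uniformly, and an aggregate is estimated from both parts; the MAD-based outlier rule, data distribution and synopsis sizes are assumptions for illustration and are not the paper's outlier-indexing construction.

import numpy as np

rng = np.random.default_rng(8)

# Skewed data: mostly small values plus a handful of extreme outliers.
data = np.concatenate([rng.exponential(10.0, 100_000), rng.uniform(5_000, 50_000, 50)])

def outlier_aware_synopsis(values, sample_size, k=10.0):
    """Store outliers exactly, sample the rest; return (outliers, sample, sampling fraction)."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    is_outlier = np.abs(values - med) > k * mad          # simple MAD-based outlier rule (assumed)
    outliers = values[is_outlier]
    rest = values[~is_outlier]
    sample = rng.choice(rest, size=sample_size, replace=False)
    return outliers, sample, sample_size / len(rest)

def estimate_sum(outliers, sample, frac):
    # Outliers contribute exactly; the sampled part is scaled up by the inverse sampling fraction.
    return outliers.sum() + sample.sum() / frac

outliers, sample, frac = outlier_aware_synopsis(data, sample_size=1_000)
plain_sample = rng.choice(data, size=1_000 + len(outliers), replace=False)   # same synopsis budget

print(f"true sum:               {data.sum():.0f}")
print(f"outlier-aware synopsis: {estimate_sum(outliers, sample, frac):.0f}")
print(f"plain uniform sample:   {plain_sample.sum() * len(data) / len(plain_sample):.0f}")

With the same storage budget, the plain uniform sample either misses the extreme values entirely or over-weights the few it happens to catch, whereas the outlier-aware synopsis accounts for them exactly.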
