11

Estimation and Effects of Imperfect System Parameters on the Performance of Multi-Relay Cooperative Communications Systems

Mehrpouyan, Hani 17 September 2012 (has links)
To date, the majority of research in the area of cooperative communications focuses on maximizing throughput and reliability while assuming perfect channel state information (CSI) and synchronization. This thesis seeks to address performance enhancement and system parameter estimation in cooperative networks while relaxing these idealized assumptions. In Chapter 3 the thesis mainly focuses on training-based channel estimation in multi-relay cooperative networks. Channel estimators that are capable of determining the overall channel gains from source to destination antennas are derived. Next, a new low-feedback and low-complexity scheme is proposed that allows for the coherent combining of signals from multiple relays. Numerical and simulation results show that the combination of the proposed channel estimators and optimization algorithm results in significant performance gains. As communication systems are greatly affected by synchronization parameters, in Chapter 4 the thesis quantitatively analyzes the effects of timing and frequency offset on the performance of communications systems. The modified Cramer-Rao lower bound (MCRLB) under functional transformation is derived and applied to determine lower bounds on the estimation of signal pulse amplitude and signal-to-noise ratio (SNR) in the presence of timing offset and frequency offset, respectively. In addition, it is shown that estimation of timing and frequency offset can be decoupled in most practical settings. The distributed nature of cooperative relay networks may result in multiple timing and frequency offsets. Chapters 5 and 6 address multiple timing and frequency offset estimation using periodically inserted training sequences in cooperative networks with maximum frequency reuse, i.e., space-division multiple access (SDMA) networks. New closed-form expressions for the Cramer-Rao lower bound (CRLB) for multiple timing and multiple frequency offset estimation under different cooperative protocols are derived. The CRLBs are then applied in a novel way to formulate training sequence design guidelines and to determine the effect of network protocol and topology on synchronization parameter estimation. Next, computationally efficient estimators are proposed. Numerical results show that the proposed estimators outperform existing algorithms and reach or approach the CRLB at mid-to-high SNR. When applied to system compensation, simulation results show that application of the proposed estimators allows for synchronized cooperation amongst the nodes within the network. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2010-07-29 16:52:50.272
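The thesis derives CRLBs for multiple timing and frequency offsets across relays; as a single-link point of reference, the classical Cramer-Rao bound for estimating the frequency offset of one complex exponential in white noise (the Rife-Boorstyn result) can be sketched as below. The sample values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def freq_crlb(snr_linear, n_samples, ts):
    """CRLB (in Hz^2) on the variance of any unbiased frequency-offset
    estimate for a single complex exponential in AWGN with unknown
    phase: the classical Rife-Boorstyn bound."""
    return 3.0 / (2.0 * np.pi**2 * ts**2 * snr_linear
                  * n_samples * (n_samples**2 - 1))

# Illustrative numbers: 64-sample training sequence, 1 MHz sampling, 10 dB SNR.
ts, n, snr_db = 1e-6, 64, 10.0
bound = freq_crlb(10**(snr_db / 10), n, ts)
print(f"RMS frequency-error floor: {np.sqrt(bound):.1f} Hz")
```

The N(N² − 1) ≈ N³ term in the denominator is why even modest training lengths drive the frequency-error floor down quickly, consistent with estimators approaching the CRLB at mid-to-high SNR.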
12

Lower Bound Limit Analysis Applications For Solving Planar Stability Problems In Geomechanics

Bhattacharya, Paramita 09 1900 (has links) (PDF)
Limit analysis based upon the theory of plasticity is one of the most useful numerical techniques to determine the failure loads of different civil and mechanical engineering structures for a material following an associated flow rule. The limiting values of the collapse loads, namely, lower and upper bounds, can be bracketed quite accurately with the application of the lower and upper bound theorems of limit analysis. With the advancement of finite elements and different robust optimization techniques, the numerical limit analysis approach in association with finite elements is becoming very popular for assessing the stability of various complicated structures. Although two different optimization methods, namely, linear programming and nonlinear programming, have both been successfully implemented by various researchers for solving different stability problems in geomechanics, the linear programming method is employed in the present thesis due to its inherent advantage in implementation and ease in achieving convergence. The objectives of the present thesis are (i) to improve upon the existing lower bound limit analysis method, in combination with finite elements and linear programming, with the intention of reducing the computational time and the associated memory requirement, and (ii) to apply the existing lower bound finite element limit analysis to various important planar stability problems in geotechnical engineering. With reference to the first objective, two new methods have been introduced in this thesis to improve upon the existing computational procedure for solving geomechanics stability problems with the usage of limit analysis, finite elements and linear programming. In the first method, method-I, the order of the yield polygon within the chosen domain is varied based on the proximity of the stress state to yield, so that a higher order polygon need not be used everywhere in the problem domain. In the second method, method-II, only a few selected sides, rather than all, of the higher order yield polygon used to linearize the Mohr-Coulomb yield function are employed. The two proposed methods have been applied to compute the ultimate bearing capacity of smooth as well as rough strip footings for various soil friction angles. It has been noticed that both of the proposed methods reduce the CPU time and the total number of inequality constraints required as compared to the existing lower bound linear programming method used in the literature. With reference to the second objective, a few important planar stability problems in geomechanics associated with the interference of footings and vertical anchors have been solved in the present thesis. Footings are essentially used to transfer the compressive loads of superstructures to the underlying soil media. On the other hand, vertical anchors are used for generating passive support to retaining walls, sheet piles and bulkheads. A large number of research investigations have been reported in the literature to compute the collapse load for a single isolated strip footing and a single vertical anchor. It is a common practice to estimate the bearing capacity of footings or the pullout capacity of anchors without considering the effect of interference.
There is, however, clear evidence from the available literature that (i) the ultimate bearing capacity of footings, and (ii) the ultimate pullout capacity of anchors, are significantly affected by their interference. Based on different available methods, the interference of footings, in a group of two footings as well as an infinite number of multiple footings, has been examined by different researchers in order to compute the ultimate bearing capacity considering the group effect. However, no research study has determined the ultimate bearing capacity of interfering footings with the usage of the lower bound limit analysis. In the present thesis, the ultimate bearing capacity of two, and of an infinite number of, multiple strip footings placed on sandy soil with a horizontal ground surface has been determined. The analysis has been performed for smooth as well as rough footings. The failure loads for interfering footings are found to be always greater than that for a single isolated footing. The effect of the footings' interference is expressed in terms of an efficiency factor (ξγ), where ξγ is defined as the ratio of the magnitude of the failure load for a footing of width B in the presence of the other footing to the magnitude of the failure load of an isolated strip footing having the same width. The effect of the interference on the failure load (i) is always greater for rough footings than for smooth footings, (ii) increases with an increase in soil friction angle φ, and (iii) becomes almost negligible beyond a spacing S > 3B. It is observed that the failure load for a footing in a group of an infinite number of multiple strip footings is always greater than that for two interfering footings. Attempts have been made in this thesis to investigate the group effect of two vertical anchors on their horizontal pullout resistance (PuT). The anchors are considered to be embedded at a certain clear spacing (S) along the same vertical plane. The group effect has been studied separately for anchors embedded in (i) sandy soil and (ii) undrained clay. For anchors embedded in clays, a linear increase of soil cohesion with depth has also been taken into consideration. The magnitude of PuT has been obtained in terms of a group efficiency factor, ηγ for sand and ηc for clay, with respect to the failure load for a single isolated vertical plate with the same H/B. The pullout capacity of a group of two anchors, either in sand or in undrained clay, is considerably greater than that of a single isolated anchor. The magnitudes of ηγ and ηc become maximum at a certain critical value of S/B, which has been found to lie generally between 0.5 and 1. The value of ηγ for a given S/B has been found to be larger for greater values of H/B, φ and δ. For greater values of H/B, the group effect becomes more significant in contributing to the pullout resistance. The horizontal pullout capacity of a single isolated vertical anchor embedded in sand in the presence of pseudo-static horizontal earthquake body forces has also been determined by using the lower bound finite element limit analysis. The variation of the pullout factor Fγ with changes in the embedment ratio of smooth and rough anchor plates for different values of the horizontal earthquake acceleration coefficient (αh) has been investigated. The analysis clearly reveals that the pullout resistance decreases quite significantly with an increase in the magnitude of the earthquake acceleration coefficient.
For the various problems selected in the present thesis, the failure patterns have also been drawn in order to understand the development of the plastic zones within the chosen domain. The results obtained from the analysis have been thoroughly compared with those reported in the literature.
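As a sketch of the linearization step that both proposed methods build on: in plane strain, the Mohr-Coulomb criterion in (σx, σy, τxy) can be replaced by a p-sided interior polygon of linear inequalities (in the style of Sloan's lower bound formulation), so the yield condition enters the linear program as p rows per stress point. The code below is an illustration under a tension-positive sign convention, not the thesis's implementation; method-I would vary p over the domain and method-II would retain only a subset of the rows.

```python
import numpy as np

def mc_linear_constraints(c, phi, p):
    """Rows (A_k, B_k, C_k) and right-hand side D of the inscribed p-sided
    polygon A_k*sx + B_k*sy + C_k*txy <= D linearizing the plane-strain
    Mohr-Coulomb criterion (tension positive).  The cos(pi/p) shrink keeps
    the polygon inside the true yield surface, so the computed collapse
    load remains a rigorous lower bound."""
    k = np.arange(1, p + 1)
    theta = 2.0 * np.pi * k / p
    shrink = np.cos(np.pi / p)
    A = np.cos(theta) + np.sin(phi) * shrink
    B = np.sin(phi) * shrink - np.cos(theta)
    C = 2.0 * np.sin(theta)
    D = 2.0 * c * np.cos(phi) * shrink
    return np.column_stack([A, B, C]), np.full(p, D)

rows, rhs = mc_linear_constraints(c=10e3, phi=np.radians(30), p=24)
print(rows.shape)  # (24, 3): 24 inequality rows per stress point
```

Doubling p doubles the inequality count at every stress point, which is exactly the cost that the two proposed methods attack.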
13

International Housing Markets, Unconventional Monetary Policy and the Zero Lower Bound

Huber, Florian, Punzi, Maria Teresa 25 January 2016 (has links) (PDF)
In this paper we propose a time-varying parameter VAR model for the housing market in the United States, the United Kingdom, Japan and the Euro Area. For these four economies, we answer the following research questions: (i) How can we evaluate the stance of monetary policy when the policy rate hits the zero lower bound? (ii) Can developments in the housing market still be explained by policy measures adopted by central banks? (iii) Did central banks succeed in mitigating the detrimental impact of the financial crisis on selected housing variables? We analyze the relationship between unconventional monetary policy and the housing markets by using the shadow interest rate estimated by Krippner (2013b). Our findings suggest that the monetary policy transmission mechanism to the housing market has not changed with the implementation of quantitative easing or forward guidance, and that central banks can affect the composition of an investor's portfolio through investment in housing. A counterfactual exercise provides some evidence that unconventional monetary policy has been particularly successful in dampening the consequences of the financial crisis on housing markets in the United States, while the effects are more muted in the other countries considered in this study. (authors' abstract) / Series: Department of Economics Working Paper Series
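As a minimal sketch of the mechanics of putting a shadow rate into a VAR: the paper estimates a time-varying parameter VAR, but a constant-parameter VAR on synthetic stand-in series already shows how impulse responses to a shadow-rate shock are read off. Variable names and dynamics below are placeholders, not the paper's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Synthetic stand-in series; in the paper these would be housing-market
# variables, macro aggregates and the Krippner (2013b) shadow rate.
T, k = 200, 4
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=0.1, size=k)  # stable VAR(1)
df = pd.DataFrame(y, columns=["house_prices", "output",
                              "inflation", "shadow_rate"])

res = VAR(df).fit(maxlags=2, ic="aic")
irf = res.irf(12)
# Response of house prices to a shadow-rate shock over 12 periods:
print(irf.irfs[:, df.columns.get_loc("house_prices"),
               df.columns.get_loc("shadow_rate")])
```

The point of the shadow rate is precisely that this column keeps varying below zero when the observed policy rate is stuck at the lower bound.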
14

The shortage of safe assets in the US investment portfolio: Some international evidence

Huber, Florian, Punzi, Maria Teresa 03 1900 (has links) (PDF)
This paper develops a Bayesian Global VAR (GVAR) model to track the international transmission dynamics of two stylized shocks, namely a supply and a demand shock to US-based safe assets. Our main findings can be summarized as follows. First, we find that (positive) supply-side shocks lead to pronounced increases in economic activity which spill over to foreign countries. The impact of supply-side shocks can also be seen for other quantities of interest, most notably equity prices and exchange rates in Europe. Second, a demand-side shock leads to an appreciation of the US dollar and generally lower yields on US securities, forcing investors to shift their portfolios towards foreign fixed income securities. This yields sizable positive effects on US output and equity prices and a general decrease in financial market volatility. / Series: Department of Economics Working Paper Series
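The step that makes a VAR "global" is linking each country model to the others through trade-weighted foreign ("star") variables; a minimal sketch with illustrative weights:

```python
import numpy as np

def foreign_variables(X, W):
    """GVAR 'star' construction: for each country i, x*_it = sum_j w_ij x_jt,
    with w_ii = 0 and each row of W summing to one.
    X has shape (T, N): one column per country for a given variable."""
    return X @ W.T

# Three countries with fixed, purely illustrative trade weights.
W = np.array([[0.0, 0.6, 0.4],
              [0.7, 0.0, 0.3],
              [0.5, 0.5, 0.0]])
X = np.random.default_rng(1).normal(size=(100, 3))
X_star = foreign_variables(X, W)  # (100, 3): foreign counterpart per country
```

Stacking each country's own and star variables into one system is what lets a US safe-asset shock propagate to European equity prices and exchange rates within a single estimated model.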
15

Global Spillover Effects from Unconventional Monetary Policy During the Crisis

Solís González, Brenda January 2015 (has links)
This work investigates the international spillover effects and transmission channels of Unconventional Monetary Policy (UMP) of the major central banks of the United States, the United Kingdom, Japan and Europe to Latin American countries. A Global VAR model is estimated to analyze the impact on output, inflation, credit, equity prices and money growth in the selected countries. Results suggest that there are indeed international spillovers to the region, with money growth, stock prices and international reserves as the main transmission channels. In addition, outcomes differ between countries and variables, implying not only that transmission channels are not the same across the region but also that the effects of monetary policy are not distributed equally. Furthermore, evidence is found that for some countries transmission channels may have changed due to the crisis. Finally, the effects of UMP during the crisis were in general positive, with the exception of Japan, indicating that policies from this country brought more costs than benefits to the region. Keywords: Zero Lower Bound, Unconventional Monetary Policy, International Spillovers, Global VAR, GVAR.
16

Trajectographie Passive sans manœuvre de l’observateur / Target motion analysis without maneuver of the observer

Clavard, Julien 18 December 2012 (has links)
The conventional bearings-only target motion analysis methods assume that the source is in constant-velocity motion (constant speed and heading) while the observer maneuvers. In this thesis, we reassess this hypothesis and propose another model of the kinematics of the source: the constant-turn motion (an arc of a circle followed at constant speed). We prove that this kind of trajectory is observable by an observer in constant-velocity motion. Then, we study the contribution of additional frequency measurements and the feasibility of target motion analysis using range-only measurements. The case of a source in constant-velocity motion with a maneuvering observer is examined for this last type of measurement. Each case leads to an analysis of the observability of the trajectory of the source and to the development of the associated maximum likelihood estimator. We show that this estimator most often proves to be efficient.
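A minimal sketch of the constant-turn measurement model studied here: a source moving on a circle at constant speed, observed in bearings by a non-maneuvering constant-velocity observer. All numerical values are illustrative, and the maximum-likelihood estimator is only indicated in a comment.

```python
import numpy as np

def constant_turn_track(xc, yc, r, omega, theta0, t):
    """Source position on a circle of radius r traversed at constant
    angular rate omega (speed = r * omega): the constant-turn model."""
    return (xc + r * np.sin(omega * t + theta0),
            yc - r * np.cos(omega * t + theta0))

t = np.arange(0.0, 600.0, 4.0)                 # one bearing every 4 s
sx, sy = constant_turn_track(4000.0, 6000.0, 1500.0, 0.005, 0.0, t)
ox, oy = 5.0 * t, np.zeros_like(t)             # observer: straight line, 5 m/s
truth = np.arctan2(sy - oy, sx - ox)           # bearings, math convention
noisy = truth + np.random.default_rng(2).normal(scale=np.radians(0.5),
                                                size=t.size)
# The ML estimator would minimize sum((noisy - predicted)^2) over
# (xc, yc, r, omega, theta0), e.g. with scipy.optimize.least_squares.
```

The observability result says these noiseless bearings pin down the circle uniquely even though the observer never turns, which is exactly what fails for a straight-line source.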
17

Monetary Policy and the Great Recession

Bundick, Brent January 2014 (has links)
Thesis advisor: Susanto Basu / The Great Recession is arguably the most important macroeconomic event of the last three decades. Prior to the collapse of national output during 2008 and 2009, the United States experienced a sustained period of good economic outcomes with only two mild and short recessions. In addition to its severity, several characteristics of this recession mark it as a unique event in the recent economic history of the United States. Some of these unique features include the following. Large Increase in Uncertainty About the Future: The Great Recession and its subsequent slow recovery have been marked by a large increase in uncertainty about the future. Uncertainty, as measured by the VIX index of implied stock market volatility, peaked at the end of 2008 and has remained volatile over the past few years. Many economists and the financial press believe the large increase in uncertainty may have played a role in the Great Recession and the subsequent slow recovery. For example, Kocherlakota (2010) states, "I've been emphasizing uncertainties in the labor market. More generally, I believe that overall uncertainty is a large drag on the economic recovery." In addition, Nobel laureate economist Peter Diamond argues, "What's critical right now is not the functioning of the labor market, but the limits on the demand for labor coming from the great caution on the side of both consumers and firms because of the great uncertainty of what's going to happen next." Zero Bound on Nominal Interest Rates: The Federal Reserve plays a key role in offsetting the negative impact of fluctuations in the economy. During normal times, the central bank typically lowers nominal short-term interest rates in response to declines in inflation and output. Since the end of 2008, however, the Federal Reserve has been unable to lower its nominal policy rate due to the zero lower bound on nominal interest rates. Prior to the Great Recession, the Federal Reserve had not encountered the zero lower bound in the modern post-war period. The zero lower bound represents a significant constraint on monetary policy's ability to fully stabilize the economy. Unprecedented Use of Forward Guidance: Even though the Federal Reserve remains constrained by the zero lower bound, the monetary authority can still affect the economy through expectations about future nominal policy rates. By providing agents in the economy with forward guidance on the future path of policy rates, monetary policy can stimulate the economy even when current policy rates remain constrained. Throughout the Great Recession and the subsequent recovery, the Federal Reserve provided the economy with explicit statements about the future path of monetary policy. In particular, the central bank has discussed the timing and macroeconomic conditions necessary to begin raising its nominal policy rate. Using this policy tool, the Federal Reserve continues to respond to the state of the economy at the zero lower bound. Large Fiscal Expansion: During the Great Recession, the United States engaged in a very large program of government spending and tax reductions. The massive fiscal expansion was designed to raise national income and help mitigate the severe economic contraction. A common justification for the fiscal expansion is the reduced capacity of the monetary authority to stimulate the economy at the zero lower bound.
Many economists argue that the benefits of increasing government spending are significantly higher when the monetary authority is constrained by the zero lower bound. The goal of this dissertation is to better understand how these various elements contributed to the macroeconomic outcomes during and after the Great Recession. In addition to understanding each of the elements above in isolation, a key component of this analysis focuses on the interaction between them. A key unifying theme across all of the elements is the role of monetary policy. In modern models of the macroeconomy, the monetary authority is crucial in determining how a particular economic mechanism affects the macroeconomy. In the first and second chapters, I show that monetary policy plays a key role in offsetting the negative effects of increased uncertainty about the future. My third chapter highlights how assumptions about monetary policy can change the impact of various shocks and policy interventions. For example, suppose the fiscal authority wants to increase national output by increasing government spending. A key calculation in this situation is the fiscal multiplier, which is the dollar increase in national income for each dollar of government spending. I show that fiscal multipliers are dramatically affected by assumptions about monetary policy even if the monetary authority is constrained by the zero lower bound. The unique nature of the elements discussed above makes analyzing their contribution difficult using standard macroeconomic tools. The most popular method for analyzing dynamic, stochastic general equilibrium models of the macroeconomy relies on linearizing the model around its deterministic steady state and examining the local dynamics around that approximation. However, the nature of the unique elements above makes it impossible to fully capture the dynamics using local linearization methods. For example, the zero lower bound on nominal interest rates often binds far from the deterministic steady state of the model. Therefore, linearization around the steady state cannot capture the dynamics associated with the zero lower bound. The overall goal of this dissertation is to use and develop tools in computational macroeconomics to help better understand the Great Recession. Each of the chapters outlined below examines at least one of the topics listed above and its role in explaining the macroeconomics of the Great Recession. In particular, the essays highlight the role of the monetary authority in generating the observed macroeconomic outcomes over the past several years. Can increased uncertainty about the future cause a contraction in output and its components? In joint work with Susanto Basu, my first chapter examines the role of uncertainty shocks in a one-sector, representative-agent, dynamic, stochastic general-equilibrium model. When prices are flexible, uncertainty shocks are not capable of producing business-cycle comovements among key macroeconomic variables. With countercyclical markups through sticky prices, however, uncertainty shocks can generate fluctuations that are consistent with business cycles. Monetary policy usually plays a key role in offsetting the negative impact of uncertainty shocks. If the central bank is constrained by the zero lower bound, then monetary policy can no longer perform its usual stabilizing function and higher uncertainty has even more negative effects on the economy.
We calibrate the size of uncertainty shocks using fluctuations in the VIX and find that increased uncertainty about the future may indeed have played a significant role in worsening the Great Recession, which is consistent with statements by policymakers, economists, and the financial press. In sole-authored work, the second chapter continues to explore the interactions between the zero lower bound and increased uncertainty about the future. From a positive perspective, the essay further shows why increased uncertainty about the future can reduce a central bank's ability to stabilize the economy. The inability to offset contractionary shocks at the zero lower bound endogenously generates downside risk for the economy. This increase in risk induces precautionary saving by households, which causes larger contractions in output and inflation and prolongs the zero lower bound episode. The essay also examines the normative implications of uncertainty and shows how monetary policy can attenuate the negative effects of higher uncertainty. When the economy faces significant uncertainty, optimal monetary policy implies further lowering real rates by committing to a higher price-level target. Under optimal policy, the monetary authority accepts higher inflation risk in the future to minimize downside risk when the economy hits the zero lower bound. In the face of large shocks, raising the central bank's inflation target can attenuate much of the downside risk posed by the zero lower bound. In my third chapter, I examine how assumptions about monetary policy affect the economy at the zero lower bound. Even when current policy rates are zero, I argue that assumptions regarding the future conduct of monetary policy are crucial in determining the effects of real fluctuations at the zero lower bound. Under standard Taylor (1993)-type policy rules, government spending multipliers are large, improvements in technology cause large contractions in output, and structural reforms that decrease firm market power are bad for the economy. However, these policy rules imply that the central bank stops responding to the economy at the zero lower bound. This assumption is inconsistent with recent statements and actions by monetary policymakers. If monetary policy endogenously responds to current economic conditions using expectations about future policy, then spending multipliers are much smaller and increases in technology and firm competitiveness remain expansionary. Thus, the model-implied benefits of higher government spending are highly sensitive to the specification of monetary policy. / Thesis (PhD) — Boston College, 2014. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
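A one-line illustration of the constraint at the center of several chapters: a Taylor (1993)-type rule truncated at zero, which mechanically stops responding exactly when inflation and output fall furthest. The coefficients are conventional textbook values, not the dissertation's calibration.

```python
def policy_rate(pi, y_gap, r_star=0.02, pi_star=0.02,
                phi_pi=1.5, phi_y=0.5):
    """Taylor-type rule with a zero lower bound: the max(0, .) is what
    removes the usual stabilizing response in a deep slump."""
    desired = r_star + pi + phi_pi * (pi - pi_star) + phi_y * y_gap
    return max(0.0, desired)

print(policy_rate(pi=0.02, y_gap=0.0))     # normal times: 4% nominal rate
print(policy_rate(pi=-0.01, y_gap=-0.05))  # slump: desired rate -6%, rule pinned at 0
```

Whether the model assumes the central bank follows this truncated rule forever, or instead keeps responding through expected future rates, is exactly what the third chapter shows drives the size of fiscal multipliers.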
18

Essays on Macroeconomics and Asset Pricing

Eiermann, Alexander January 2017 (has links)
Thesis advisor: Peter Ireland / A significant theoretical literature suggests that the effects of open market operations and large scale asset purchases are limited when short-term interest rates are constrained by the zero-lower-bound (ZLB). This view is supported by a growing body of empirical evidence that points to the tepid response of the U.S. economy to extraordinary policy measures implemented by the Federal Reserve (Fed) during the past several years. In the first essay, Effective Monetary Policy at the Zero-Lower-Bound, I show that permanent open market operations (POMOs), defined as financial market interventions that permanently increase the supply of money, remain relevant at the ZLB and can increase output and inflation. Consequently, I argue that the limited success of Fed policy in recent years may be due in part to the fact that it failed to generate sufficient money creation to support economic recovery following the Great Recession. I then demonstrate that conducting POMOs at the ZLB may improve welfare when compared to a broad range of policy regimes, and conclude by conducting a robustness exercise to illustrate that money creation remains relevant at the ZLB even when it is not necessarily permanent. With these results in hand, I explore the consequences of Fed QE more directly in a framework in which asset purchases are an independent instrument of monetary policy. In the second essay, Effective Quantitative Easing at the Zero-Lower-Bound, I show that the observed lack of transmission between U.S. monetary policy and economic activity is a consequence of the fact that the Fed engaged in what I define as sterilized QE: temporary asset purchases that have a limited effect on the money supply. Conversely, I show that asset purchase programs geared towards generating sustained increases in the money supply may significantly attenuate the output and inflation losses associated with adverse economic shocks and the ZLB constraint. Furthermore, these equilibrium outcomes may be achieved with a smaller volume of asset purchases. My results imply that Fed asset purchase programs designed to offset the observed declines in the U.S. money supply could have been a more effective and efficient means of providing economic stimulus during the recovery from the Great Recession. In the third essay, titled Buyout Gold: MIDAS Estimators and Private Equity, which is joint work with Apollon Fragkiskos, Harold Spilker, and Russ Wermers, we develop a new approach to studying private equity returns using a data set first introduced in Fragkiskos et al. (2017). Our innovation is that we adopt a mixed data sampling (MIDAS) framework and model quarterly private equity returns as a function of high frequency factor prices. This approach allows us to endogenize time aggregation and use within-period information that may be relevant to pricing private equity returns in a single, parsimonious framework. We find that our MIDAS framework offers superior performance in terms of generating economically meaningful factor loadings and in-sample and out-of-sample fit using index and vintage-level returns when compared with other methods from the literature. Results using fund-level data are mixed, but MIDAS does display a slight edge. Concerning appropriate time aggregation, we show that there is significant heterogeneity at the vintage level. This implies that highly aggregated private equity data may not properly reflect underlying performance in the cross section.
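A sketch of the MIDAS building block the third essay relies on: an exponential Almon lag polynomial that collapses a quarter of daily factor returns into a single quarterly regressor. The parameter values and the 63-trading-day quarter are illustrative assumptions, not the essay's estimates.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon weights used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), summing to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

def midas_aggregate(daily_factor, weights):
    """One quarter of daily factor returns -> one regressor for the
    quarterly private-equity return regression."""
    return float(np.dot(weights, daily_factor))

w = exp_almon_weights(theta1=0.01, theta2=-0.005, n_lags=63)
x_q = midas_aggregate(np.random.default_rng(3).normal(size=63), w)
```

Estimating (theta1, theta2) jointly with the factor loadings is what lets the data choose the within-quarter time aggregation instead of fixing it in advance.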
19

Bounds on RF cooperative localization for video capsule endoscopy

Ye, Yunxing 29 April 2013 (has links)
Wireless video capsule endoscopy (VCE) has been in use for over a decade; it uses radio frequency (RF) signals to transmit approximately fifty-five thousand clear pictures of the inside of the GI tract to a body-mounted sensor array. However, the physician has no information about the exact location of the capsule inside the GI tract with which to associate the pictures showing abnormalities such as bleeding or tumors. It is desirable to use the same RF signal for localization of the VCE as it passes through the human GI tract. In this thesis, we address the accuracy limits of RF localization techniques for VCE localization applications. We present an assessment of the accuracy of cooperative localization of the VCE using RF signals, with particular emphasis on localization inside the small intestine. We derive the Cramer-Rao Lower Bound (CRLB) for cooperative location estimators using the received signal strength (RSS) or the time of arrival (TOA) of the RF signal. Our derivations are based on a three-dimensional human body model, an existing model for RSS propagation from inside the organs to the body surface, and a TOA ranging error model for the effects of the non-homogeneity of the human body on the TOA of the RF signals. Using the models for RSS and TOA errors, we first calculate the 3D CRLBs for cooperative localization of the VCE in the three major digestive organs in the path of the GI tract: the stomach, the small intestine and the large intestine. Then we analyze the performance of localization techniques on a typical path inside the small intestine. Our analysis includes the effects of the number of external sensors, the external sensor array topology, the number of VCEs in cooperation, and the random variations in transmit power from the capsule.
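A simplified sketch of how an RSS position bound of this kind is computed: under a log-distance path-loss model with log-normal shadowing, the Fisher information for the capsule position follows from the gradient of received power with respect to position. The path-loss exponent, shadowing spread and sensor coordinates below are assumed values; the thesis's implant-to-body-surface channel model, the TOA bound and the cooperative extensions add structure beyond this.

```python
import numpy as np

def rss_position_crlb(p, anchors, alpha=4.0, sigma_db=5.0):
    """CRLB on the 3D position RMSE (metres) from RSS measurements under
    P_i = P0 - 10*alpha*log10(d_i) + shadowing, shadowing ~ N(0, sigma_db^2).
    p: true capsule position (3,); anchors: body-surface sensors (M, 3)."""
    diff = p - anchors                               # (M, 3)
    d2 = np.sum(diff**2, axis=1)                     # squared ranges
    grads = -(10.0 * alpha / np.log(10.0)) * diff / d2[:, None]
    fim = grads.T @ grads / sigma_db**2              # 3x3 Fisher information
    return float(np.sqrt(np.trace(np.linalg.inv(fim))))

anchors = np.array([[0.10, 0.00, 0.00], [-0.10, 0.05, 0.10],
                    [0.00, -0.10, 0.20], [0.05, 0.10, 0.30]])  # metres
print(rss_position_crlb(np.array([0.0, 0.0, 0.15]), anchors))
```

Adding sensors (rows of anchors) or cooperating capsules adds rank-one terms to the Fisher information, which is how the topology and cooperation effects in the analysis enter the bound.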
20

Barometer-Assisted 3D Indoor WiFi Localization for Smart Devices: Map Selection and Performance Evaluation

Ying, Julang 05 May 2016 (has links)
Recently, indoor localization has become a hot topic in both industry and academia. Smartphones are good candidates for localization since they carry various sensors, such as GPS, Wi-Fi, accelerometers and barometers, which can be used to estimate the current location. But there are still many challenges for 3D indoor geolocation using smartphones, among which map selection and 3D performance evaluation are the most common and crucial. In the indoor environment, the popular outdoor Google maps cannot be utilized since we need maps showing the layout of every individual floor. Also, the layouts of different floors differ from one another. Therefore, algorithms are required to detect whether we are inside or outside a building and to determine on which floor we are located, so that an appropriate map can be selected accordingly. For Wi-Fi based indoor localization, the performance of location estimation is closely related to the algorithms and deployment in use. It is difficult to find a general approach that can be used to evaluate any localization system. On one hand, since the RF signal suffers extra loss when traveling through the ceilings between floors, its propagation properties differ from the empirical ones, and consequently a new propagation model is needed for 3D scenarios. On the other hand, the properties of each sensor are unique, so corresponding models are required before we analyze the localization scheme. In-depth investigation of possible hybrid schemes is also needed when more than one sensor is operated in the localization system. In this thesis, we first design two algorithms that use the GPS signal to detect whether the smart device is operating inside or outside a building, which is called outdoor-indoor transition detection. We also design another algorithm that uses barometer data to determine on which floor we are located, which is considered a multi-floor transition detection. With three scenarios designed inside the Atwater Kent Laboratory building (AK building) at Worcester Polytechnic Institute (WPI), we collected raw data from an Android 4.3 phone and conducted an experimental analysis based on it. An efficient way to quantitatively evaluate 3D localization systems is to use the Cramer-Rao Lower Bound (CRLB), the lower bound on the estimation error of any unbiased localization system. The characteristics of Wi-Fi and barometer signals are explored and proper models are introduced as a foundation. Then we extend the 2D CRLB to 3D so that it fits our scenarios. A barometer-assisted CRLB is introduced as an improvement over the existing Wi-Fi received signal strength (RSS)-only scheme, and the two schemes are compared using contour plots and statistical analysis in every scenario.
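A sketch of the barometric step of the multi-floor transition detection: convert pressure to altitude with the international barometric formula, difference against a ground reference, and quantize by an assumed floor height. The floor height and pressure readings below are illustrative, not measurements from the AK building.

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """International barometric formula: altitude (m) from pressure (hPa)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_index(p_hpa, p_ground_hpa, floor_height_m=3.5):
    """Floor number from the pressure difference to a ground-floor
    reference; floor_height_m is building-specific (assumed here)."""
    dh = pressure_to_altitude(p_hpa) - pressure_to_altitude(p_ground_hpa)
    return round(dh / floor_height_m)

print(floor_index(1012.10, 1013.25))  # ~9.6 m above reference -> floor 3
```

Differencing against a simultaneously logged reference pressure is what makes such a scheme robust to weather drift, which moves both readings together.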
