451

Mobile Location Method Using Least Range and Clustering Techniques for NLOS Environments

Wang, Chien-chih 09 February 2007 (has links)
Mobile location techniques have become a popular research topic as the number of applications for location information grows rapidly. The U.S. Federal Communications Commission (FCC) ruling of 1996 requiring the location of mobile phones is one of the driving forces behind research into solutions. In wireless communication systems, however, non-line-of-sight (NLOS) propagation is a key and difficult obstacle to improving mobile location estimation. We propose an efficient location algorithm that mitigates the influence of NLOS error. First, based on the geometric relationship between the known positions of the base stations, the theorem of the "Fermat point" is used to collect candidate positions (CPs) of the mobile station. Then, a set of weighting parameters is computed using a density-based clustering method. Finally, the location of the mobile station is estimated by solving for the optimum of the weighted objective function. Different distributions of NLOS error models are used to evaluate the performance of this method. Simulation results show that the least range measure (LRM) algorithm performs slightly better than the density-based clustering algorithm (DCA), and is superior in location accuracy to the range-based linear lines of position algorithm (LLOP) and the range scaling algorithm (RSA) under different NLOS environments. The simulation results also satisfy the location accuracy requirements of Enhanced 911 (E-911).
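The abstract gives only the outline of the algorithm, so the following is a loose sketch rather than the thesis's method: candidate positions are taken here as pairwise circle intersections of the range measurements (standing in for the Fermat-point construction, whose details the abstract does not give), weighted by a crude local-density score in place of the density-based clustering step, and combined into a weighted estimate. All geometry, the neighbourhood radius, and the NLOS biases are hypothetical.

```python
import numpy as np

def candidate_positions(bs, r):
    """Pairwise circle intersections of (base station, measured range)
    pairs -- a stand-in for the Fermat-point-based candidate collection."""
    cps = []
    for i in range(len(bs)):
        for j in range(i + 1, len(bs)):
            p, q = np.asarray(bs[i], float), np.asarray(bs[j], float)
            d = np.linalg.norm(q - p)
            if d == 0 or d > r[i] + r[j] or d < abs(r[i] - r[j]):
                continue                      # circles do not intersect
            a = (r[i]**2 - r[j]**2 + d**2) / (2 * d)
            h = np.sqrt(max(r[i]**2 - a**2, 0.0))
            mid = p + a * (q - p) / d
            perp = np.array([-(q - p)[1], (q - p)[0]]) / d
            cps.extend([mid + h * perp, mid - h * perp])
    return np.array(cps)

def density_weights(cps, radius):
    """Weight each candidate by the number of neighbours within
    `radius` -- a crude density-based clustering score."""
    d = np.linalg.norm(cps[:, None, :] - cps[None, :, :], axis=-1)
    w = (d < radius).sum(axis=1).astype(float)
    return w / w.sum()

def estimate_location(bs, r, radius=50.0):
    cps = candidate_positions(bs, r)
    w = density_weights(cps, radius)
    return (w[:, None] * cps).sum(axis=0)     # weighted centroid

# Toy scenario: three base stations; NLOS adds a positive range bias.
bs = [(0.0, 0.0), (1000.0, 0.0), (500.0, 900.0)]
true_ms = np.array([450.0, 300.0])
ranges = [np.linalg.norm(true_ms - np.array(b)) + bias
          for b, bias in zip(bs, (80.0, 20.0, 50.0))]
print(estimate_location(bs, ranges))
```

The weighted centroid is the minimizer of a weighted sum of squared distances, which loosely plays the role of the weighted objective function mentioned in the abstract.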
452

A Structured Segment Tree Approach to Supporting Range Queries in P2P Systems

Huang, Tzu-lun 05 July 2007 (has links)
A Peer-to-Peer (P2P) system is a distributed system whose component nodes participate in similar roles. Every user node (peer) can exchange and contribute its resources to others in the system. Just as peers may dynamically join and leave the system, data will also be inserted into and removed from the system dynamically. Given a certain range, a range query finds any data item whose value lies within that range; for example, a range query can find all the Beatles' works between 1961 and 1968. However, once range data is distributed over a P2P system through the hash function widely used in many P2P systems, the continuity of the range data is no longer guaranteed, so finding the scattered data whose values lie within a certain range is costly in a P2P system. The Distributed Segment Tree (DST) method preserves the local continuity of the range data at each node by using a segment tree, and it can break any given range into the minimum number of node intervals whose union constitutes the whole requested range. The DST method works on top of the Distributed Hash Table (DHT) logic, so it can be applied in any DHT-based P2P system. However, the data distribution of the DST method may cause overlapping: when searching a data range, the DST method sends more requests than are really needed. Although the DST method includes a Downward Load Stripping Mechanism, the load on peers still may not be balanced. The main reason for these problems is that the DST method applies the DHT logic to the P2P system. Therefore, in this thesis, we propose a method called the Structured Segment Tree (SST), which does not use the DHT logic but embeds the structure of the segment tree into the P2P system; in fact, the P2P network topology of an SST is the structure of a segment tree. Unlike a DST, an SST fully reflects the properties of the original segment tree. Each peer in our proposed P2P system represents a node of a segment tree. Data intervals at the same level are continuous and do not overlap with each other, and the union of the data intervals at a level with full nodes is exactly the whole data range the P2P system can support. When searching a data range, the SST method sends only as many requests as are needed. In addition, we add sibling links to preserve spatial locality and speed up searches. On the issue of load balance, our SST method also performs better than the DST method. Our simulations show that the SST method routes through fewer peers to locate the requested range data than the DST method, and that the load under our method is more balanced than under the DST method.
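The key property both DST and SST inherit from the segment tree is that any query range decomposes into a small set of canonical node intervals, each served by one peer. A minimal sketch of that decomposition (the integer key space and node boundaries are assumptions for illustration):

```python
def decompose(node_lo, node_hi, q_lo, q_hi):
    """Break the query [q_lo, q_hi] into canonical segment-tree node
    intervals inside [node_lo, node_hi]. In an SST-style overlay each
    interval maps to one peer, so the count bounds the request messages."""
    if q_hi < node_lo or node_hi < q_lo:
        return []                        # disjoint: no request needed
    if q_lo <= node_lo and node_hi <= q_hi:
        return [(node_lo, node_hi)]      # fully covered: one request
    mid = (node_lo + node_hi) // 2
    return (decompose(node_lo, mid, q_lo, q_hi) +
            decompose(mid + 1, node_hi, q_lo, q_hi))

# A tree over keys [0, 15]: range [3, 12] splits into 4 node intervals.
print(decompose(0, 15, 3, 12))    # [(3, 3), (4, 7), (8, 11), (12, 12)]
```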
453

Wet Deposition of Radon Decay Products and its Relation with Long-Range Transported Radon

Yamazawa, H., Matsuda, M., Moriizumi, J., Iida, T. 08 1900 (has links)
No description available.
454

A Recursive Relative Prefix Sum Approach to Range Queries in Data Warehouses

Wu, Fa-Jung 07 July 2002 (has links)
Data warehouses contain data consolidated from several operational databases and provide historical, summarized data which is more appropriate for analysis than detailed, individual records. On-Line Analytical Processing (OLAP) provides advanced analysis tools to extract information from data stored in a data warehouse. OLAP is designed to provide aggregate information that can be used to analyze the contents of databases and data warehouses. A range query applies an aggregation operation over all selected cells of an OLAP data cube, where the selection is specified by providing ranges of values for numeric dimensions. Range sum queries are very useful in finding trends and in discovering relationships between attributes in the database. One method, the prefix sum method, promises that any range sum query on a data cube can be answered in constant time by precomputing some auxiliary information; however, it is hampered by its update cost. Today's interactive data analysis applications, which provide current or "near current" information, require fast response times and reasonable update times. Since the size of a data cube is exponential in the number of its dimensions, rebuilding the entire data cube is very costly and unrealistic. To cope with this dynamic data cube problem, several strategies have been proposed. They all use specific data structures, which require extra storage, to answer range sum queries quickly. For example, the double relative prefix sum method makes use of three components to store auxiliary information: a block prefix array, a relative overlay array, and a relative prefix array. Although the double relative prefix sum method improves the update cost, it increases the query time. In this thesis, we present a method, called the recursive relative prefix sum method, which provides a compromise between query and update cost. In the recursive relative prefix sum method with k levels, we use a relative prefix array and k relative overlay arrays. Our performance study shows that the update cost of our method is always less than that of the prefix sum method. In most cases, the update cost of our method is less than that of the relative prefix sum method, and the query cost of our method is less than that of the double relative prefix sum method. Compared with the dynamic data cube method, our method has lower storage cost and shorter query time. Consequently, our recursive relative prefix sum method has a reasonable response time for ad hoc range queries on the data cube while greatly reducing the update cost. In some applications, however, updates in some regions may happen more frequently than in others. We also provide a solution for this situation, called the weighted relative prefix sum method, which likewise provides a compromise between range sum query cost and update cost when the update probabilities of different regions are considered.
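The recursive relative prefix sum structure itself is not detailed in the abstract, but the baseline it improves on is easy to show. A minimal sketch of the basic prefix sum method on a 2-D data cube: range sums come out in constant time by inclusion-exclusion, while a single cell update forces a large fraction of the prefix array to be recomputed, which is exactly the update cost the thesis attacks.

```python
import numpy as np

def build_prefix(cube):
    """P[i, j] = sum of cube[0..i, 0..j] -- the auxiliary array of the
    basic prefix sum method. Updating one cube cell invalidates every
    prefix cell below and to the right of it, hence the update cost."""
    return cube.cumsum(axis=0).cumsum(axis=1)

def range_sum(P, r1, r2, c1, c2):
    """Answer sum(cube[r1..r2, c1..c2]) in O(1) by inclusion-exclusion."""
    s = P[r2, c2]
    if r1 > 0: s -= P[r1 - 1, c2]
    if c1 > 0: s -= P[r2, c1 - 1]
    if r1 > 0 and c1 > 0: s += P[r1 - 1, c1 - 1]
    return s

cube = np.arange(16).reshape(4, 4)
P = build_prefix(cube)
assert range_sum(P, 1, 2, 1, 3) == cube[1:3, 1:4].sum()
```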
455

The Hazard Analysis of Leaking Flammable Gas

CHEN, CHIH-HAU 09 July 2002 (has links)
With the rapid development of diverse industrial production patterns, many kinds of chemicals are used, and these chemicals often carry dangers of fire, explosion, and harm to people. In the past decade in Taiwan, such chemicals caused many serious industrial disasters, not only in conventional industries but also in the semiconductor and chemical industries, and most of them arose from leaks of flammable or toxic gases. When a fire breaks out, explosions may follow, generally bringing heat radiation, blast pressure, and energy release; these harm workers and the environment and cause great losses to the factories. To prevent such disasters, beyond improving protection and safety equipment, it is more important to understand how to reduce them effectively. If gas pipes pass through a densely populated area, a leak can create fatal danger governed by the mixing of the diffusing gas with air, the flow of the mixture, the concentration distribution of CH4, and the temperature distribution, or by explosion; toxic substances may then be released, with consequences beyond imagination. Research on gas leaking and gas diffusion is therefore important. If gas-leak simulation is applied to the analysis of flammable gas leakage and diffusion, it becomes much more feasible to protect workers from harm, maintain public safety, and reduce losses to society. This thesis focuses on building various hazard patterns of gas leaks, gas explosions, chemicals, and so on, from which the initial conditions and degrees of danger are revealed. Numerical simulation is used to analyze the density, pressure, and velocity of the leaking gas and their distributions in the flow field. The main analysis concerns the effects of the parameters, the display of the concentration distribution, and the hazard range.
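The thesis relies on full numerical simulation of the flow field; as a much cruder, textbook-style contrast, the sketch below screens a flammable hazard range with a steady-state Gaussian plume model (a simpler model than the one used in the thesis) by comparing the downwind CH4 concentration against its lower flammability limit. The emission rate, wind speed, and dispersion coefficients are all hypothetical.

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sy, sz):
    """Steady-state Gaussian plume concentration (kg/m^3) at crosswind
    offset y and height z, for emission rate Q (kg/s), wind speed u (m/s),
    and release height H (m); ground reflection is included."""
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))

LFL = 0.033   # lower flammability limit of CH4, ~5 vol% ~ 0.033 kg/m^3
for x in (5, 10, 50, 100):                  # downwind distances in metres
    sy, sz = 0.22 * x, 0.20 * x             # rough dispersion coefficients
    c = plume_concentration(Q=1.0, u=3.0, y=0.0, z=2.0, H=2.0, sy=sy, sz=sz)
    print(f"x = {x:3d} m  c = {c:.4f} kg/m^3  "
          f"{'FLAMMABLE' if c > LFL else 'below LFL'}")
```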
456

Time lapse HDR: time lapse photography with high dynamic range images

Clark, Brian Sean 29 August 2005 (has links)
In this thesis, I present an approach to a pipeline for time lapse photography using conventional digital images converted to HDR (High Dynamic Range) images (rather than conventional digital or film exposures). Using this method, it is possible to capture a greater level of detail and a different look than one would get from a conventional time lapse image sequence. With HDR images properly tone-mapped for display on standard devices, information in shadows and hot spots is not lost, and certain details are enhanced.
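The abstract says the HDR frames are "properly tone-mapped for display on standard devices" without naming the operator; the global Reinhard operator below is one common choice and is offered only as a sketch of that step, not as the pipeline actually used in the thesis.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard operator: compress linear HDR radiance into [0, 1)
    so a frame can be shown on a standard display. `hdr` is (H, W, 3)."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance
    scaled = key * lum / log_avg                   # expose for mid-grey
    mapped = scaled / (1.0 + scaled)               # compress highlights
    ratio = mapped / (lum + eps)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

# One frame of a time lapse sequence: tone-map, then quantize for display.
frame = np.random.rand(480, 640, 3) * 1000.0       # synthetic HDR radiance
ldr = (reinhard_tonemap(frame) * 255).astype(np.uint8)
```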
457

A web-based approach to image-based lighting using high dynamic range images and QuickTime object virtual reality

Cuellar, Tamara Melissa 10 October 2008 (has links)
This thesis presents a web-based approach to lighting three-dimensional geometry in a virtual scene. The use of High Dynamic Range (HDR) images for the lighting model makes it possible to convey a greater sense of photorealism than can be provided with a conventional computer generated three-point lighting setup. The use of QuickTime™ Object Virtual Reality to display the three-dimensional geometry offers a sophisticated user experience and a convenient method for viewing virtual objects over the web. With this work, I generate original High Dynamic Range images for the purpose of image-based lighting and use the QuickTime™ Object Virtual Reality framework to creatively alter the paradigm of object VR for use in object lighting. The result is two scenarios: one that allows for the virtual manipulation of an object within a lit scene, and another with the virtual manipulation of light around a static object. Future work might include the animation of High Dynamic Range image-based lighting, with emphasis on such features as depth of field and glare generation.
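The abstract gives no implementation details, so the following minimal sketch shows only the core idea of image-based lighting: estimating diffuse irradiance at a surface point by Monte Carlo sampling an HDR environment map. The latitude-longitude map layout, y-up convention, and sample count are assumptions for illustration.

```python
import numpy as np

def diffuse_ibl(env_map, normal, n_samples=1024, seed=0):
    """Monte Carlo estimate of diffuse irradiance from an HDR
    latitude-longitude environment map (y is treated as 'up')."""
    rng = np.random.default_rng(seed)
    h, w, _ = env_map.shape
    total = np.zeros(3)
    for _ in range(n_samples):
        d = rng.standard_normal(3)
        d /= np.linalg.norm(d)                  # uniform direction on sphere
        cos_t = d @ normal
        if cos_t <= 0:
            continue                            # below the surface
        theta = np.arccos(np.clip(d[1], -1.0, 1.0))   # polar angle from +y
        phi = np.arctan2(d[2], d[0]) + np.pi          # azimuth in [0, 2*pi]
        px = min(int(phi / (2 * np.pi) * w), w - 1)
        py = min(int(theta / np.pi * h), h - 1)
        total += env_map[py, px] * cos_t
    return total / n_samples * 4 * np.pi        # divide by the 1/(4*pi) pdf

env = np.random.rand(64, 128, 3) * 10.0         # synthetic HDR environment
print(diffuse_ibl(env, normal=np.array([0.0, 1.0, 0.0])))
```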
458

Valuation and hedging of Himalaya option

Shao, Hua-chin 19 September 2007 (has links)
Options have been publicly traded for more than 30 years. Although the European option is still the standard exchange-traded contract, over the years it has ceased to meet all of investors' needs, and so exotic options were born. Pricing models have likewise evolved from traditional closed-form solutions (under the Black-Scholes assumptions) to the now commonly used binomial trees, finite-difference methods, and Monte Carlo simulation. Two factors drive this shift. First, as option contracts grow more complex (from a single asset to multiple assets, from plain vanilla to path-dependent payoffs), closed-form solutions become harder to find. Second, the development of personal computers has made numerical computation routine. The Himalaya option is an exotic option with multi-asset and path-dependent features, so a closed-form solution is very difficult to obtain; moreover, in the multi-asset setting, binomial trees and finite-difference methods become computationally expensive. For these reasons, this paper uses Monte Carlo simulation to price the Himalaya option, together with several variance reduction techniques to reduce the sample variance. Finally, after the pricing is complete, we present a simple study of hedging the option.
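Himalaya payoff conventions vary and the abstract does not pin one down; the sketch below assumes one common variant (at each observation date the best performer's return since inception is locked in and that asset leaves the basket, and the payoff is the average of the locked returns floored at zero), with correlated geometric Brownian motion paths and antithetic variates as a simple variance reduction step. All contract parameters are hypothetical.

```python
import numpy as np

def himalaya_mc(S0, r, sigma, corr, T, n_paths=100_000, seed=0):
    """Monte Carlo price of a Himalaya option on len(S0) assets with
    len(T) observation dates: at each date the best performer (return
    relative to its start) is locked in and removed from the basket.
    Antithetic variates halve the independent draws; n_paths is even."""
    rng = np.random.default_rng(seed)
    n = len(S0)
    L = np.linalg.cholesky(corr)                # correlate the shocks
    dts = np.diff(np.concatenate(([0.0], T)))
    payoffs = np.zeros(n_paths)
    for p in range(n_paths // 2):
        Z = rng.standard_normal((len(T), n))
        for z in (Z, -Z):                       # antithetic pair
            S, alive, locked = np.array(S0, float), list(range(n)), []
            for k, dt in enumerate(dts):
                dW = (L @ z[k]) * np.sqrt(dt)
                S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * dW)
                best_ret, best_i = max((S[i] / S0[i] - 1.0, i) for i in alive)
                locked.append(best_ret)
                alive.remove(best_i)            # winner leaves the basket
            payoffs[2 * p if z is Z else 2 * p + 1] = max(np.mean(locked), 0.0)
    return np.exp(-r * T[-1]) * payoffs.mean()

# Hypothetical 3-asset contract with yearly observations over 3 years.
corr = np.array([[1.0, 0.3, 0.3], [0.3, 1.0, 0.3], [0.3, 0.3, 1.0]])
print(himalaya_mc(S0=[100.0, 100.0, 100.0], r=0.03,
                  sigma=np.array([0.20, 0.25, 0.30]), corr=corr,
                  T=np.array([1.0, 2.0, 3.0]), n_paths=20_000))
```

Antithetic variates pair each path with its sign-flipped noise, which typically cuts the sample variance at negligible cost; the abstract mentions several such techniques without naming them.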
459

The Efficacy of Model-Free and Model-Based Volatility Forecasting: Empirical Evidence in Taiwan

Tzang, Shyh-weir 14 January 2009 (has links)
This dissertation consists of two chapters that examine the construction of financial market volatility indexes and their forecasting efficiency across predictive regression models. Each chapter is devoted to different volatility measures, which are related and evaluated in the framework of forecasting regressions. The first chapter studies the sampling and liquidity issues in constructing volatility indexes, VIX and VXO, in an emerging options market like Taiwan. VXO and VIX have been widely used to measure the 22-day forward volatility of the market. However, in an emerging market, VXO and VIX are difficult to measure accurately when trading in the second-nearby and further-out options is illiquid. The chapter proposes four methods to sample the option prices across liquidity proxies (five different rollover-day rules for option trades) to construct volatility index series. The paper finds that, based on the sampling method of the average of all midpoints of bid and ask quoted option prices, the volatility indexes constructed from minute-tick data have fewer missing data points and are more efficient in volatility forecasting than the method suggested by the CBOE. Additionally, illiquidity in an emerging options market does not, across the different rollover rules, lead to substantial biases in the forecasting effectiveness of the volatility indexes. Finally, the forecasting ability of VIX, in terms of naive forecasts and forecasting regressions, is superior to that of VXO in Taiwan. The second chapter uses high-frequency intraday volatility as a benchmark to measure the efficacy of model-free and model-based econometric models. The realized volatility computed from intraday data has been widely regarded as a more accurate proxy for market volatility than squared daily returns. The chapter adopts several time series models to assess the forecasting efficiency of future realized volatility in the Taiwan stock market. The paper finds that, for 1-day directional accuracy, the semiparametric fractional autoregressive model (SEMIFAR; Beran and Ocker, 2001) ranks highest with 78.52% hit accuracy, followed by the multiplicative error model (MEM; Engle, 2002) and the augmented GJR-GARCH model. For 1-day forecasting errors evaluated by root mean squared error (RMSE), the GJR-GARCH model augmented with high-low range volatility ranks highest, followed by the SEMIFAR and MEM models, both of which, however, outperform the augmented GJR-GARCH model by the measures of mean absolute error (MAE) and p-statistics (Blair et al., 2001).
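As a rough illustration of the second chapter's setup, the sketch below computes the realized-volatility benchmark from intraday log returns and scores forecasts by RMSE, MAE, and a directional hit rate; the sampling frequency, hit definition, and loss functions actually used in the dissertation may differ.

```python
import numpy as np

def realized_vol(intraday_prices):
    """Daily realized volatility: square root of the sum of squared
    intraday log returns -- the benchmark the forecasts are scored on."""
    r = np.diff(np.log(intraday_prices))
    return np.sqrt(np.sum(r**2))

def rmse(forecast, actual):
    return np.sqrt(np.mean((np.asarray(forecast) - np.asarray(actual))**2))

def mae(forecast, actual):
    return np.mean(np.abs(np.asarray(forecast) - np.asarray(actual)))

def hit_rate(forecast, actual):
    """1-day directional accuracy: fraction of days on which the forecast
    and the realized series move in the same direction."""
    return np.mean(np.sign(np.diff(forecast)) == np.sign(np.diff(actual)))

# Synthetic check: score a naive lag-1 forecast against realized vol.
rng = np.random.default_rng(1)
actual = np.abs(rng.normal(0.01, 0.003, 100))
naive = np.concatenate(([actual[0]], actual[:-1]))
print(rmse(naive, actual), mae(naive, actual), hit_rate(naive, actual))
```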
460

Studies on N-Heterocyclic Compounds

Armugam, S 03 1900 (has links)
The thesis entitled "Studies on N-Heterocyclic Compounds: (a) Reaction of 5,6,7,8-Tetrahydroisoquinolines with Vilsmeier Reagent and (b) Amide Induced in situ Alkylation of 5,6-Dihydroisoquinolines" is presented in two parts. Part I involves a study of the Vilsmeier reaction of 4-cyano-1,3-dihydroxy-5,6,7,8-tetrahydroisoquinoline derivatives, while Part II concerns the in situ alkylation of 1-alkyl-4-cyano-3-methoxy-5,6-dihydroisoquinolines in the presence of KNH2/liq. NH3.
