161

Blind Adaptive MIMO-CDMA Receiver with Constant Modulus Criterion in Multipath Channels

Chao, Po-sun 23 July 2008
In recent years, demand for all kinds of wireless communication has grown with the development of new services and devices, and future wireless networks are expected to deliver both high quality and high data rates. One promising way to attain these objectives is a wireless communication system that combines multiple-input multiple-output (MIMO) antennas with Alamouti's space-time block code and direct-sequence code division multiple access (DS-CDMA) modulation. In such systems, the spatial diversity provided by multiple antennas, together with coding across the spatial and time domains, is the key to improving transmission quality. Many multiuser detection techniques for space-time block coded CDMA systems have been investigated. In [8], the blind Capon receiver was proposed, consisting of a two-branch filterbank followed by a blind Capon channel estimator. Its design is based on the linearly constrained minimum variance (LCMV) criterion, which is known to be sensitive to inaccuracies in acquiring or tracking the desired user's timing, referred to as the mismatch effect; an LCMV-based receiver may therefore perform poorly under mismatch. In this thesis, we propose a new blind adaptive MIMO-CDMA receiver based on the linearly constrained constant modulus (LCCM) criterion, motivated by the robustness of the LCCM approach to the mismatch effect. To reduce the complexity of the receiver design, the generalized sidelobe canceller (GSC) framework combined with the recursive least squares (RLS) algorithm is adopted to implement the adaptive LCCM MIMO-CDMA filterbank. Based on this GSC-RLS structure, we derive the proposed MIMO CM-GSC-RLS algorithm. For comparison, an adaptive implementation of the blind Capon receiver of [8], referred to as the MIMO MV-GSC-RLS algorithm, is also derived.
We note that the signal model in [8] was constructed under the assumption of frequency-flat channels. To obtain a more practical and realistic signal model, this thesis first extends the system and channel model to include multipath effects. In completing this extension, the inter-symbol interference (ISI) caused by the special coding scheme of ST-BC is analyzed in detail. A full discussion of the multipath signal model is then provided, including the necessity of truncating the received signals and the modifications required when considering time-varying channels. Computer simulations verify the advantages of the proposed scheme: the CM-GSC-RLS algorithm outperforms the conventional blind Capon receiver, especially when the mismatch problem is present in the MIMO-CDMA systems of interest, and shows greater robustness against mismatch effects. The benefit of truncating the received signals is also demonstrated, especially for binary phase-shift-keying (BPSK) modulated source symbols. Finally, simulations with time-varying channels show that the proposed scheme adapts appropriately to time-varying environments.
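The thesis's CM-GSC-RLS algorithm is not reproduced here, but the constant modulus principle it builds on can be sketched with the classic stochastic-gradient constant modulus algorithm (CMA) for blind equalization; the channel taps, step size, and filter length below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a unit-modulus QPSK source passed through a 3-tap channel plus noise.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=5000) / np.sqrt(2)
channel = np.array([1.0, 0.4 + 0.2j, 0.1])
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.01 * (rng.standard_normal(len(symbols)) + 1j * rng.standard_normal(len(symbols)))

# Blind CMA equalizer: penalize deviation of |output|^2 from the
# constant modulus R2 = E|s|^4 / E|s|^2 (R2 = 1 for unit-modulus QPSK).
n_taps, mu, R2 = 7, 5e-3, 1.0
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                     # center-spike initialization

for n in range(n_taps, len(received)):
    x = received[n - n_taps:n][::-1]     # regressor (most recent sample first)
    y = np.vdot(w, x)                    # equalizer output w^H x
    e = (np.abs(y) ** 2 - R2) * y        # constant modulus error term
    w -= mu * e.conjugate() * x          # stochastic-gradient update

# After adaptation the output modulus should hover near sqrt(R2).
out = np.array([np.vdot(w, received[n - n_taps:n][::-1])
                for n in range(len(received) - 500, len(received))])
print(round(float(np.mean(np.abs(out))), 2))
```

The constant modulus cost needs no training sequence, which is the blind property the LCCM receiver exploits; the GSC-RLS structure in the thesis replaces this simple gradient step with a constrained recursive least squares update.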
162

Automatic Construction of Integrated Testing Model from PIM: Methodology and Prototype

Chen, Ying-ju 17 July 2009
The main goal of model testing is to find system errors at the systems analysis and design stage, thereby reducing the cost of software testing and enhancing the efficiency of system development. Prior research proposed an Integrated Method that utilizes artifacts from the Platform Independent Model (PIM) to construct test paths and constrained class tuples (CCTs). These can then be integrated into an Integrated Testing Model (ITM) for determining the test data and test cases used in model-based testing. This study develops a methodology that extracts information from the sequence diagrams and class diagrams of the PIM to construct an ITM. The methodology is articulated using the design science research methodology, and a prototype embedding it has been developed. A usability evaluation of the prototype demonstrates its usability in terms of efficiency and effectiveness. With this methodology, the test paths, CCTs, and ITM can be generated automatically, reducing the cost and increasing the efficiency of model-based testing.
163

Novel Blind ST-BC MIMO-CDMA Receiver with Adaptive Constant Modulus-GSC-RLS Algorithm in Multipath Channel

Cheng, Ming-Kai 18 August 2009
In this thesis, we present a new hybrid precoded direct-sequence code division multiple access (DS-CDMA) system framework that uses multiple-input multiple-output (MIMO) antennas together with Alamouti's space-time block code (ST-BC). At the transmitter, hybrid precoding is exploited: it not only counteracts the inter-symbol interference (ISI) introduced by channel fading due to multipath propagation, but with appropriate design is also very useful for extracting the phase of the channel, which the conventional blind receiver does not do. Under this structure, we propose a new blind adaptive MIMO-CDMA receiver based on the linearly constrained constant modulus (LCCM) criterion. To reduce the complexity of the receiver design, the generalized sidelobe canceller (GSC) framework combined with the recursive least squares (RLS) algorithm is adopted to implement the LCCM MIMO-CDMA receiver, while a gradient method simultaneously tracks the desired user's amplitude. Computer simulations verify the advantages of the proposed scheme: compared to the conventional blind Capon receiver, it is more robust against inaccuracies in the acquisition of the desired user's timing.
164

Domain knowledge, uncertainty, and parameter constraints

Mao, Yi 24 August 2010
No description available.
165

Semiparametric estimation of unimodal distributions / by Jason K. Looper.

Looper, Jason K. January 2003
Thesis (M.S.)--University of South Florida, 2003. Includes bibliographical references.
One often wishes to understand the probability distribution of stochastic data from experiments or computer simulations. Where no model is given, practitioners must resort to parametric or nonparametric methods to gain information about the underlying distribution. Others have first used a nonparametric estimator to understand the underlying shape of a data set and then returned with a parametric method to locate the peaks; however, they were interested in estimating spectra, which may have multiple peaks, whereas this work approximates the peak position of a single-peak probability distribution. One way to analyze a distribution of data is to fit a curve to, or smooth, the data; polynomial regression and least-squares fitting are examples of smoothing methods. Depending on the degree of smoothing, the underlying distribution can be obscured, so problems such as under- and oversmoothing must be addressed in order to determine its shape; furthermore, smoothing skewed data can bias the estimate of the peak position. We propose two new approaches to statistical mode estimation based on the assumption that the underlying distribution has only one peak. The first method imposes the global constraint of unimodality locally, by requiring negative curvature over some domain. The second performs a search that assumes a position for the distribution's peak and requires positive slope to the left and negative slope to the right. Each approach entails a constrained least-squares fit to the raw cumulative probability distribution. We compare the relative efficiencies [12] of these two estimators in finding the peak location on artificially generated data from the Weibull, beta, and gamma families of distributions; within each family, a parameter controls the skewness or kurtosis, quantifying the shapes of the distributions for comparison. We also compare our methods with other estimators such as the kernel-density estimator, adaptive histogram, and polynomial regression. We find that our estimators do not perform better than other known estimators and that they are biased; overall, an adaptation of kernel estimation proved the most efficient. The results of this thesis will be submitted, in a different form, for publication by D.A. Rabson and J.K. Looper.
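As a hedged aside, the kernel-density approach that the abstract reports as most efficient can be sketched as follows; the Gaussian kernel, Silverman bandwidth rule, and gamma test family are illustrative assumptions, not the thesis's exact estimator:

```python
import numpy as np

def kde_mode(samples, grid_size=512):
    """Estimate the mode of a unimodal sample via a Gaussian kernel
    density estimate evaluated on a grid (Silverman's rule bandwidth)."""
    x = np.asarray(samples, dtype=float)
    h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)   # Silverman's rule of thumb
    grid = np.linspace(x.min(), x.max(), grid_size)
    # Density at each grid point: average of Gaussian kernels centered on samples.
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).mean(axis=1)
    dens /= h * np.sqrt(2 * np.pi)
    return grid[np.argmax(dens)]

rng = np.random.default_rng(1)
# Skewed gamma(shape=5, scale=1) sample: the true mode is (shape - 1) * scale = 4.
data = rng.gamma(shape=5.0, size=10000)
print(round(float(kde_mode(data)), 1))
```

Oversmoothing (large bandwidth) drags the mode estimate toward the mean of a skewed sample, which is exactly the bias problem the abstract raises.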
166

Map-Aided GPS Tracking in Urban Areas : Application to Runner Tracking in Sprint Orienteering / Kartstödd GPS-tracking i Urbana Områden

Hallmén, Mathias January 2015
GPS tracking in sprint orienteering is often a poor supplement to the viewer experience during events in urban areas because of multipath effects. Since GPS tracking of runners is an important means of making the sport more spectator friendly, it is of interest to make it more accurate. In this thesis project, the information provided by the competition map is fused with GPS tracker position measurements and punch-time data in a particle filter to estimate the runner trajectories. The map is used to create constraints and to predict runner motion, as well as to build a model of GPS reliability as a function of map position. A simple observation model uses the map to decide whether a GPS measurement is reliable based on the distance to the closest building, while a rather complex motion model predicts runner motion within the constraints given by the map. The results show that, under certain conditions, the improvements over traditional GPS tracking are vast: the estimates are bound to possible routes, and they are often very good when alternative route choices are easily separable. It remains difficult, however, to improve the tracking in general with this method; better measurements or observation models are needed for fully satisfactory tracking.
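A minimal sketch of the map-constrained particle-filter idea, assuming a toy map with a single building, a constant-velocity motion model, and Gaussian GPS noise (all illustrative and far simpler than the thesis's models):

```python
import numpy as np

rng = np.random.default_rng(2)

def inside_building(pos):
    """Hypothetical map query: True where a position lies inside the
    single axis-aligned building of this toy map."""
    return (2.0 <= pos[..., 0]) & (pos[..., 0] <= 4.0) & \
           (0.0 <= pos[..., 1]) & (pos[..., 1] <= 3.0)

# Bootstrap particle filter for a runner moving east at ~1 m/s.
n_particles = 2000
particles = rng.normal([0.0, 5.0], 0.5, size=(n_particles, 2))
weights = np.full(n_particles, 1.0 / n_particles)
true_pos = np.array([0.0, 5.0])

for step in range(20):
    true_pos = true_pos + np.array([1.0, 0.0])
    # Predict: constant-velocity motion plus process noise.
    particles += np.array([1.0, 0.0]) + rng.normal(0.0, 0.3, size=particles.shape)
    # Map constraint: runners cannot be inside buildings.
    weights[inside_building(particles)] = 0.0
    # Update with a noisy GPS fix (measurement noise std 1.0 m assumed).
    z = true_pos + rng.normal(0.0, 1.0, size=2)
    weights *= np.exp(-0.5 * np.sum((particles - z) ** 2, axis=1) / 1.0 ** 2)
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    positions = (np.arange(n_particles) + rng.random()) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

estimate = particles.mean(axis=0)
print(np.round(estimate, 1))
```

Zeroing the weight of any particle inside a building is the crudest form of the map constraint; the thesis additionally uses the map to modulate GPS reliability and to predict plausible runner motion.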
167

Efficient specification-based testing using incremental techniques

Uzuncaova, Engin 10 October 2012
As software systems grow in complexity, the need for efficient automated techniques for design, testing and verification becomes more and more critical. Specification-based testing provides an effective approach for checking the correctness of software in general. Constraint-based analysis using specifications enables checking various rich properties by automating generation of test inputs. However, as specifications get more complex, existing analyses face a scalability problem due to state explosion. This dissertation introduces a novel approach to analyze declarative specifications incrementally; presents a constraint prioritization and partitioning methodology to enable efficient incremental analyses; defines a suite of optimizations to improve the analyses further; introduces a novel approach for testing software product lines; and provides an experimental evaluation that shows the feasibility and scalability of the approach. The key insight behind the incremental technique is declarative slicing, which is a new class of optimizations. The optimizations are inspired by traditional program slicing for imperative languages but are applicable to analyzable declarative languages, in general, and Alloy, in particular. We introduce a novel algorithm for slicing declarative models. Given an Alloy model, our fully automatic tool, Kato, partitions the model into a base slice and a derived slice using constraint prioritization. As opposed to the conventional use of the Alloy Analyzer, where models are analyzed as a whole, we perform analysis incrementally, i.e., using several steps. A satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model. We show how our incremental technique enables different analysis tools and solvers to be used in synergy to further optimize our approach. 
Compared to the conventional use of the Alloy Analyzer, this yields even greater overall performance improvements for solving declarative models. Incremental analyses have a natural application in the software product line domain. A product line is a family of programs built from features that are increments in program functionality. Given properties of features as first-order logic formulas, we automatically generate test inputs for each product in a product line. We show how to map a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experiments using different data structure product lines show that our approach can provide an order-of-magnitude speed-up over conventional techniques.
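The base-slice/derived-slice analysis can be caricatured with a toy brute-force constraint solver (Alloy and Kato are not involved here; the variables, domains, and constraints are invented for illustration):

```python
from itertools import product

# Toy "declarative model" over variables with small finite domains.
domains = {"x": range(4), "y": range(4), "z": range(4)}

# Constraints partitioned by priority: the base slice is solved first;
# derived constraints only extend a base solution.
base = [lambda s: s["x"] < s["y"]]                 # mentions x, y only
derived = [lambda s: s["z"] == s["x"] + s["y"]]    # additionally involves z

def solve(constraints, variables, partial=None):
    """Brute-force search for an assignment satisfying all constraints,
    extending a fixed partial assignment if one is given."""
    partial = partial or {}
    free = [v for v in variables if v not in partial]
    for values in product(*(domains[v] for v in free)):
        candidate = {**partial, **dict(zip(free, values))}
        if all(c(candidate) for c in constraints):
            return candidate
    return None

# Incremental analysis: solve the base slice, then extend the solution.
# If the base slice is unsatisfiable, the whole model is unsatisfiable.
base_solution = solve(base, ["x", "y"])
full_solution = solve(base + derived, ["x", "y", "z"], partial=base_solution)
print(full_solution)
```

Note that extending one particular base solution can fail even when the full model is satisfiable under a different base solution; the constraint prioritization described in the abstract is what makes the extension systematic in the actual technique.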
168

Discrete gate sizing and threshold voltage assignment to optimize power under performance constraints

Singh, Jagmohan August 2013
In today's world, it is increasingly important to design high-performance integrated circuits (ICs) that run at as low a power as possible. Gate sizing and threshold voltage (Vt) assignment are among the major contributors to such power-performance trade-offs. In fact, ever-increasing design sizes and more aggressive timing requirements make gate sizing and Vt assignment one of the most important CAD problems in physical synthesis. A promising gate sizing algorithm must be scalable to very large designs, able to optimally utilize the large (but finite) number of gate configurations available in a standard cell library (differing in gate size and/or threshold voltage (Vt) and/or gate length (Lg)), and able to handle the non-convex cell delays of modern cell libraries. The work in this thesis uses the research infrastructure made available through the ISPD (International Symposium on Physical Design) 2012 Gate Sizing Contest, which addresses the issues encountered in modern gate sizing problems. We present a two-phase optimization approach in which the problem is formulated using Lagrangian relaxation. In the first phase, the Lagrangian-relaxed subproblem is solved iteratively with a greedy algorithm; in the second phase, a cell-downsizing and Vt-upscaling heuristic recovers further power from the timing-feasible, power-optimized sizing solution obtained at the end of the first phase. We also propose a multi-core implementation of the first-phase optimizations, which constitute the majority of the total runtime, to take advantage of today's multi-core processors. A speedup of 4 to 9 times over the serial implementation is observed on different benchmarks when run on a 2-socket, 6-core machine.
Compared to the winner of the ISPD 2012 contest, we further reduce leakage power by 17.21% and runtime by 87.92% on average, while obtaining feasible sizing solutions on all the benchmark designs.
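A hedged sketch of the Lagrangian relaxation idea: once the timing constraint is priced into the objective by a multiplier, the relaxed subproblem decomposes so that each gate can be sized greedily, and a subgradient step adjusts the multiplier. The three-gate chain and its (power, delay) menus are invented toy data, not contest benchmarks:

```python
# Each gate has a menu of (power, delay) configurations; a chain of gates
# must meet a total delay budget T while minimizing total power.
configs = [
    [(4.0, 1.0), (2.0, 2.0), (1.0, 3.5)],   # gate 0: big / medium / small
    [(5.0, 0.8), (2.5, 1.6), (1.2, 3.0)],   # gate 1
    [(3.0, 1.2), (1.5, 2.4), (0.8, 4.0)],   # gate 2
]
T = 6.0            # delay budget for the whole chain
lam, step = 0.0, 0.5

for _ in range(100):
    # Relaxed subproblem: with the timing constraint priced into the
    # objective by lam, each gate can be sized independently (greedily).
    choice = [min(menu, key=lambda pd: pd[0] + lam * pd[1]) for menu in configs]
    total_delay = sum(d for _, d in choice)
    # Subgradient update of the multiplier: raise lam if timing is
    # violated, lower it (never below 0) if there is slack.
    lam = max(0.0, lam + step * (total_delay - T))

total_power = sum(p for p, _ in choice)
print(total_power, sum(d for _, d in choice))
```

On this toy instance the multiplier settles where the chain exactly meets its delay budget at the lowest power that allows it; the thesis's second phase would then try further downsizing and Vt upscaling on any remaining slack.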
169

Location-based social networking data : doubly-constrained gravity model origin-destination estimation of the urban travel demand for Austin, TX

Cebelak, Meredith Kimberly 20 November 2013
Populations and land development can shift, as economies change, at a rate faster than regional transportation plans are currently updated. This thesis uses Foursquare location-based social networking check-in data to analyze the origin-destination travel demand for Austin, Texas. A doubly-constrained gravity model is employed to create an origin-destination model. To validate the methodology, this model is compared with a singly-constrained gravity model and with the origin-destination matrices of the Capital Area Metropolitan Planning Organization's 2010 Urban Transportation Study, using trip-length distributions, zonal origin-destination flow patterns, and zonal trip generation and attraction heat maps.
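A doubly-constrained gravity model can be balanced with the standard Furness (iterative proportional fitting) procedure sketched below; the zone totals, cost matrix, and exponential deterrence function are illustrative assumptions, not Austin data:

```python
import numpy as np

def doubly_constrained_gravity(origins, destinations, cost, beta=0.1, iters=200):
    """Balance a gravity-model trip matrix T_ij = A_i O_i B_j D_j f(c_ij)
    so its row sums match the origin totals O and its column sums match
    the destination totals D (Furness / iterative proportional fitting)."""
    f = np.exp(-beta * cost)              # deterrence function (assumed exponential)
    B = np.ones(len(destinations))
    for _ in range(iters):
        A = 1.0 / (f * (B * destinations)).sum(axis=1)      # balance rows
        B = 1.0 / (f.T * (A * origins)).sum(axis=1)         # balance columns
    return (A * origins)[:, None] * (B * destinations)[None, :] * f

origins = np.array([100.0, 200.0, 50.0])       # trips produced per zone
destinations = np.array([150.0, 120.0, 80.0])  # trips attracted per zone
cost = np.array([[1.0, 4.0, 6.0],
                 [4.0, 1.0, 3.0],
                 [6.0, 3.0, 1.0]])             # zone-to-zone travel cost

T = doubly_constrained_gravity(origins, destinations, cost)
print(np.round(T.sum(axis=1)))   # row sums recover the origin totals
print(np.round(T.sum(axis=0)))   # column sums recover the destination totals
```

The double constraint is what distinguishes this from the singly-constrained model the thesis compares against, which fixes only one margin (productions or attractions) of the trip matrix.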
170

Design and synthesis of core structural intermediates for novel HIV-1 protease inhibitors & synthesis, biological activity and molecular modeling of novel 20S proteasome inhibitors

Avancha, Kiran Kumar Venkata Raja 01 June 2006
HIV-1 protease binds its peptide and protein substrates in extended conformations; protease inhibitors constrained to adopt extended conformations are therefore likely to be very active, because they are pre-organized to form favorable interactions with the enzyme environment immediately surrounding the active site. With this hypothesis in mind, we designed a family of structurally related molecules containing dipeptide analogs constrained to adopt the extended conformation. Core structural intermediates required for the total synthesis of this novel class of HIV-1 protease inhibitors are outlined in Chapter One. Chapter Two discusses the enantioselective synthesis of 2-alkyl-3-nitropropionates (NPAs), which form part A of the cyclic urea molecule 8 and can also serve as building blocks for the synthesis of unnatural beta-amino acids. In this project, we successfully achieved a novel enantioselective route to the NPAs and determined the absolute stereochemistry of one of them by solving its crystal structure. Chapter Three explores various routes to the substituted, orthogonally protected geminal diamino acids (OPGDAs), and also discusses the synthesis of a versatile N-Boc transfer reagent and its applications in the synthesis of alpha-helix mimics. The outcomes of this project were an efficient synthesis of oxaziridine (104) and methods demonstrating why the substituted OPGDAs could not be made as attempted, which can serve as guidance for future research. The proteasome is the cellular machinery responsible for breaking down complex proteins not required by a living cell; inhibiting its activity in cancerous cells can promote apoptosis.
Chapter Four discusses the synthesis of a new class of 20S proteasome inhibitors, their biological testing, and lead optimization by molecular modeling, library synthesis, and biological evaluation. In short, this project achieves our goal of synthesizing a novel class of 20S proteasome inhibitors with the potential to act as drug molecules in the future.
