101

Iterative methods and analytic models for queueing and manufacturing systems. / CUHK electronic theses & dissertations collection

January 1998 (has links)
by Wai Ki Ching. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (p. 82-87). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
102

From Model-Based to Data-Driven Discrete-Time Iterative Learning Control

Song, Bing January 2019 (has links)
This dissertation presents a series of new results in iterative learning control (ILC), progressing from model-based to data-driven ILC algorithms. ILC is a trial-and-error approach that learns, through repeated executions in practice, to follow a pre-defined finite-time maneuver with high tracking accuracy. Mathematically, ILC constructs a contraction mapping between the tracking errors of successive iterations and aims to converge to a tracking accuracy approaching the reproducibility level of the hardware. It produces feedforward commands based on measurements from previous iterations to eliminate tracking errors caused by the bandwidth limitation of the feedback controller, transient responses, model inaccuracies, unknown repeating disturbances, etc. Generally, ILC uses an a priori model to form the contraction mapping that guarantees monotonic decay of the tracking error. However, un-modeled high-frequency dynamics may destabilize the control system. Existing infinite impulse response filtering techniques that stop the learning at such frequencies have initial-condition issues that can cause an otherwise stable ILC law to become unstable. A circulant form of zero-phase filtering for finite-time trajectories is proposed here to avoid such issues. This work addresses the possible lack of stability robustness when ILC uses an imperfect a priori model. Besides the computation of feedforward commands, measurements from previous iterations can also be used to update the dynamic model; in other words, as the learning progresses, the model itself is developed iteratively from data. This leads to adaptive ILC methods. An indirect adaptive linear ILC method to speed up the desired maneuver is presented here. The updates of the system model are realized by embedding an observer in ILC to estimate the system Markov parameters. This method can be used to increase productivity or to produce high tracking accuracy when the desired trajectory is too fast for feedback control to be effective. For nonlinear ILC, data is used to update a progression of models along a homotopy, i.e., the ILC method presented in this thesis uses data to repeatedly create bilinear models in a homotopy approaching the desired trajectory. The improvement here makes use of Carleman bilinearized models to capture more of the nonlinear dynamics, with the potential for faster convergence compared to existing methods based on linearized models. The last work presented here uses model-free reinforcement learning (RL) to eliminate the need for an a priori model. It is analogous to direct adaptive control, using data to directly produce the gains in the ILC law without use of a model. An off-policy RL method is first developed by extending a model-free model predictive control method and then applied in the trial domain for ILC. Adjustments of the ILC learning law and the RL recursion equation for state-value function updates allow enough data to be collected while improving the tracking accuracy without major safety concerns. This algorithm can be seen as a first step toward bridging ILC and RL for nonlinear systems.
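The contraction-mapping update described in this abstract can be made concrete with a short sketch. The following Python example is not from the dissertation; the first-order plant, desired trajectory, and learning gain are illustrative assumptions. It lifts a discrete-time system into its matrix of Markov parameters and applies the model-based ILC law u_{j+1} = u_j + L e_j, so that the tracking error contracts from trial to trial.

```python
import numpy as np

# Minimal model-based ILC sketch on a hypothetical first-order plant
# x[k+1] = a*x[k] + b*u[k], y[k] = c*x[k]; all values are illustrative.
a, b, c = 0.9, 0.5, 1.0
N = 50                                    # finite-time trajectory length
yd = np.sin(np.linspace(0.0, np.pi, N))   # desired finite-time maneuver

# Lifted description y = P u, where P collects the Markov parameters c*a^(k-j)*b.
P = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        P[k, j] = c * a ** (k - j) * b

def simulate(u):
    """Run the plant over one trial for the input sequence u."""
    x, y = 0.0, np.zeros(N)
    for k in range(N):
        x = a * x + b * u[k]
        y[k] = c * x
    return y

u = np.zeros(N)
L = 0.5 * np.linalg.inv(P)                # learning gain built from the a priori model
for trial in range(15):
    e = yd - simulate(u)                  # tracking error measured on this trial
    u = u + L @ e                         # feedforward update for the next trial

print("final RMS tracking error:", np.sqrt(np.mean((yd - simulate(u)) ** 2)))
```

With this gain the error map is e_{j+1} = (I - PL) e_j = 0.5 e_j, a simple instance of the monotonic contraction the abstract refers to.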
103

Agile Development in Instructional Design: A Case Study at BYU Independent Study

Erickson, Alyssa Jean 01 April 2018 (has links)
Agile development is a software development methodology that originated in 2001 (Beck et al.). It has since gained wide recognition and use in the software industry, and is characterized by iterative development cycles. Organizations outside of the software industry are also finding ways to adapt Agile development to their contexts. BYU Independent Study (BYUIS) is an online education program at Brigham Young University that provides online courses at the high school and university levels. In April 2016, BYUIS applied the Agile development process to the design and development of online courses. This thesis is a case study that looks specifically at the adoption of Agile at BYUIS, from its implementation in April 2016 to the time of this study in the summer of 2017. The question this qualitative study seeks to answer is as follows: how and why did the adaptation of the Agile development methodology to instructional design practices at BYUIS reflect or differ from the 12 principles of Agile development? To answer this research question, the researcher used multiple data sources: semi-structured interviews with three administrators, two production team managers, and three instructional designers; surveys of BYUIS student employees (i.e., scrum team members) after each week of observation; and field-note observations of three Agile scrum teams for two weeks each. The data from each of these sources were analyzed through a descriptive coding process and then organized into a thematic network analysis. The Results section analyzes evidence from the interviews, surveys, and observations that reflects or differs from each of the 12 principles of Agile. The Discussion addresses three main issues of implementing Agile at BYUIS: how to accommodate part-time schedules, the complexity of working on different projects, and how to facilitate communication in scrum teams if co-location is not possible. It also looks at how these three issues could manifest in other organizations and introduces potential solutions. The researcher then presents suggestions for future research on Agile in instructional design and other contexts.
104

Iterative reconstruction method for three-dimensional non-Cartesian parallel MRI

Jiang, Xuguang 01 May 2011 (has links)
Parallel magnetic resonance imaging (MRI) with non-Cartesian sampling patterns is a promising technique that increases the scan speed by using multiple receiver coils with reduced samples. However, reconstruction is challenging due to the increased complexity. Three reconstruction methods were evaluated: gridding, blocked uniform resampling (BURS) and non-uniform FFT (NUFFT). Computer simulations of parallel reconstruction were performed, with the root mean square error (RMSE) of the reconstructed images relative to the simulated phantom used as the image quality criterion. The gridding method showed the best RMSE performance. Two types of a priori constraints to reduce noise and artifacts were evaluated: an edge-preserving penalty, which suppresses noise and aliasing artifacts in the image while preventing over-smoothing, and an object-support penalty, which reduces background noise amplification. A trust-region-based step-ratio method that iteratively calculates the penalty coefficient was proposed for the penalty functions. Two methods to alleviate the computational burden were evaluated: a smaller oversampling ratio, and interpolation coefficient matrix compression. The performance of each was tested individually using computer simulations. The edge-preserving and object-support penalties gave consistent improvements in RMSE, and the calculated penalty coefficients performed close to the best RMSE. An oversampling ratio as low as 1.125 was shown to change RMSE by less than one percent for the radial sampling pattern reconstruction, reducing the three-dimensional data requirement to less than 1/5 of what the conventional 2x grid needed. Interpolation matrix compression with a compression ratio of up to 50 percent showed a small impact on RMSE. The proposed method was validated on 25 MR data sets from a GE MR scanner, using six image quality metrics: RMSE, normalized mutual information (NMI) and joint entropy (JE) relative to a reference image from a separate body coil scan were used to verify the fidelity of the reconstruction to the reference, while region-of-interest (ROI) signal-to-noise ratio (SNR), two-data SNR and background noise were used to validate the quality of the reconstruction. The proposed method showed higher ROI SNR and two-data SNR and lower background noise than the conventional method, with comparable RMSE, NMI and JE relative to the reference image, at a reduced computational resource requirement.
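As a rough illustration of the penalized iterative reconstruction idea in this abstract (not the thesis implementation; the toy 1-D operator, penalty weight, and edge-preserving scale below are assumptions), the following sketch minimizes a data-fidelity term plus a hyperbolic edge-preserving roughness penalty by gradient descent:

```python
import numpy as np

# Toy penalized least-squares reconstruction: a random matrix A stands in for the
# non-Cartesian sampling / coil-sensitivity operator; values are illustrative only.
rng = np.random.default_rng(0)
n, m = 64, 48                              # 1-D "image" size and undersampled data size
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[20:40] = 1.0  # piecewise-constant phantom
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam, delta = 0.05, 0.1                     # penalty weight and edge-preserving scale (assumed)

def edge_preserving_grad(x):
    """Gradient of a hyperbolic (Huber-like) penalty on neighbor differences."""
    d = np.diff(x)
    g = d / np.sqrt(1.0 + (d / delta) ** 2)
    out = np.zeros_like(x)
    out[:-1] -= g                          # each difference pulls on its two neighbors
    out[1:] += g
    return out

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size for the data term
for it in range(300):
    grad = A.T @ (A @ x - y) + lam * edge_preserving_grad(x)
    x -= step * grad

print("RMSE vs. phantom:", np.sqrt(np.mean((x - x_true) ** 2)))
```

The edge-preserving penalty behaves quadratically for small differences (smoothing noise) and nearly linearly for large ones (preserving edges), which is the trade-off the abstract describes.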
105

Réduction de dose en scanographie thoracique : évaluation de deux générations d’algorithmes de reconstruction itérative en pathologie respiratoire / Dose reduction in chest CT : Evaluation of two generations of iterative reconstruction algorithms

Pontana, François Ascagne 24 September 2013 (has links)
Among the different tools available to save dose in CT, the most recent option is the use of Iterative Reconstructions (IR) instead of Filtered Back Projection. These new algorithms can repeatedly correct the acquisition data by modeling, making it possible to compensate in the reconstructed images for the noise generated by a low-dose CT acquisition. The purpose of the present work was to evaluate, through 5 original studies, the performance of IR in chest CT, especially their potential for dose reduction and their clinical applications. Based on 32 chest CT examinations, the first study validated the level of noise reduction achievable with a first-generation IR algorithm (IRIS). This initial evaluation allowed us to investigate the performance of IRIS in clinical practice, giving rise to the second IRIS study, which evaluated 80 patients who underwent two successive chest CT examinations for monitoring. Despite a 35% dose reduction achieved by reduction of the tube current, IRIS provided image quality similar to that of the initial examination. A second-generation IR algorithm (SAFIRE) was then evaluated on examinations obtained at lower dose levels in (a) 80 patients who had undergone low-kilovoltage chest CT angiography with a 50% dose reduction; and (b) 50 patients studied with a dual-source CT system providing full-dose and low-dose (60% reduction) images simultaneously. Lastly, SAFIRE was evaluated in the specific context of acute pulmonary embolism, where the diagnostic performance of low-dose SAFIRE images was found to be similar to that of full-dose FBP images.
106

A novel iterative reducible ligation strategy for the synthesis of homogeneous gene delivery polypeptides

Ericson, Mark David 01 December 2012 (has links)
The ability to safely deliver efficacious amounts of nucleic acids to cells and tissues remains an important goal for the gene therapy field. Viruses are very efficient at delivering DNA, but safety concerns limit their clinical use. Nonviral vectors are not as efficient at DNA delivery, but have a better safety profile. Limiting the efficacy of nonviral vectors are the numerous extra- and intracellular barriers that must be overcome for successful DNA delivery in vivo. While single polymers can successfully transfect immortalized cell lines in vitro, multicomponent gene delivery systems are required for delivery in vivo. Key to the development of multicomponent systems is their synthesis. Optimization of a nonviral gene delivery system requires the development of methodologies that incorporate the different components in a controlled fashion, generating homogeneous gene delivery vectors. Such syntheses ensure that every polymer has the different components required for successful delivery. The amount of each component and its location within the gene delivery system can also be varied systematically, allowing optimization of the vector. The overall scope of this thesis is to develop a chemical method to iteratively couple gene delivery peptides through reducible disulfide bonds. The synthesis of such polypeptides allows the triggered disassembly of a polypeptide polyplexed with DNA upon cellular uptake. To synthesize homogeneous gene delivery polypeptides, a novel iterative reducible ligation strategy was developed, based upon the use of a thiazolidine-masked cysteine. Initial studies demonstrated that a thiazolidine could be unmasked to a cysteine in the presence of a disulfide bond without side reactions, though the reported thiazolidine hydrolysis conditions of aqueous methoxyamine were insufficiently robust for high-yielding ligations. Discovery of a novel silver trifluoromethanesulfonate hydrolysis led to an efficient process for generating reducible polypeptides, as evidenced by the synthesis of a four-component polypeptide. Owing to the success of the thiazolidine-mediated iterative ligation strategy, cysteines were replaced by penicillamines to produce more stable disulfide bonds. The mild thiazolidine hydrolysis and subsequent peptide conjugation reactions prompted an attempt at the iterative ligation strategy on a solid support, eliminating purification steps that lowered the yields in the solution-phase methodology. Initial progress toward generating gene delivery peptides that could be incorporated into the synthetic strategy included a tri-orthogonal cysteine protection scheme that allowed a third cysteine to be derivatized with a targeting ligand or stealthing polymer. Because terminal cysteines are used in the iterative ligation strategy, a PEG stealthing polymer could be placed in the center of a polyacridine gene delivery peptide with only a small decrease in its ability to condense and protect DNA during systemic circulation. A convergent synthesis was also developed that could produce large polypeptides in fewer linear steps. The synthetic methodology of thiazolidine-mediated iterative reducible ligation developed in this thesis is important to the gene therapy field, as it allows the construction of polypeptides that can be systematically optimized, potentially resulting in highly efficacious nonviral gene delivery.
107

Performance of iterative detection and decoding for MIMO-BICM systems

Yang, Tao, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
Multiple-input multiple-output (MIMO) wireless technology is an emerging cost-effective approach that offers a multiple-fold capacity improvement relative to conventional single-antenna systems. To achieve the capacities of MIMO channels, MIMO bit-interleaved coded modulation (BICM) systems with iterative detection and decoding (IDD) are studied in this thesis. The research for this dissertation is based on iterative receivers with convolutional codes and turbo codes. A variety of MIMO detectors are studied, such as a maximum a posteriori probability (MAP) detector, a list sphere detector (LSD), and a parallel interference canceller (PIC) combined with a decision statistic combiner (DSC). The performance of these iterative receivers is investigated via bounding techniques or Monte Carlo simulations. Moreover, the computational complexities of the components are quantified and compared. The convergence behaviors of the iterative receivers are analyzed via variance transfer (VTR) functions and variance exchange graphs (VEGs). The convergence analysis facilitates finding components that are well matched. For a fast fading channel, we show that the "waterfall region" of an iterative receiver can be predicted by the VEG. For a slow fading channel, it is shown that the performance of an iterative receiver is essentially limited by the early interception ratio (ECR), which is obtained via simulations. After the transfer properties of the detectors are unveiled, a detection switching (DSW) methodology is proposed and a switching criterion based on cross entropy (CE) is derived. By employing DSW, the performance of an iterative receiver with a list sphere detector of a small list size is considerably improved. It is shown that the iterative receiver achieves performance very close to that with a MAP detector but at a significantly reduced complexity. For an iterative receiver with more than two components, various iteration schedules are explored. The schedules are applied in an iterative receiver with a PIC-DSC. It is shown that the iterative receiver with periodic scheduling outperforms that with conventional scheduling at the same level of complexity.
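A rough sketch of the soft parallel interference cancellation step mentioned above is given below; it is illustrative only, not the thesis receiver, and the 2x2 real channel, noise level, and a priori LLR values are assumptions. A priori LLRs fed back from the decoder form soft symbol estimates, the interference from the other stream is reconstructed and subtracted, and an extrinsic LLR is computed for each stream.

```python
import numpy as np

# Minimal soft parallel interference cancellation (PIC) step for two BPSK streams
# over a 2x2 real-valued MIMO channel; all numbers are illustrative assumptions.
rng = np.random.default_rng(1)
H = rng.standard_normal((2, 2))            # channel matrix, assumed known at the receiver
x = np.array([1.0, -1.0])                  # transmitted BPSK symbols
sigma2 = 0.1                               # noise variance
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(2)

La = np.array([1.5, -0.8])                 # a priori LLRs fed back from the decoder (assumed)
x_soft = np.tanh(La / 2.0)                 # soft symbol estimates E[x] for BPSK

Le = np.zeros(2)                           # extrinsic LLRs passed back to the decoder
for k in range(2):
    others = [j for j in range(2) if j != k]
    r = y - H[:, others] @ x_soft[others]  # cancel the reconstructed interference
    h = H[:, k]
    hh = h @ h
    z = (h @ r) / hh                       # matched filter on the cleaned observation
    # Noise plus residual interference from imperfect cancellation, referred to z.
    var_z = sigma2 / hh + np.sum((h @ H[:, others]) ** 2 * (1.0 - x_soft[others] ** 2)) / hh ** 2
    Le[k] = 2.0 * z / var_z                # extrinsic LLR (a priori LLR of stream k excluded)

print("extrinsic LLRs:", Le)
```

In an IDD loop these extrinsic LLRs would be deinterleaved, decoded, and fed back as the next set of a priori values, which is the exchange the abstract analyzes with transfer functions.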
108

Optimisation of Iterative Multi-user Receivers using Analytical Tools

Shepherd, David Peter, RSISE [sic] January 2008 (has links)
The objective of this thesis is to develop tools for the analysis and optimization of an iterative receiver. These tools can be applied to most soft-in soft-out (SISO) receiver components. For illustration purposes we consider a multi-user DS-CDMA system with forward error correction that employs iterative multi-user detection based on soft interference cancellation and single-user decoding. Optimized power levels combined with adaptive scheduling allow efficient utilization of receiver resources for heavily loaded systems.

Metric transfer analysis has been shown to be an accurate method of predicting the convergence behavior of iterative receivers. Extrinsic information transfer (EXIT), fidelity transfer (FT) and variance transfer (VT) analysis are well-known methods; however, the relationship between the different approaches has not been explored in detail. We compare the metrics numerically and analytically and derive functions that closely approximate the relationship between them. The result allows easy translation between EXIT, FT and VT methods. Furthermore, we extend the J function, which describes mutual information as a function of variance, to fidelity and symbol error variance, to the Rayleigh fading channel model, and to a channel estimate. These J functions allow the a priori inputs to the channel estimator, interference canceller and decoder to be accurately modeled. We also derive effective EXIT charts which can be used for the convergence analysis and performance prediction of unequal-power CDMA systems.

The optimization of the coded DS-CDMA system is done in two parts: first, the received power levels are optimized to minimize the power used in the terminal transmitters; then, the decoder activation schedule is optimized such that the multi-user receiver complexity is minimized. The uplink received power levels are optimized for the system load using a constrained nonlinear optimization approach. EXIT charts are used to optimize the power allocation in a multi-user turbo-coded DS-CDMA system. We show through simulation that the optimized power levels allow successful decoding of heavily loaded systems with a large reduction in the convergence SNR.

We utilize EXIT chart analysis and a Viterbi search algorithm to derive the optimal decoding schedule for a multi-component receiver/decoder. We show through simulations that decoding delay and complexity can be significantly reduced while maintaining BER performance through optimization of the decoding schedule.
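As a small illustration of the J function mentioned above, the sketch below estimates mutual information between a BPSK symbol and its LLR under the standard consistent-Gaussian LLR model by Monte Carlo. This is an assumption-laden sketch, not the analytical J functions or extensions derived in the thesis.

```python
import numpy as np

# Monte Carlo estimate of J(sigma): mutual information between a BPSK symbol x and
# an LLR drawn from the consistent Gaussian model L ~ N(x * sigma^2 / 2, sigma^2).
# Illustrative only; the thesis works with analytical J functions and extensions.
rng = np.random.default_rng(0)

def J(sigma, n=200_000):
    x = rng.choice([-1.0, 1.0], size=n)
    L = 0.5 * sigma ** 2 * x + sigma * rng.standard_normal(n)
    # I(X; L) = 1 - E[log2(1 + exp(-x * L))] for equiprobable BPSK
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * L)))

for s in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {s}: mutual information ~ {J(s):.3f}")
```

Curves of this quantity against the a priori input are what EXIT-style transfer charts plot for each receiver component.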
109

Low Complexity Adaptive Iterative Receivers for Layered Space-Time Coded and CDMA Systems

Teekapakvisit, Chakree January 2007 (has links)
Doctor of Philosophy (PhD) / In this thesis, we propose and investigate promising approaches for interference mitigation in multiple-input multiple-output (MIMO) and code division multiple access (CDMA) systems. Future wireless communication systems will have to achieve high spectral efficiencies in order to meet increasing demands for high data rates in emerging Internet and multimedia services. Multiuser detection and space diversity techniques are the main principles that enable efficient use of the available spectrum. The main limitation on the applicability of these techniques in practical systems is the high complexity of the optimal receiver structures. The research emphasis in this thesis is on the design of low-complexity interference suppression/cancellation algorithms. The most important result of our research is the novel design of interference cancellation receivers that are adaptive and iterative and of low computational complexity. We propose various adaptive iterative receivers based on a joint adaptive iterative detection and decoding algorithm. The proposed receiver can effectively suppress and cancel co-channel interference from adjacent antennas in the MIMO system with low computational complexity. The proposed adaptive detector, based on the adaptive least mean square (LMS) algorithm, is investigated and compared with the non-adaptive iterative receiver. Since the LMS algorithm has a slow convergence speed, a partially filtered gradient LMS (PFGLMS) algorithm with faster convergence is proposed to improve the convergence speed of the system. The performance and computational complexity of this receiver are also considered. To further reduce the computational complexity, we apply a frequency-domain adaptation technique to the adaptive iterative receivers and investigate the system performance and complexity. The results show that the computational complexity of the frequency-domain receiver is significantly lower than that of the time-domain receiver for the same system performance. We also consider applications of MIMO techniques in CDMA systems, called MIMO-CDMA. In MIMO-CDMA, the presence of co-channel interference (CCI) from adjacent antennas and multiple access interference (MAI) from other users significantly degrades the system performance. We propose an adaptive iterative receiver that can effectively suppress the interference, cancelling the CCI from adjacent antennas and the MAI from other users so as to improve the system performance. The proposed receiver structure is also based on a joint adaptive detection and decoding scheme; the adaptive detection scheme employs an adaptive normalized LMS algorithm operating in the time and frequency domains. We have investigated and compared their system performance and complexity. Moreover, the system performance is evaluated using a semi-analytical approach and compared with simulation results, showing excellent agreement between the two approaches.
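To illustrate the normalized LMS adaptation underlying the detectors described above, the following sketch adapts an NLMS filter to identify an unknown response from input/desired samples. The filter length, step size, and signals are illustrative assumptions rather than the thesis receiver.

```python
import numpy as np

# Minimal normalized LMS (NLMS) sketch: adapt filter taps w so that w'u tracks a
# desired signal d; the unknown system, step size and noise level are illustrative.
rng = np.random.default_rng(2)
n_taps, n_samples = 8, 5000
mu, eps = 0.5, 1e-6                         # NLMS step size and regularization (assumed)

w_true = rng.standard_normal(n_taps)        # unknown response the filter should identify
x = rng.standard_normal(n_samples)          # input / reference signal
d = np.convolve(x, w_true)[:n_samples] + 0.05 * rng.standard_normal(n_samples)

w = np.zeros(n_taps)
for k in range(n_taps - 1, n_samples):
    u = x[k - n_taps + 1:k + 1][::-1]       # most recent inputs, newest first
    e = d[k] - w @ u                        # a priori error
    w += mu * e * u / (eps + u @ u)         # update normalized by instantaneous input power

print("tap estimation error:", np.linalg.norm(w - w_true))
```

The normalization by the instantaneous input power is what distinguishes NLMS from plain LMS and is one way to mitigate the slow convergence the abstract attributes to the basic LMS algorithm.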
110

Residual Julia sets of Newton's maps and Smale's problems on the efficiency of Newton's method

Choi, Yan-yu. January 2006 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
