251

Characterization and Coding Techniques for Long-Haul Optical Telecommunication Systems

Ivkovic, Milos January 2007 (has links)
This dissertation is a study of errors in long-haul optical fiber systems and how to cope with them. First we characterize error events occurring during transmission, then we determine lower bounds on information capacity (achievable information rates), and finally we propose coding schemes for these systems.

Existing approaches for obtaining probability density functions (PDFs) for pulse energy in long-haul optical fiber transmission systems rely on numerical simulations or analytical approximations. Numerical simulations make the far tails of the PDFs difficult to obtain, while existing analytic approximations are often inaccurate, as they neglect the nonlinear interaction between pulses and noise.

Our approach combines the instanton method from statistical mechanics, to model the far tails of the PDFs, with numerical simulations, to refine the middle part of the PDFs. We join the two methods by using an orthogonal polynomial expansion constructed specifically for this problem. We demonstrate the approach on the example of a specific submarine transmission system.

Once the channel is characterized, achievable information rates are estimated by a modification of a method originally proposed by Arnold and Pfister. We give numerical results for the same optical transmission system (a submarine system at a transmission rate of 40 Gb/s). The achievable information rate varies with the noise level and the length of the bit patterns considered (among other parameters); we report achievable rates for systems with different noise levels, propagation distances and bit-pattern lengths.

We also propose two iterative decoding schemes suitable for high-speed long-haul optical transmission. The first is a modification of a method, originally proposed in the context of magnetic media, which incorporates the BCJR algorithm (to overcome intersymbol interference) and Low-Density Parity-Check (LDPC) codes for additional error resilience. This is a "soft decision" scheme, meaning that the decoding algorithm operates on probabilities instead of binary values. The second scheme is "hard decision": it operates on binary values, and is based on maximum likelihood sequence detection (the Viterbi algorithm) together with the hard-decision "Gallager B" decoding algorithm for LDPC codes.
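As a concrete illustration of the hard-decision decoding named in the last sentence — a sketch of the general technique, not the dissertation's implementation — the following Python snippet implements a serial bit-flipping decoder from the Gallager B family. Gallager B proper exchanges extrinsic messages along the edges of the code's Tanner graph; the simplified variant below just flips the bit that sits in the most unsatisfied parity checks. The small (7,4) Hamming matrix is a toy stand-in for a real LDPC code.

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=100):
    """Hard-decision bit-flipping decoding (simplified Gallager-B-style):
    repeatedly flip the bit involved in the largest number of
    unsatisfied parity checks until the syndrome is zero."""
    x = np.array(r, dtype=np.uint8).copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2            # 1 marks an unsatisfied check
        if not syndrome.any():
            return x, True                # valid codeword reached
        fails = H.T @ syndrome            # unsatisfied checks per bit
        x[int(np.argmax(fails))] ^= 1     # flip the most suspect bit
    return x, False                       # give up after max_iters

# Toy (7,4) Hamming parity-check matrix standing in for an LDPC code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
recv = np.zeros(7, dtype=np.uint8)        # all-zero codeword sent...
recv[2] ^= 1                              # ...with one hard channel error
decoded, ok = bit_flip_decode(H, recv)
print(decoded, ok)                        # -> [0 0 0 0 0 0 0] True
```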
252

A Fast Hybrid Method for Analysis and Design of Photonic Structures

Rohani, Arash January 2006 (has links)
This thesis presents a very efficient hybrid method for the analysis and design of optical and passive photonic devices. The main focus is on unbounded wave structures. This class of photonic systems is in general very large in terms of the wavelength of the driving optical sources, and the size of the problem space makes the electromagnetic modelling of these structures very challenging. Our approach, and main contribution, has been to combine or hybridize three methods that together can handle this class of photonic structures as a whole.

The basis of the hybrid method is a novel Gaussian Beam Tracing (GBT) method. Gaussian Beams (GBs) are very suitable elementary functions for tracing and tracking purposes due to their finite extent and the fact that they are good approximations of actual laser beams. The GBT presented in this thesis is based on the principle of phase matching. It can model the reflection and refraction of Gaussian beams at general curved surfaces, as long as the curvature of the surface is relatively small, as well as wave propagation in free space. The developed GBT is extremely fast, as it essentially uses simple algebraic equations to find the parameters of the reflected and refracted beams once the parameters of the incident beam are known. Therefore, sections of the system whose dimensions are large relative to the optical wavelength are simulated by the GBT method.

Fields entering a photonic system may not possess an exact Gaussian profile. For example, if an aperture limits the input laser to the system, the field is no longer a GB. In these and other similar cases, the field at some aperture plane needs to be expanded into a sum of GBs. The Gabor expansion has been used for this purpose: it allows any field distribution on a flat or curved surface to be expanded into a sum of GBs. The resultant GBs are then launched inside the system and tracked by GBT. Calculation of the coefficients of the Gabor series is very fast (1-2 minutes on a typical computer for most applications).

In some cases the dimensions or physical properties of structures do not allow the application of the GBT method. For example, if the curvature of a surface is very large (its radius of curvature very small), or if the surface contains sharp edges or sub-wavelength features, GBT is no longer valid. In these cases we have utilized the Finite-Difference Time-Domain (FDTD) method. FDTD is a rigorous and very accurate full-wave electromagnetic solver in which the time-domain form of Maxwell's equations is discretized and solved; no matrix inversion is needed. If the structure to be analyzed is large relative to the wavelength, FDTD can become increasingly time-consuming. Nevertheless, once a structure has been simulated using FDTD for a given input, the output is expanded using the Gabor expansion and the resultant beams can then be propagated efficiently through any desired system using GBT. For example, if a diffraction grating is illuminated by some source, once the reflection is found using FDTD it can be propagated very efficiently through any kind of lens or prism (or other optical structure) using GBT. The overall computational efficiency of the hybrid method is therefore very high compared to other methods.
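The "simple algebraic equations" of beam tracing can be illustrated with the standard paraxial rule for the complex beam parameter, q' = (Aq + B)/(Cq + D), applied element by element. This is only a rough stand-in for the thesis's phase-matching GBT at general curved surfaces; the wavelength, waist and focal length below are arbitrary illustrative values.

```python
import numpy as np

def q_parameter(w, R, wavelength):
    """Complex beam parameter q of a Gaussian beam:
    1/q = 1/R - i*wavelength / (pi * w**2)."""
    inv_R = 0.0 if np.isinf(R) else 1.0 / R
    return 1.0 / (inv_R - 1j * wavelength / (np.pi * w**2))

def transform(q, M):
    """Propagate q through one paraxial element with ray matrix M."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def beam_width(q, wavelength):
    """1/e^2 field radius, recovered from the imaginary part of 1/q."""
    return np.sqrt(-wavelength / (np.pi * (1.0 / q).imag))

lam = 1.55e-6                                      # 1550 nm source
q = q_parameter(w=1e-3, R=np.inf, wavelength=lam)  # 1 mm collimated waist
free = lambda d: ((1.0, d), (0.0, 1.0))            # free-space section
lens = lambda f: ((1.0, 0.0), (-1.0 / f, 1.0))     # thin lens

# waist -> 10 cm of free space -> f = 5 cm lens -> on to the focal plane
for element in (free(0.10), lens(0.05), free(0.05)):
    q = transform(q, element)
print(f"beam radius near the focus: {beam_width(q, lam) * 1e6:.1f} um")
```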
253

A novel differential evolution algorithmic approach to transmission expansion planning

Sum-Im, Thanathip January 2009 (has links)
Modern electric power systems consist of large-scale and highly complex interconnected transmission systems, so transmission expansion planning (TEP) is now a significant power system optimisation problem. The TEP problem is a large-scale, complex and nonlinear combinatorial problem of mixed-integer nature, in which the number of candidate solutions to be evaluated increases exponentially with system size. Accurate solution of the TEP problem is essential in order to plan power systems in both an economic and an efficient manner, so the optimisation methods applied must themselves be sufficiently efficient. In recent years a number of computational techniques have been proposed to address this efficiency issue, including algorithms inspired by observations of natural phenomena for solving complex combinatorial optimisation problems; such algorithms have been successfully applied to a wide variety of electrical power system optimisation problems. Differential evolution algorithm (DEA) procedures in particular have been attracting significant attention from researchers, as they have been found to be extremely effective in solving power system optimisation problems. The aim of this research is to develop and apply a novel DEA procedure directly to a DC power-flow-based model in order to solve the TEP problem efficiently. In this thesis the TEP problem is investigated in both static and dynamic form, and two cases of the static TEP problem, with and without generation resizing, are also investigated. The proposed method achieves solutions with good accuracy, stable convergence characteristics, simple implementation and satisfactory computation time. The analyses have been performed within the MATLAB mathematical programming environment using both DEA and conventional genetic algorithm (CGA) procedures, and a detailed comparison is presented. Finally, the sensitivity of the DEA control parameters is also investigated.
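For readers unfamiliar with the method, the core DE/rand/1/bin loop is short enough to sketch. This is not the thesis's planner: a real TEP model has integer line-addition variables and a DC power-flow-based cost, whereas the toy below minimises a continuous quadratic surrogate, and the population size, F and CR are typical textbook values.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimiser. `bounds` is a list of
    (low, high) pairs; F is the mutation scale, CR the crossover rate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dims = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dims))
    fit = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dims) < CR                 # crossover mask
            cross[rng.integers(dims)] = True              # keep >=1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fit[i]:                         # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Toy quadratic surrogate standing in for a TEP investment cost
cost = lambda x: float(np.sum((x - 1.5) ** 2))
x_best, f_best = differential_evolution(cost, [(-5, 5)] * 4)
print(x_best.round(3), round(f_best, 6))   # -> values near 1.5, cost near 0
```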
254

Diffraction efficiency and aberrations of diffractive elements obtained from orthogonal expansion of the point spread function

Schwiegerling, Jim 27 September 2016 (has links)
The Point Spread Function (PSF) indirectly encodes the wavefront aberrations of an optical system and therefore is a metric of the system performance. Analysis of the PSF properties is useful in the case of diffractive optics where the wavefront emerging from the exit pupil is not necessarily continuous and consequently not well represented by traditional wavefront error descriptors such as Zernike polynomials. The discontinuities in the wavefront from diffractive optics occur in cases where step heights in the element are not multiples of the illumination wavelength. Examples include binary or N-step structures, multifocal elements where two or more foci are intentionally created or cases where other wavelengths besides the design wavelength are used. Here, a technique for expanding the electric field amplitude of the PSF into a series of orthogonal functions is explored. The expansion coefficients provide insight into the diffraction efficiency and aberration content of diffractive optical elements. Furthermore, this technique is more broadly applicable to elements with a finite number of diffractive zones, as well as decentered patterns.
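The tie between expansion coefficients and diffraction efficiency can be cross-checked by the classical Fourier route: for a periodic phase profile φ(x), the efficiency of order m is η_m = |c_m|², where c_m are the Fourier coefficients of exp(iφ). The sketch below performs that standard calculation (it is not the paper's PSF expansion); for a 4-level staircase approximation of a blazed profile, only orders m ≡ 1 (mod 4) receive energy, and the textbook first-order value is η₁ = sinc²(1/4) ≈ 0.81.

```python
import numpy as np

def order_efficiencies(phase, orders):
    """Diffraction efficiencies eta_m = |c_m|^2 of a periodic phase
    profile, with c_m the Fourier coefficients of exp(i*phase).
    `phase` holds uniform samples of phi(x) over one period."""
    n = phase.size
    x = np.arange(n) / n
    field = np.exp(1j * phase)
    return {m: abs(np.mean(field * np.exp(-2j * np.pi * m * x))) ** 2
            for m in orders}

levels = 4                                    # 4-step (N-level) structure
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
stepped = (2 * np.pi / levels) * np.floor(levels * x)   # quantised blaze

eta = order_efficiencies(stepped, orders=range(-3, 6))
print({m: round(v, 4) for m, v in eta.items()})
# expect eta[1] ~ 0.81, eta[-3] ~ 0.09, eta[5] ~ 0.03, others ~ 0
```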
255

Polynomial Expansion-Based Displacement Calculation on FPGA

Ehrenstråhle, Carl January 2016 (has links)
This thesis implements a system for calculating the displacement between two consecutive video frames. The displacement is calculated using a polynomial expansion-based algorithm. A unit-tested, bottom-up approach is successfully used to design and implement the system, and the design and implementation are described in detail. The chosen algorithm and its computational details are presented to provide context for the implemented system, and some of the major issues and their impact on the system are discussed.
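The displacement algorithm in question is of the polynomial-expansion family known from Farnebäck's work, for which OpenCV offers a convenient software reference point. The snippet below is a plausible CPU-side sanity check of that algorithm on synthetic frames — not the thesis's FPGA implementation — and the parameters are common defaults rather than its fixed-point configuration.

```python
import cv2
import numpy as np

# Two synthetic consecutive frames: a bright square shifted 3 px right
prev_f = np.zeros((128, 128), np.uint8)
next_f = np.zeros((128, 128), np.uint8)
prev_f[40:80, 40:80] = 255
next_f[40:80, 43:83] = 255

# Dense displacement field via Farneback's polynomial-expansion method
flow = cv2.calcOpticalFlowFarneback(prev_f, next_f, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
dx = flow[45:75, 45:75, 0].mean()   # mean horizontal shift inside square
print(f"estimated shift: {dx:.2f} px (ground truth: 3)")
```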
256

Comparison of mandibular arch dimensions before and after non-extraction orthodontic treatment

Cardona, Cédric January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
257

A flexible expansion algorithm for user-chosen abbreviations

Willis, Timothy Alan January 2008 (has links)
People with some types of motor disabilities who wish to generate text using a computer can find the process both fatiguing and time-consuming. These problems can be alleviated by reducing the quantity of keystrokes they must make, and one approach is to allow the user to enter shortened, abbreviated input, which is then re-expanded for them by a program ‘filling in the gaps’. Word Prediction is one approach, but comes with drawbacks, one of which is the requirement that the user must generally type the first letters of their intended word, regardless of how unrepresentative they may consider those letters to be. Abbreviation Expansion allows the user to type reduced forms of many words in a way they feel represents them more effectively. This can be done by the omission of one or more letters, or the replacement of letter sequences with other, usually shorter, sequences. For instance, the word ‘hyphenate’ might be shortened to ‘yfn8’, by leaving out some letters and replacing the ‘ph’ and ‘ate’ with the shorter but phonetically similar ‘f’ and ‘8’.

‘Fixed Abbreviation Expansion’ requires the user to memorise a set of correspondences between abbreviations and the full words which they represent. While this enables useful keystroke savings to be made, these come alongside an increased cognitive load and potential for error. Where a word is encountered for which there is no preset abbreviation, or for which the user cannot remember one, keystroke savings may be lost.

‘Flexible Abbreviation Expansion’ allows the user to leave out whichever letters they feel to be ‘less differentiating’ and jump straight ahead to type those they feel are most ‘salient’ and most characterise the word, choosing abbreviations ‘on the fly’. The need to memorise sets of correspondences is removed, as the user can be offered all candidates for which the abbreviation might be a representation, usually in small sets on screen. For useful savings to be made, the intended word must regularly be in the first or second set for quick selection, or the system might attempt to place the intended word at the very top of its list as frequently as possible. Thus it is important to generate and rank the candidates effectively, so that high-probability words can be offered in a shortlist, with lower-ranking candidates offered in secondary lists which are not immediately displayed. This can reduce both the cognitive load and the keystrokes needed for selection.

The thesis addresses the task of reducing the number of keystrokes needed for text creation with a large, expressive vocabulary, using a new approach to flexible abbreviation expansion. To inform the solution, two empirical studies were run to gather letter-level statistics on the abbreviation methods of twenty-nine people, under different degrees of constriction (that is, different restrictions on the number of characters by which to reduce). These studies showed that, with a small amount of priming, people would abbreviate in regular ways, both shared between users and repeated through the data from an individual. Analysis showed the most common strategies to be vowel deletion, phonetic replacement, loss of double letters, and word truncation. Participants reduced the number of letters in their texts by between 25% (judged to maintain a high degree of comprehensibility) and 40% (judged to be a maximum degree of brevity whilst still retaining comprehensibility). Informed by these results, an individual-word-level algorithm was developed.
For each input abbreviation, a set of candidates is produced, ranked in such a way as to potentially save substantial keystrokes when used across a whole text. A variety of statistical and linguistic techniques, often also used in spelling checking and correction, are used to rank the candidates so that the most probable are easiest to select, with the fewest keystrokes. The algorithm works at the level of the individual word, without looking at surrounding context. Evaluation demonstrated that the algorithm outperforms its nearest comparable alternative, ranking word lists exclusively by word frequency. The evaluation was performed on the data from the second empirical study, using vocabulary sizes of 2,000, 10,000, 20,000 and 30,000 words. The results show the algorithm to be of potential benefit as a component of a flexible abbreviation expansion system: even with the overhead of selecting the intended word, useful keystroke savings could still be attained. It is envisaged that such a system could be implemented on many platforms, including as part of an AAC (Augmentative and Alternative Communication) device or an email system on a standard PC, thus making typed communication for the user group more comfortable and expansive.
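As a hedged sketch of what flexible candidate generation and ranking might look like — the thesis's actual ranker draws on richer statistical and linguistic evidence, and the lexicon and frequencies below are made up — one simple rule treats a word as a candidate if the abbreviation reads as an in-order subsequence of it anchored on the first letter, and orders candidates by corpus frequency:

```python
def is_expansion(abbrev: str, word: str) -> bool:
    """True if `abbrev` reads left-to-right inside `word` (letters may
    be omitted but never reordered), anchored on the first letter.
    Phonetic replacements such as 'ph' -> 'f' or 'ate' -> '8' would
    need an additional rewrite table and are not modelled here."""
    if not abbrev or not word or abbrev[0] != word[0]:
        return False
    letters = iter(word)
    return all(ch in letters for ch in abbrev)  # `in` consumes the iterator

def rank_candidates(abbrev, lexicon):
    """`lexicon` maps word -> corpus frequency. Most probable words come
    first, so they land in the first on-screen shortlist."""
    hits = [w for w in lexicon if is_expansion(abbrev, w)]
    return sorted(hits, key=lambda w: (-lexicon[w], len(w)))

# Hypothetical mini-lexicon with invented frequencies
lexicon = {"hyphenate": 120, "hibernate": 900, "heighten": 300,
           "hasten": 500, "hypnotise": 80}
print(rank_candidates("hyphnt", lexicon))   # -> ['hyphenate']
print(rank_candidates("hstn", lexicon))     # -> ['hasten']
```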
258

Evidence of left ventricular wall movement actively decelerating aortic flow

Page, Chloe May January 2009 (has links)
Efficient function of the left ventricle (LV) is achieved by coherent behaviour of its circumferential and longitudinal myocardial components. Little was known about the direct association between the long- and minor-axis velocities and the overall haemodynamics generated by ventricular systolic function, such as aortic waves. The forward-running expansion wave (FEW) during late systole contains important information about the condition of the LV and its interaction with the arterial system. The aim of this thesis was to establish the mechanics and timing of the LV wall velocities associated with the deceleration of flow. Both invasive and non-invasive data were analysed in canines and humans, and the following conclusions can be drawn. LV long-axis peak shortening velocity lags consistently behind the minor axis, representing a degree of normal asynchrony. The FEW is seen to have a slow onset before a rapid increase in energy; the slow onset corresponds with the time at which the long axis reaches its peak velocity of shortening. After both axes reach their respective maximum shortening velocities they continue to contract, although at a slow, steady velocity, until late ejection, when there is a sudden simultaneous change in the shortening velocity of both axes. This time corresponds with peak aortic pressure and the rapid increase in energy of the FEW. The time at which the minor axis reaches its maximum velocity of shortening coincides, interestingly, with the arrival of the reflected wave at the LV during mid-systole. During canine aortic manipulation through the introduction of total occlusions along the aorta, the sequence of events observed under control conditions remains unchanged. In humans, both LV wall movement and carotid wave intensity can be measured successfully using non-invasive methods. The FEW is generated when the last long-axis segment begins to slow; the minor axis begins to slow before this time, which corresponds to the time of peak aortic flow.
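For context, the FEW is identified using wave intensity analysis, in which the net wave intensity per sample is dI = dP·dU and forward and backward components are separated with the characteristic impedance ρc. The sketch below is that textbook separation under assumed values of ρ and c (in practice the local wave speed c is estimated from the data, for example from the PU-loop); it illustrates the technique, not the thesis's analysis pipeline.

```python
import numpy as np

def wave_intensity(p, u, dt, rho=1060.0, c=5.0):
    """Net and forward wave intensity from pressure p [Pa] and blood
    velocity u [m/s] sampled at interval dt [s]. rho*c is the
    characteristic impedance; rho and c here are assumed values."""
    dP, dU = np.diff(p), np.diff(u)
    dI = dP * dU / dt**2                 # net wave intensity
    dP_f = 0.5 * (dP + rho * c * dU)     # forward pressure change
    dU_f = 0.5 * (dU + dP / (rho * c))   # forward velocity change
    dI_f = dP_f * dU_f / dt**2           # forward wave intensity
    # a forward-running expansion wave has dI_f > 0 with dP_f < 0
    few = (dI_f > 0) & (dP_f < 0)
    return dI, dI_f, few

# Toy late-systolic segment: pressure falling while flow decelerates
t = np.linspace(0.0, 0.1, 200)
p = 1.2e4 - 4e3 * (t / 0.1)          # Pa, falling through late systole
u = 0.8 * (1.0 - t / 0.1)            # m/s, decelerating aortic flow
dI, dI_f, few = wave_intensity(p, u, dt=t[1] - t[0])
print(f"samples flagged as FEW: {few.sum()} of {few.size}")
```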
259

Three-Dimensional Photographic Evaluation of Immediate Soft Tissue Changes Following Rapid Maxillary Expansion

Granillo, Nathan 06 June 2011 (has links)
The skeletal and dental changes associated with rapid maxillary expansion (RME) are well documented, but the effects on the soft tissues, and the potential impact on facial esthetics, have not been well researched. The purpose of this study was to evaluate immediate changes in the facial soft tissues as a result of RME by comparing three-dimensional digital photogrammetric images before and after RME treatment. The 3dMDface System was used to obtain photographic images of 21 patients (mean age = 11.8 years) before and after RME treatment for transverse maxillary deficiency. A control group of 13 patients (mean age = 12.7 years) also had two images taken at a similar time interval. Mean expansion was 6.5 mm in the RME patients. Intercanthal distance, nose width, and intercommissural width changed significantly in the RME patients from T0 to T1 (P = 0.011, P = 0.050, and P = 0.003, respectively). Intercommissural width, however, was the only measure that changed significantly compared with the control group (P = 0.041). Changes in intercanthal distance and nose width were significantly related to the amount of expansion achieved (R² = 0.428, P = 0.0013 and R² = 0.501, P = 0.0003, respectively).
260

Multi-period expansion of a local telecommunications network

Smires, Ali January 2004 (has links)
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
