1

Diffractive optical elements : design and fabrication issues

Blair, Paul. January 1995.
No description available.
2

Calibration of Laser Triangulating Cameras in Small Fields of View / Kalibrering av lasertriangulerande 3D-kamera för användning i små synfält

Rydström, Daniel. January 2013.
A laser triangulating camera system projects a laser line onto an object to create height curves on the object surface. By moving the object, height curves from different parts of the object can be observed and combined to produce a three-dimensional representation of the object. The calibration of such a camera system involves transforming received data to get real-world measurements instead of pixel-based measurements. The calibration method presented in this thesis focuses specifically on small fields of view. The goal is to provide an easy-to-use and robust calibration method that can complement already existing calibration methods. The tool should get as good measurements in metric units as possible, while still keeping complexity and production costs of the calibration object low. The implementation uses only data from the laser plane itself, making it usable also in environments where no external light exists. The proposed implementation utilises a complete scan of a three-dimensional calibration object and returns a calibration for three dimensions. The results of the calibration have been evaluated against synthetic and real data.
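To make the triangulation principle in this abstract concrete, the following minimal Python sketch converts the laser line's pixel displacement into a height profile. The geometry (triangulation angle, pixel scale) is hypothetical; in the thesis, the pixel-to-metric mapping is precisely what the calibration estimates from a scanned calibration object rather than assumes.

```python
import numpy as np

# Minimal laser-triangulation sketch (hypothetical geometry).
# A laser line is projected onto the object; its image shifts by
# `dx` pixels where the surface rises, and the shift maps to height.

TRIANGULATION_ANGLE = np.deg2rad(30.0)  # laser-to-camera angle (assumed)
MM_PER_PIXEL = 0.02                     # sensor scale at the laser plane (assumed)

def height_from_offset(dx_pixels: np.ndarray) -> np.ndarray:
    """Convert the laser line's pixel displacement to surface height (mm).

    For a laser plane meeting the viewing direction at angle theta, a
    lateral image shift of dx corresponds to a height of dx / tan(theta).
    """
    return dx_pixels * MM_PER_PIXEL / np.tan(TRIANGULATION_ANGLE)

# One observed laser line: per-column pixel offsets from the baseline.
offsets = np.array([0.0, 1.5, 3.0, 3.2, 1.0])
print(height_from_offset(offsets))  # height profile in mm
```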
3

Accelerating computational diffusion MRI using Graphics Processing Units

Fernandez, Moises Hernandez. January 2017.
Diffusion magnetic resonance imaging (dMRI) uniquely allows the study of the human brain non-invasively and in vivo. Advances in dMRI offer new insight into tissue microstructure and connectivity, and the possibility of investigating the mechanisms and pathology of neurological diseases. The great potential of the technique relies on indirect inference, as modelling frameworks are necessary to map dMRI measurements to neuroanatomical features. However, this mapping can be computationally expensive, particularly given the trend of increasing dataset sizes and/or the increased complexity of biophysical modelling. Limitations on computing can restrict data exploration and even methodology development. A step forward is to take advantage of the power offered by recent parallel computing architectures, especially Graphics Processing Units (GPUs). GPUs are massively parallel processors that offer trillions of floating-point operations per second, and have made possible the solution of computationally intensive scientific problems that were intractable before. However, they are not inherently suited to all types of problems, and in many cases bespoke computational frameworks need to be developed to take advantage of their full potential. In this thesis, we propose parallel computational frameworks for the analysis of dMRI using GPUs within different contexts. We show that GPU-based designs can offer accelerations of more than two orders of magnitude for a number of scientific computing tasks with different parallelisability requirements, ranging from biophysical modelling for tissue microstructure estimation to white matter tractography for connectome generation. We develop novel and efficient GPU-accelerated solutions, including a framework that automatically generates GPU parallel code from a user-specified biophysical model. We also present a parallel GPU framework for performing probabilistic tractography and generating whole-brain connectomes. Throughout the thesis, we discuss several strategies for parallelising scientific applications, and we show the great potential of the accelerations obtained, which change the perspective of what is computationally feasible.
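The accelerations described above rest largely on the fact that dMRI model fitting is independent per voxel. The following sketch shows that structure with synthetic data and a mono-exponential decay standing in for a real biophysical model; it is vectorised with NumPy, but the same one-voxel-per-parallel-unit layout is what a GPU kernel would exploit.

```python
import numpy as np

# Voxel-wise model fitting is embarrassingly parallel: each voxel's
# parameters depend only on that voxel's own measurements. Here the
# simple decay S = exp(-b * D) stands in for a real biophysical model;
# b-values and signals are synthetic.

rng = np.random.default_rng(0)
n_voxels = 10_000
b = np.array([0.0, 500.0, 1000.0, 2000.0])                    # s/mm^2

true_D = rng.uniform(0.5e-3, 3e-3, size=(n_voxels, 1))        # mm^2/s
signals = np.exp(-b * true_D) * (1 + 0.01 * rng.standard_normal((n_voxels, b.size)))

# Closed-form log-linear fit, vectorised over every voxel at once --
# the same "one voxel per parallel unit" layout a GPU kernel would use.
log_s = np.log(np.clip(signals, 1e-12, None))
coef = np.polynomial.polynomial.polyfit(b, log_s.T, 1)        # (2, n_voxels)
D_hat = -coef[1]                                              # slope = -D

print("mean abs error:", np.abs(D_hat - true_D.ravel()).mean())
```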
4

Damage Detection In Beam-like Structures Via Combined Genetic Algorithm And Non-linear Optimisation

Aktasoglu, Seyfullah. 01 February 2012.
In this study, a combined genetic algorithm and non-linear optimisation system is designed and used to identify structural damage in a cantilever isotropic beam, in terms of both location and severity. The vibration-based features, natural frequencies (i.e. eigenvalues) and displacement mode shapes (i.e. eigenvectors) of the structure in the first two out-of-plane bending modes, are selected as damage features for various damage types, comprising saw-cut and impact. For this purpose, the commercial finite element modelling (FEM) and analysis software MSC Patran/Nastran® is used to obtain the aforementioned features from intact and damaged structures. Various damage scenarios are obtained: saw-cut type damage is modelled as a change in element thicknesses, and impact type damage as a reduction of the elastic modulus of the elements in the finite element models. These models are generated using both 1-D bar elements and 2-D shell elements in MSC Patran®, and normal mode analyses are then performed in order to extract element stiffness and mass matrices using MSC Nastran®. Sensitivity matrices are then created by changing the related properties (i.e. reduction in elastic modulus and thickness) of the individual elements via successive normal mode analyses. The obtained sensitivity matrices are used as coefficients of the element stiffness and/or mass matrices to construct global stiffness and/or mass matrices, respectively. Following this, the residual force vectors obtained for different damage scenarios are minimised via a combined genetic algorithm and non-linear optimisation system to identify damage location and severity. This minimisation procedure is performed in two steps. First, the algorithm tries to minimise the residual force vector (RFV) by changing only the element stiffness matrices, aiming to detect impact type damage, as a change in elastic modulus relates directly to the stiffness matrix. Second, it minimises the RFV by changing both element stiffness and mass matrices, aiming to detect saw-cut type damage, where a thickness change is a function of both stiffness and mass matrices. The damage type is then predicted by comparing the objective function values of these two steps: the lower value (i.e. the fitter) indicates the damage type. The minimisation also yields an intactness value for each element, where one represents an intact element and any value lower than one indicates the damage severity. The element associated with such an intactness value indicates the location of the damage on the structure. Intactness values lower than one at several locations indicate multi-damage cases and provide the corresponding severities. The performance of the proposed combined genetic algorithm and non-linear optimisation system is tested on various damage scenarios created at different locations with different severities, for both single and multi-damage cases. The results indicate that the method used in this study is effective in determining the type, severity and location of damage in beam-like structures.
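As a rough illustration of the residual force vector (RFV) idea described above, the sketch below recovers per-element intactness on a grounded spring chain standing in for the beam finite element model. A plain non-linear optimiser replaces the combined GA/optimisation system, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

k0, n = 1000.0, 4  # nominal element stiffness, number of elements/DOFs

def stiffness(alpha):
    """Assemble the global stiffness of a spring chain grounded at one
    end, with each element stiffness scaled by its intactness alpha_i."""
    K = np.zeros((n, n))
    for i, a in enumerate(alpha):   # element i joins node i-1 (or ground) to node i
        k = a * k0
        K[i, i] += k
        if i > 0:
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
    return K

M = np.eye(n)  # lumped unit masses

# "Measured" eigenpair from the damaged structure (element 3 at 60%).
alpha_true = np.array([1.0, 1.0, 0.6, 1.0])
w, V = np.linalg.eigh(stiffness(alpha_true))  # M = I, so K phi = lam phi
lam, phi = w[0], V[:, 0]

def rfv_objective(alpha):
    # The residual force vector (K(alpha) - lam * M) @ phi vanishes at
    # the true intactness values; its squared norm is minimised.
    r = (stiffness(alpha) - lam * M) @ phi
    return float(r @ r)

res = minimize(rfv_objective, x0=np.ones(n), bounds=[(0.05, 1.0)] * n)
print(np.round(res.x, 3))  # third entry should drop toward 0.6
```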
5

Lastgetriebene Validierung Dienstbereitstellender Systeme / Load-Driven Validation of Service Providing Systems

Caspar, Mirko. 07 January 2014.
As the complexity of heterogeneous, distributed systems grows, so do the demands placed on their validation. This thesis presents a concept by which a particular class of complex systems, so-called service providing systems, can be validated through automated testing. The system functionality is tested by means of heterogeneous clients, e.g. embedded systems. To this end, the system under test is reduced to the services it exposes externally, and the use of these services by clients is quantified as a load. A validation is defined by specifying time-varying loads for each service. These loads are assigned to the available clients in a targeted way and generated by them in the system under test. Practical application of this concept requires automating the validation process. The thesis presents the architecture of a test bench that both accounts for the heterogeneity of the clients and compensates for the effects of client dynamics during the run time of the validation. The algorithmic problem of dynamic test partitioning that has to be solved here is defined, as is a model describing all necessary parameters. The test partitioning can be solved in polynomial time by a purpose-built heuristic. To determine the performance of the developed method, the heuristic is subjected to extensive investigation. The described test bench is implemented for the example of a mobile radio network under test, and key parameters are determined by simulation. The result of this work is a concept for system validation that can be applied generically to any kind of service providing system and thus contributes to improving the development process of complex distributed systems.
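A minimal sketch of the test-partitioning idea follows: per-service target loads at one validation step are assigned greedily to heterogeneous clients with capacity budgets. The thesis's heuristic and parameter model are considerably richer; the services, clients, and numbers here are hypothetical.

```python
# Greedy sketch of load partitioning: each service has a target load
# (requests/s) at the current validation step, and each heterogeneous
# client has a capacity budget. All names and numbers are hypothetical.

targets = {"voice": 120.0, "sms": 300.0, "data": 80.0}       # load per service
clients = {"embedded-a": 150.0, "embedded-b": 200.0, "pc-1": 250.0}

def partition(targets, capacities):
    free = dict(capacities)
    plan = []                                    # (client, service, load) triples
    for service, load in sorted(targets.items(), key=lambda kv: -kv[1]):
        remaining = load
        # Fill the clients with the most free capacity first.
        for client in sorted(free, key=free.get, reverse=True):
            if remaining <= 0:
                break
            share = min(remaining, free[client])
            if share > 0:
                plan.append((client, service, share))
                free[client] -= share
                remaining -= share
        if remaining > 1e-9:
            raise RuntimeError(f"not enough client capacity for {service}")
    return plan

for client, service, load in partition(targets, clients):
    print(f"{client}: generate {load:.0f} req/s of {service}")
```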
6

Adaptive crosstalk cancellation and lattice-aided detection in multi-user communications

Mandar Gujrathi. Date unknown.
Digital subscriber lines (DSL) have revolutionised the provision of high-speed data over the ‘last mile’. Subscribers demand ever more bandwidth, and penetration of the service is now nearly universal. While it is feasible to provide improved broadband services on the new very-high-speed DSL variants, such as VDSL2/3, one of the greatest challenges to further improvements in speed is the problem of crosstalk. Operating over the previously unused higher frequencies of the twisted-pair network, this technology is subject to electromagnetic coupling among the wires, which limits the DSL data rate and service reach. Crosstalk suppression methods such as zero-forcing or decision feedback mainly use block processing. However, coping with the time-varying VDSL environment can incur huge computational costs. In contrast, adaptive processing approaches are much simpler and better suited to tracking such a channel environment. An adaptive canceller uses a training sequence, and the convergence speed depends on the number of crosstalk coefficients it has to estimate. In a populated DSL binder, only a few of the crosstalking neighbours of a particular user are significant. With the aim of reducing the computational complexity in such environments, this thesis introduces the concept of detection-guided adaptive crosstalk cancellation for DSL. We propose a least-squares test feature to detect the dominant crosstalk coefficients and concentrate the adaptation only on them. In comparison to conventional adaptive cancellers, the cancellers proposed in this thesis demonstrate early convergence. By incorporating the test feature, these cancellers have to adapt only the most significant coefficients, and the length of the training sequence is therefore reduced. Together with low run-time complexity and improved convergence, the greatest advantage obtained here is bandwidth efficiency. While enhanced adaptive cancellation is a bandwidth-efficient approach, frequent re-transmission of training sequences may still be required for a rapidly changing VDSL channel, which is again a disadvantage in terms of bandwidth consumption. To overcome this difficulty, we propose fast-converging unsupervised cancellers that improve bandwidth efficiency by not transmitting a training sequence at all. An added advantage is that this would enable Internet service providers to include multiple or improved broadband services within a single subscription. Certain properties of the DSL channel ensure that the communication channel is well conditioned: the basis vectors of the channel matrix are near-orthogonal, and hence linear cancellers such as zero-forcing perform near-optimally. However, this is not the case with wireless channels. We investigate user detection in wireless channels using the principle of lattice reduction. User detection can be seen as a search for the closest vector point in the lattice of received symbols. Though a maximum likelihood (ML) detector provides optimal user detection, it has exponential complexity. We identify that the closest vector problem can be cast as a non-linear optimisation problem. Using the periodicity of the maximum likelihood function, we first present a novel algorithm that approximates the ML function using the Taylor series expansion of a suitable cosine function. With the aim of minimising the approximation error, we represent the ML function as a Fourier series expansion and later propose another approximation using Jacobi theta functions. We study the performance of these approximations when subjected to a suitable unconstrained optimisation algorithm. Through simulations, we demonstrate that the newly developed approximations perform better than conventional cancellers, approach ML performance and, importantly, converge in polynomial time.
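The detection-guided idea above can be sketched as follows: a short least-squares estimate flags the dominant crosstalkers, and LMS adaptation is then restricted to those taps. The channel, threshold, and signal model are synthetic stand-ins rather than VDSL measurements.

```python
import numpy as np

# Detection-guided adaptive cancellation, sketched for one victim line.
# A short least-squares burst flags the dominant crosstalkers; LMS then
# adapts only those coefficients, shortening the required training.

rng = np.random.default_rng(1)
n_lines, n_train, n_run = 20, 200, 2000

h = np.zeros(n_lines)
h[[3, 11]] = [0.8, -0.5]           # only two significant crosstalkers

X = rng.choice([-1.0, 1.0], size=(n_train + n_run, n_lines))  # disturber symbols
noise = 0.01 * rng.standard_normal(n_train + n_run)
y = X @ h + noise                   # received crosstalk on the victim line

# 1) Detection: least-squares estimate on a short training burst,
#    keeping only taps whose magnitude clears a threshold (assumed 0.1).
h_ls, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
active = np.flatnonzero(np.abs(h_ls) > 0.1)
print("detected crosstalkers:", active)

# 2) Adaptation: LMS restricted to the detected taps only.
w, mu = np.zeros(active.size), 0.01
for t in range(n_train, n_train + n_run):
    x = X[t, active]
    e = y[t] - w @ x                # residual crosstalk after cancellation
    w += mu * e * x

print("estimated taps:", dict(zip(active.tolist(), np.round(w, 3))))
```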
7

Une commande neuronale adaptative basée sur des émulateurs neuronal et multimodèle pour les systèmes non linéaires MIMO et SIMO / An adaptative neural control based on neural and multimodel emulators for MIMO and SIMO non linear systems

Bahri, Nesrine. 30 September 2015.
The porosity of an RTM-type carbon/epoxy composite plate is known from X-ray tomography. A method is proposed for determining this porosity by measuring the attenuation of longitudinal waves through the thickness of the plate. These measurements are made on surfaces of varying size (from a few cm² to a few mm²) and yield attenuation maps. A porosity (X-ray tomography) versus attenuation (ultrasonic wave) correspondence is deduced and analysed as a function of the structure of the composite material. In each case, the quality of the relations obtained is assessed and the limits of validity of the porosity-attenuation correspondence are deduced. First acoustic tomography results are obtained.
8

Amélioration de la qualité d'expérience vidéo en combinant streaming adaptif, caching réseau et multipath / Combining in-network caching, HTTP adaptive streaming and multipath to improve video quality of experience

Poliakov, Vitalii. 11 December 2018.
Video traffic grew considerably in recent years and is forecast to reach 82% of total Internet traffic by 2021, doubling its volume compared to today. Such growth overloads Internet Service Providers' (ISPs') networks, which negatively impacts users' Quality of Experience (QoE). This thesis tackles the problem of improving users' video QoE without relying on upgrades to the operators' physical infrastructure. For this, we combine in-network caching, HTTP Adaptive Streaming (HAS), and multipath data transport. We start by exploring the interaction between HAS and caching: we confirm that quality adaptation algorithms need to be aware of the cache and its contents, and we propose such an extension to a state-of-the-art optimisation-based algorithm. Concluding on the difficulty of achieving cache-awareness, we take a step back and study a video delivery system at large scale, where in-network caches are operated by Content Delivery Networks (CDNs). CDNs deploy caches inside ISP networks and also run their own servers outside them. As a novelty, we assume that users are simultaneously connected to two ISPs. This allows video clients either to reach the external servers over multiple paths with aggregated bandwidth (which may increase QoE but brings more traffic into the ISPs), or to stream their content from a closer cache over a single path (bringing less traffic into the ISP). This disagreement between the objectives of the CDN and the ISP leads to suboptimal system performance. In response, we develop a collaboration scheme between the two actors whose performance approaches the optimal boundary in certain settings, and we discuss its practical implementation.
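As a toy illustration of the cache-aware quality adaptation discussed above, the sketch below picks the highest bitrate whose segment download fits the playout buffer, using a higher throughput estimate for representations known to be cached. The bitrate ladder, throughputs, and buffer levels are hypothetical; the thesis extends an optimisation-based adaptation algorithm rather than this simple heuristic.

```python
# Cache-aware bitrate selection, reduced to its simplest form: choose
# the highest bitrate whose segment can be fetched before the playout
# buffer drains, crediting cached representations with the (faster)
# cache throughput. All numbers are hypothetical.

BITRATES = [0.8, 1.5, 3.0, 6.0]   # Mbit/s ladder, ascending
SEGMENT_S = 4.0                   # segment duration in seconds

def choose_bitrate(buffer_s, tput_cache, tput_origin, cached):
    best = BITRATES[0]
    for rate in BITRATES:
        tput = tput_cache if rate in cached else tput_origin
        download_s = rate * SEGMENT_S / tput
        if download_s <= buffer_s:    # this quality won't stall playback
            best = rate
    return best

# The 3.0 Mbit/s representation is cached nearby; the origin is slower.
print(choose_bitrate(buffer_s=6.0, tput_cache=8.0, tput_origin=3.5,
                     cached={0.8, 1.5, 3.0}))   # -> 3.0, not 6.0
```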
