About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

Simulation Based Investigation Of An Improvement For Faster Sip Re-registration

Tanriverdi, Eda 01 July 2004 (has links) (PDF)
M.Sc. thesis, Department of Electrical and Electronics Engineering. Supervisor: Prof. Dr. Semih Bilgen. July 2004, 78 pages. In this thesis, the Session Initiation Protocol (SIP) is studied and an improvement for faster re-registration is proposed. This proposal, namely "registration-activation", is investigated with a simulation prepared using OPNET. The literature on wireless mobile networks and SIP mobility is reviewed. Conditions for an effective mobile SIP network simulation are designed using message sequence charts. The testbed formed by Dutta et al. in [1], which has been used to observe SIP handover performance, is simulated and validated. The mobile nodes, SIP Proxy servers, DHCP servers and network topology are simulated in OPNET Modeler Radio. Once the simulation is shown to be valid, registration-activation is implemented. Different simulation scenarios are set up and run, with different mobile node speeds and different numbers of mobile nodes. The results show that the re-registration delay is improved by applying registration-activation, but the percentage of improvement depends on the improvement in the database access delay at the SIP Proxy server.
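The abstract's closing observation, that the gain from registration-activation hinges on the registrar's database access delay, can be illustrated with a toy delay model. The sketch below is a simplification under stated assumptions, not the thesis's OPNET simulation; all timing constants and function names are invented for illustration.

```python
# Toy handover-delay model contrasting standard SIP re-registration with a
# pre-registered "registration-activation" scheme. All timings (ms) are
# illustrative assumptions, not values from the thesis's OPNET experiments.

def standard_handover(dhcp_ms, rtt_ms, db_write_ms):
    """Full REGISTER after moving: address acquisition, one SIP round
    trip, and a registrar database write for the new binding."""
    return dhcp_ms + rtt_ms + db_write_ms

def activation_handover(dhcp_ms, rtt_ms, db_activate_ms):
    """Binding was registered in advance; the handover message merely
    activates it, so the database operation is a cheap flag update."""
    return dhcp_ms + rtt_ms + db_activate_ms

if __name__ == "__main__":
    dhcp, rtt = 100.0, 20.0  # assumed address-acquisition and signalling delays
    for db_write in (50.0, 200.0):  # fast vs. slow registrar database
        std = standard_handover(dhcp, rtt, db_write)
        act = activation_handover(dhcp, rtt, db_activate_ms=5.0)
        print(f"DB write {db_write:.0f} ms -> standard {std:.0f} ms, "
              f"activation {act:.0f} ms ({100 * (std - act) / std:.0f}% faster)")
```

Consistent with the abstract's conclusion, the relative improvement in this model grows as the registrar's database write becomes more expensive.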
622

A parallel geometric multigrid method for finite elements on octree meshes applied to elastic image registration

Sampath, Rahul Srinivasan 24 June 2009 (has links)
The first component of this work is a parallel algorithm for constructing non-uniform octree meshes for finite element computations. Prior to octree meshing, the linear octree data structure must be constructed and a constraint known as "2:1 balancing" must be enforced; parallel algorithms for these two subproblems are also presented. The second component of this work is a parallel matrix-free geometric multigrid algorithm for solving elliptic partial differential equations (PDEs) using these octree meshes. The last component of this work is a parallel multiscale Gauss-Newton optimization algorithm for solving the elastic image registration problem. The registration problem is discretized using finite elements on octree meshes, and the parallel geometric multigrid algorithm is used as a preconditioner in the Conjugate Gradient (CG) algorithm to solve the linear system of equations formed in each Gauss-Newton iteration. Several ideas were used to reduce the overhead of constructing the octree meshes. These include (a) a way to lower communication costs by reducing the number of synchronizations and the communication message size, (b) a way to reduce the number of searches required to build element-to-vertex mappings, and (c) a compression scheme to reduce the memory footprint of the entire data structure. To our knowledge, the multigrid algorithm presented in this work is the only matrix-free multiplicative geometric multigrid implementation for solving finite element equations on octree meshes using thousands of processors. The proposed registration algorithm is also unique; it combines many different ideas: adaptivity, parallelism, fast optimization algorithms, and fast linear solvers. All the algorithms were implemented in C++ using the Message Passing Interface (MPI) standard and were built on top of the PETSc library from Argonne National Laboratory. The multigrid implementation has been released as open-source software: Dendro. Several numerical experiments were performed to test the performance of the algorithms on a variety of NSF TeraGrid platforms. Our largest run was a highly non-uniform, 8-billion-unknown elasticity calculation on 32,000 processors.
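A linear octree, the data structure named above, is conventionally stored as a sorted array of Morton (Z-order) keys. The sketch below shows only that encoding idea; it is not code from Dendro, and the 2:1 balancing and multigrid machinery are omitted.

```python
# Minimal sketch of a linear octree's building block: each octant at a
# given depth is identified by a Morton key formed by interleaving the
# bits of its (x, y, z) coordinates. Sorting the keys linearises the
# tree so it can be partitioned across processors.

def morton3d(x: int, y: int, z: int, depth: int) -> int:
    """Interleave the low `depth` bits of x, y, z into one Z-order key."""
    key = 0
    for i in range(depth):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# The "linear octree" is then just the sorted leaf keys; spatially nearby
# octants end up close together in the array, which keeps neighbour
# searches and parallel partitioning cheap.
leaves = sorted(morton3d(x, y, z, depth=3)
                for (x, y, z) in [(0, 0, 0), (7, 7, 7), (3, 4, 1)])
print(leaves)
```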
623

Least-squares optimal interpolation for direct image super-resolution : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand

Gilman, Andrew January 2009 (has links)
Image super-resolution aims to produce a higher-resolution representation of a scene from an ensemble of low-resolution images that may be warped, aliased, blurred and degraded by noise. There are a variety of methods for performing super-resolution described in the literature, and in general they consist of three major steps: image registration, fusion and deblurring. This thesis proposes a novel method of performing the first two of these steps. The ultimate aim of image super-resolution is to produce a higher-quality image that is visually clearer, sharper and contains more detail than the individual input images. Machine algorithms cannot assess images qualitatively and typically use a quantitative error criterion, often least-squares. This thesis aims to optimise least-squares directly using a fast method, in particular one that can be implemented using linear filters; hence, a closed-form solution is required. The concepts of optimal interpolation and resampling are derived and demonstrated in practice. Optimal filters optimised on one image are shown to perform near-optimally on other images, suggesting that common image features, such as step-edges, can be used to optimise a near-optimal filter without requiring knowledge of the ground-truth output. This leads to the construction of a pulse model, which is used to derive filters for resampling the non-uniformly sampled images that result from the fusion of registered input images. An experimental comparison shows that a 10th-order pulse-model-based filter outperforms a number of methods common in the literature. The use of optimal interpolation for image registration linearises an otherwise nonlinear problem, resulting in a direct solution. Experimental analysis shows that optimal interpolation-based registration outperforms a number of existing methods, both iterative and direct, at a range of noise levels and for both heavily aliased images and images with a limited degree of aliasing. The proposed method offers flexibility in terms of the size of the region of support, offering a good trade-off between computational complexity and accuracy of registration. Together, optimal interpolation-based registration and fusion are shown to perform fast, direct and effective super-resolution.
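The closed-form least-squares design the abstract relies on can be sketched directly: stack training patches into a matrix and solve for the filter weights in one shot. The snippet below is a minimal illustration of that idea with invented patch geometry and synthetic data; it is not the author's pulse-model filter.

```python
# Least-squares filter design: find weights w minimising ||X w - y||^2,
# where each row of X is a flattened low-resolution patch and y holds the
# target (ground-truth) pixel values. The closed form is what allows the
# filter to be applied afterwards as a fast linear operation.
import numpy as np

def lsq_filter(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Closed-form least-squares optimal filter for patches X, targets y."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves the normal equations
    return w

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 25))          # 1000 flattened 5x5 patches
true_w = rng.normal(size=25)                   # synthetic "ideal" filter
y_train = X_train @ true_w + 0.01 * rng.normal(size=1000)
w = lsq_filter(X_train, y_train)

# As the abstract notes, a filter optimised on one image remains
# near-optimal on others, so w can be reused without new ground truth.
print(np.allclose(w, true_w, atol=0.01))
```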
627

Part-based recognition of 3-D objects with application to shape modeling in hearing aid manufacturing

Zouhar, Alexander 12 January 2016 (has links) (PDF)
In order to meet the needs of people with hearing loss, hearing aids today are custom designed. Increasingly accurate 3-D scanning technology has contributed to the transition from conventional production scenarios to software-based processes. Nonetheless, a tremendous amount of manual work is involved in transforming an input 3-D surface mesh of the outer ear into a final hearing aid shape. This manual work is often cumbersome and requires considerable experience, which is why automatic solutions are of high practical relevance. This work is concerned with the recognition of 3-D surface meshes of ear implants. In particular, we present a semantic part-labeling framework which significantly outperforms existing approaches for this task. We make at least three contributions which may also be found useful for other classes of 3-D meshes. Firstly, we validate the discriminative performance of several local descriptors and show that the majority of them perform poorly on our data, with the exception of 3-D shape contexts. The reason for this is that many local descriptor schemes are not rich enough to capture subtle variations in the form of bends, which are typical of organic shapes. Secondly, based on the observation that the left and right outer ears of an individual look very similar, we raise the question of how similar ear shapes are among arbitrary individuals. We define a notion of distance between ear shapes as a building block of a non-parametric shape model of the ear, to better handle the anatomical variability in ear implant labeling. Thirdly, we introduce a conditional random field model with a variety of label priors to facilitate the semantic part-labeling of 3-D meshes of ear implants. In particular, we introduce the concept of a global parametric transition prior to enforce transition boundaries between adjacent object parts with an a priori known parametric form. In this way we were able to overcome the issue of inadequate geometric cues (e.g., ridges, bumps, concavities) as natural indicators of the presence of part boundaries. The last part of this work offers an outlook on possible extensions of our methods, in particular the development of 3-D descriptors that are fast to compute while at the same time rich enough to capture the characteristic differences between objects residing in the same class.
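The conditional random field mentioned above can be gestured at with a toy energy: unary costs from local descriptors plus a pairwise cost on label transitions between adjacent mesh faces. This is a heavily simplified sketch under stated assumptions; the thesis's global parametric transition prior, which constrains the shape of the transition boundary itself, is not reproduced here.

```python
# Toy CRF energy for mesh part labeling:
#   E(L) = sum_i unary[i, L_i] + sum_{(i,j) in edges} cost * [L_i != L_j]
# The unary table would come from local descriptors (e.g. 3-D shape
# contexts); here it is a made-up example with 3 faces and 2 part labels.
import numpy as np

def crf_energy(labels, unary, edges, transition_cost=1.0):
    """Sum per-face label costs plus a penalty for each adjacent pair
    of faces that carries differing labels."""
    e = sum(unary[i, l] for i, l in enumerate(labels))
    e += sum(transition_cost for i, j in edges if labels[i] != labels[j])
    return e

unary = np.array([[0.1, 2.0],    # face 0 prefers part 0
                  [0.2, 1.5],    # face 1 prefers part 0
                  [1.8, 0.3]])   # face 2 prefers part 1
edges = [(0, 1), (1, 2)]         # face adjacency on the mesh

print(crf_energy([0, 0, 1], unary, edges))  # 0.1 + 0.2 + 0.3 + 1.0 = 1.6
print(crf_energy([0, 0, 0], unary, edges))  # 0.1 + 0.2 + 1.8 = 2.1
```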
628

Avaliação Ecotoxicológica de Agrotóxicos, seus Componentes e Afins: Teste para o Parâmetro Abelhas / Ecotoxicological Evaluation of Pesticides and their Components: Honeybees Protocols

Fernandes, Renata Oliveira de 31 July 2012 (has links)
Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). Brazilian law bases the registration of pesticides, their components and the like on the evaluation and classification of a product's hazard potential, with the planned introduction of environmental and human-health risk assessment, both founded on study results that follow internationally recognized protocols. These protocols, including those for bee toxicity, may not adequately represent the Brazilian reality, since they were developed mostly for temperate countries. There is growing concern about bees and other pollinators, which are of universal importance to the economy, agriculture and the environmental balance of the planet. The disappearance of these insects from their hives, a phenomenon known as Colony Collapse Disorder (CCD), is currently being discussed in Brazil and worldwide; one hypothesis advanced for it is the indiscriminate use of pesticides. This study aimed to identify and examine the existing protocols for assessing the toxicity of pesticides to bees, and to discuss alternative studies that would better adapt the protocols used abroad to the Brazilian scenario. To that end, studies were conducted on survival, walking behavior, proboscis extension response (PER), nutrition and the morphology of the hypopharyngeal glands in Africanized Apis mellifera exposed to the insecticide Azamax®. The results showed that bee survival can be reduced by exposure to the insecticide at field doses, and that young worker bees exposed to three different doses showed morphological changes in the acini of the hypopharyngeal glands that prevented the glands' development. These findings indicate the importance, when revising existing test protocols, of testing the sublethal effects of pesticides on insects, observing parameters such as walking behavior, proboscis extension response and nutrition effects on the hypopharyngeal glands, rather than considering only acute toxicity studies as is done today, and of considering not only hazard evaluation but also dose and the exposure scenario.
629

Image Formation from a Large Sequence of RAW Images : performance and accuracy / Formation d’image à partir d’une grande séquence d’images RAW : performance et précision

Briand, Thibaud 13 November 2018 (has links)
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging problem requiring demosaicking, denoising and super-resolution to be performed on the fly. Existing algorithms produce high-quality images, but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially, so that the memory cost depends only on the size of the output image. After a preprocessing step, the mosaicked (CFA) images are aligned in a common coordinate system using a two-step registration method that we introduce. Then, a color image is computed by accumulation of the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (which we also introduce). We evaluate the performance and accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images our method successfully performs super-resolution, and the residual noise decreases as expected. We obtain results similar to those of slower and more memory-greedy methods. As generating synthetic data requires an interpolation method, we also study in detail trigonometric polynomial and B-spline interpolation, and derive from this study new fine-tuned interpolation methods.
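The sequential accumulation at the heart of the method can be sketched with two running buffers, a weighted sum of samples and a sum of kernel weights, each the size of the output image, so memory is independent of how many frames are streamed through. The Gaussian kernel and nearest-pixel splatting below are simplifying assumptions for illustration, not the thesis's exact kernel regression.

```python
# Streaming kernel-regression fusion: each registered frame deposits its
# irregularly sampled values into fixed-size accumulators, so the memory
# cost depends only on the output resolution, not the sequence length.
import numpy as np

H = W = 64                       # assumed output resolution
num = np.zeros((H, W))           # running weighted sum of sample values
den = np.zeros((H, W))           # running sum of kernel weights

def accumulate(xs, ys, values, sigma=0.7):
    """Splat one frame's registered samples (xs, ys, values) into the buffers."""
    for x, y, v in zip(xs, ys, values):
        i, j = int(round(y)), int(round(x))          # nearest output pixel
        if 0 <= i < H and 0 <= j < W:
            w = np.exp(-((y - i) ** 2 + (x - j) ** 2) / (2 * sigma ** 2))
            num[i, j] += w * v
            den[i, j] += w

# After every frame has been streamed through accumulate(), the fused
# image is the normalised accumulator (deblurring would follow):
# fused = num / np.maximum(den, 1e-12)
```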
630

Developing a professional identity : a grounded theory study of the experiences of pharmacy students undertaking an early period of pre-registration training

Quinn, Gemma L. January 2017 (has links)
Introduction: Trainee pharmacists are required to undertake a work-based pre-registration training placement (PRTP) in order to qualify. Literature exploring how this placement influences the development of students' professionalism is sparse; however, it is acknowledged that placements offer learning that cannot be replicated in an academic environment. Following recent recommendations for the PRTP to be split into two six-month placements, the "sandwich" Master of Pharmacy (MPharm) programme at the University of Bradford offers a unique opportunity to study the impact of an early PRTP. This project aimed to understand the experiences of "sandwich" students during their early PRTP and to generate a theory explaining how professionalism develops during this time. Methods: A constructivist grounded theory approach was taken. Fourteen students who had recently completed their early PRTP were interviewed using semi-structured, face-to-face interviews. A constant comparative approach to analysis was taken. Findings: The process of developing a professional identity emerged as the core category. It consisted of four interlinking stages: reflection, selection of attributes, professional socialisation and perception of role. Developing a professional identity occurred under the conditions of realising the reality of the profession, developing practical knowledge and skills, and learning from mentors. The consequence of developing a professional identity was that participants felt they were now trainee professionals. Discussion and conclusion: The theory demonstrates that developing a professional identity was the main process that occurred while MPharm students were on their early PRTP. Regulatory, funding and educational organisations should consider this when reviewing pharmacists' training and students' approach on their return to university.
