21

TIME-OF-FLIGHT NEUTRON CT FOR ISOTOPE DENSITY RECONSTRUCTION AND CONE-BEAM CT SEPARABLE MODELS

Thilo Balke (15348532) 26 April 2023 (has links)
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing necessity for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information about the image distribution in order to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance through the exploitation of more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We demonstrate that by building high-fidelity forward models, we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created by utilizing neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving a robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. This yields a significant improvement in computation time compared to existing neutron evaluation tools, from weeks to a few hours, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as the evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process in which we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density, which can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and storage of the system matrix are challenging. We present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov random field (MRF) prior and a Plug-and-Play denoiser.
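As a rough illustration of the measurement physics described in this abstract, the sketch below simulates Poisson counts from Beer-Lambert attenuation of a neutron flux plus background, the kind of forward model TRINIDI inverts. All names, shapes, and constants are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch of a TOF neutron transmission forward model: Poisson counts
# from exponential (Beer-Lambert) attenuation of the flux, plus background.
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_tof, n_isotopes = 64, 128, 2
D = rng.uniform(0.1, 5.0, (n_tof, n_isotopes))     # cross-section dictionary (illustrative values)
z = rng.uniform(0.0, 0.2, (n_pixels, n_isotopes))  # areal densities to recover (illustrative)
phi = 1e3 * np.ones((n_pixels, n_tof))             # incident flux per pixel and TOF bin
b = 50.0 * np.ones((n_pixels, n_tof))              # background counts

def forward(z_hat, phi, b, D):
    """Expected counts: flux attenuated by exp(-z D^T), plus background."""
    return phi * np.exp(-z_hat @ D.T) + b

y = rng.poisson(forward(z, phi, b, D))             # noisy TOF measurement

def neg_log_likelihood(z_hat):
    """Poisson negative log-likelihood, up to a constant in y."""
    lam = forward(z_hat, phi, b, D)
    return np.sum(lam - y * np.log(lam))
```

Minimizing such a Poisson negative log-likelihood over the areal densities, with the flux and background estimated in a first step, mirrors the two-step structure the abstract describes.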
22

MAJORIZED MULTI-AGENT CONSENSUS EQUILIBRIUM FOR 3D COHERENT LIDAR IMAGING

Tony Allen (18502518) 06 May 2024 (has links)
Coherent lidar uses a chirped laser pulse for 3D imaging of distant targets. However, existing coherent lidar image reconstruction methods do not account for the system's aperture, resulting in sub-optimal resolution. Moreover, these methods use majorization-minimization for computational efficiency, but do so without a theoretical treatment of convergence.

In this work, we present Coherent Lidar Aperture Modeled Plug-and-Play (CLAMP) for multi-look coherent lidar image reconstruction. CLAMP uses multi-agent consensus equilibrium (a form of PnP) to combine a neural network denoiser with an accurate physics-based forward model. CLAMP introduces an FFT-based method to account for the effects of the aperture and uses majorization of the forward model for computational efficiency. We also formalize the use of majorization-minimization in consensus optimization problems and prove convergence to the exact consensus equilibrium solution. Finally, we apply CLAMP to synthetic and measured data to demonstrate its effectiveness in producing high-resolution, speckle-free 3D imagery.
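For readers unfamiliar with multi-agent consensus equilibrium (MACE), the following is a minimal two-agent sketch of the Mann iteration that MACE methods such as CLAMP build on. The data agent and denoiser here are toy stand-ins, not CLAMP's aperture-modeled forward model or neural denoiser.

```python
# Minimal two-agent MACE sketch: one data-fitting agent, one denoiser agent,
# combined by Mann iterations toward the consensus equilibrium.
import numpy as np

def F_data(w, y, sigma=0.5):
    """Proximal-style data agent: pulls the state toward the measurement y."""
    return (w + sigma * y) / (1.0 + sigma)

def F_denoise(w):
    """Toy denoiser agent (CLAMP uses a neural network denoiser here)."""
    return 0.5 * (w + np.roll(w, 1))  # simple smoothing stand-in

def mace(y, n_iter=200, rho=0.5):
    agents = [lambda w: F_data(w, y), F_denoise]
    w = [y.copy(), y.copy()]                       # one state per agent
    for _ in range(n_iter):
        v = [2.0 * agents[i](w[i]) - w[i] for i in range(2)]   # (2F - I) w
        vbar = 0.5 * (v[0] + v[1])
        # Mann averaging with the redistribution operator (2G - I):
        w = [(1 - rho) * w[i] + rho * (2.0 * vbar - v[i]) for i in range(2)]
    return 0.5 * (w[0] + w[1])                     # consensus estimate

x_hat = mace(np.random.rand(64))
```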
23

Framework for ambient assistive living : handling dynamism and uncertainty in real time semantic services provisioning

Aloulou, Hamdi 25 June 2013 (has links)
The heterogeneity of the environments as well as the diversity of patients' needs and profiles are major constraints that challenge the spread of ambient assistive living (AAL) systems. AAL environments usually evolve through the introduction or disappearance of sensors, devices, and assistive services in response to the evolution of patients' conditions and human needs. Therefore, a generic framework is required that can adapt to such dynamic environments and integrate new sensors, devices, and assistive services at runtime. Implementing such a dynamic aspect may produce uncertainty derived from technical problems related to sensor reliability or network problems. Therefore, a notion of uncertainty should be introduced into context representation and decision making in order to deal with this problem. During this thesis, I have developed a dynamic and extensible framework able to adapt to different environments and patients' needs. This was achieved based on my proposed semantic Plug&Play approach. In order to handle the problem of uncertain information related to technical problems, I have proposed an approach for uncertainty measurement using the intrinsic characteristics of the sensors and their functional behaviors. I have also provided a model of semantic representation and reasoning under uncertainty, coupled with the Dempster-Shafer theory of evidence (DST), for decision making.
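As a small illustration of the DST machinery such a framework can use to fuse uncertain sensor evidence, here is a sketch of Dempster's rule of combination; the sensors and hypotheses are invented for illustration.

```python
# Dempster's rule of combination over a frame of discernment.
# Mass functions map frozensets of hypotheses to belief mass.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions on the same frame via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # mass landing on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# e.g. two sensors reporting whether the patient is in the kitchen (K) or bedroom (B)
m_motion = {frozenset("K"): 0.7, frozenset("KB"): 0.3}   # motion sensor evidence
m_door   = {frozenset("B"): 0.4, frozenset("KB"): 0.6}   # door contact evidence
print(combine(m_motion, m_door))
```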
24

RECONSTRUCTION OF HIGH-SPEED EVENT-BASED VIDEO USING PLUG AND PLAY

Trevor D. Moore (5930756) 16 January 2019 (has links)
Event-based cameras, also known as neuromorphic cameras or dynamic vision sensors, are an imaging modality that attempts to mimic the human eye by asynchronously measuring contrast over time. If the contrast changes sufficiently, a 1-bit event is output, indicating whether the contrast has gone up or down. This stream of events is sparse, and its asynchronous nature allows the pixels to have a high dynamic range and high temporal resolution. However, these events do not encode the intensity of the scene, resulting in an inverse problem: estimating intensity images from the event stream. Hybrid event-based cameras, such as the DAVIS camera, provide a reference intensity image that can be leveraged when estimating the intensity at each pixel during an event. Normally, inverse problems are solved by formulating a forward and prior model and minimizing the associated cost; for this problem, however, the Plug-and-Play (P&P) algorithm is used. In this case, P&P replaces the prior model subproblem with a denoiser, making the algorithm modular and easier to implement. We propose an idealized forward model that assumes the contrast steps measured by the DAVIS camera are uniform in size, to simplify the problem. We show that the algorithm can swiftly reconstruct the scene intensity at a user-specified frame rate, depending on the chosen denoiser's computational complexity and the selected frame rate.
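The following is a minimal sketch of the Plug-and-Play ADMM idea the abstract describes, in which the prior step is replaced by a plug-in denoiser. The blur forward model and Gaussian-filter "denoiser" are stand-ins chosen so the code runs, not the thesis's DAVIS event model.

```python
# PnP-ADMM sketch: alternate a data-fit step with a denoiser in place of the
# prior's proximal operator.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, A, At, n_iter=30, rho=1.0,
             denoise=lambda v: gaussian_filter(v, 1.0)):
    x = At(y)
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # data-fit step: a few gradient steps on ||y - A x||^2 + rho ||x - (v - u)||^2
        for _ in range(10):
            grad = At(A(x) - y) + rho * (x - (v - u))
            x = x - 0.1 * grad
        v = denoise(x + u)   # prior step: denoiser instead of a prox
        u = u + x - v        # dual update
    return x

# Toy usage: a Gaussian blur stands in for the forward model (self-adjoint,
# so At = A); none of this is the thesis's event-camera model.
blur = lambda z: gaussian_filter(z, 2.0)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
x_hat = pnp_admm(blur(truth) + 0.01 * np.random.randn(64, 64), A=blur, At=blur)
```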
25

Experimental multiuser secure quantum communications

Bogdanski, Jan January 2009 (has links)
We are currently experiencing a rapid development of quantum information, a new branch of science at the intersection of quantum physics, information theory, telecommunications, computer science, and many other fields. This branch of science was born in the mid-eighties, developed rapidly during the nineties, and in the current decade has brought a technological breakthrough in creating secure quantum key distribution (QKD) and quantum secret sharing, along with exciting promises in diverse technological fields. Recent QKD experiments have achieved high-rate QKD at a 200 km distance in optical fiber. Significant QKD results have also been achieved in free space. Due to the rapid broadband access deployment in many industrialized countries and the steadily increasing transmission security threats, the natural development awaiting quantum communications, as a part of quantum information, is its migration into commercial switched telecom networks. Such a migration concerns both multiuser quantum key distribution and multiparty quantum secret sharing, which have been the main goal of my PhD studies. They are also the main concern of this thesis. Our research efforts in multiuser QKD have led to the development of a five-user setup for transmissions over switched fiber networks in a star and in a tree configuration. We have achieved longer secure quantum information distances and implemented more nodes than other multi-user QKD experiments. The measurements have shown the feasibility of multiuser QKD over switched fiber networks using standard fiber telecom components. Since circular-architecture networks are important parts of both intranets and the Internet, Sagnac QKD has also been a subject of our research efforts. The published experiments in this area have been very few, and their results were not encouraging, mainly due to single-mode fiber (SMF) birefringence. Our research has led to the development of computer-controlled birefringence compensation in Sagnac configurations, which opens the door to both classical and quantum Sagnac applications. On the quantum secret sharing side, we have achieved the first quantum secret sharing experiment over telecom fiber, in a five-party implementation using the "plug & play" setup and in a four-party implementation using the Sagnac configuration. The setup measurements have shown the feasibility and scalability of multiparty quantum communication over commercial telecom fiber networks.
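As a reminder of the protocol logic underlying such QKD experiments (the thesis contribution is the experimental hardware, not this code), here is a toy BB84 basis-sifting sketch over an ideal, noiseless channel.

```python
# Toy BB84 basis sifting: Alice and Bob keep only the positions where their
# randomly chosen measurement bases agree.
import secrets

n = 1024
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# Ideal channel: Bob reads Alice's bit when bases match, otherwise a random outcome.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting keeps roughly half of the positions.
sifted = [(a, b) for a, b, ab, bb
          in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
key_alice = [a for a, _ in sifted]
key_bob   = [b for _, b in sifted]
assert key_alice == key_bob  # holds only on this noiseless toy channel
```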
26

A Menu-based Universal Control Protocol

Gustafsson, Per-Ola, Ohlsson, Marcus January 2002 (has links)
This thesis project aims to research the possibilities of new wireless technologies in general control situations. We have studied different existing control protocols and developed a new protocol focusing on text-based menus. Our protocol is scalable, easy to implement, and platform- and media-independent. Since our protocol supports Plug and Play with dynamically allocated ids, it does not require a unique id in the hardware.

To test the protocol we have developed a prototype system consisting of a mobile phone connected to a server, which in turn is connected to two slave units controlling peripheral equipment on 220 volts.

The phone is an Ericsson T28 equipped with a Bluetooth unit. The server runs the real-time OS eCos on an ARM 7TDMI Evaluation Kit, and the slave units consist of two developer boards equipped with PIC processors. Communication between the phone and the server was intended to run over Bluetooth; however, we did not find a working Bluetooth protocol stack ported to eCos, so a serial cable was used instead. Communication between the server and the slaves is done over an RS-485 serial network, which simulates the traffic over a radio network.

The results show that our protocol works and that our system would be easy to implement, cheap to produce, and very scalable.
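A minimal sketch of the two ideas this abstract highlights, text-based menus and dynamically allocated ids. The class, fields, and menu layout are our assumptions for illustration, not the thesis's protocol specification.

```python
# Session-local id allocation plus text-menu rendering: no unique hardware id
# is needed because the server hands out ids at plug-in time.
import itertools

class MenuServer:
    def __init__(self):
        self._next_id = itertools.count(1)
        self.devices = {}            # id -> list of menu item labels

    def plug_in(self, menu):
        """A device joins and receives a dynamically allocated id."""
        dev_id = next(self._next_id)
        self.devices[dev_id] = menu
        return dev_id

    def render(self, dev_id):
        """Render a device's menu as numbered text lines for a small display."""
        return [f"{i}. {label}" for i, label in enumerate(self.devices[dev_id], 1)]

server = MenuServer()
lamp = server.plug_in(["On", "Off", "Dim 50%"])
print("\n".join(server.render(lamp)))   # 1. On / 2. Off / 3. Dim 50%
```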
27

Framework for ambient assistive living : handling dynamism and uncertainty in real time semantic services provisioning

Aloulou, Hamdi 25 June 2014 (has links) (PDF)
The heterogeneity of the environments as well as the diversity of patients' needs and profiles are major constraints that challenge the spread of ambient assistive living (AAL) systems. AAL environments usually evolve through the introduction or disappearance of sensors, devices, and assistive services in response to the evolution of patients' conditions and human needs. Therefore, a generic framework is required that can adapt to such dynamic environments and integrate new sensors, devices, and assistive services at runtime. Implementing such a dynamic aspect may produce uncertainty derived from technical problems related to sensor reliability or network problems. Therefore, a notion of uncertainty should be introduced into context representation and decision making in order to deal with this problem. During this thesis, I have developed a dynamic and extensible framework able to adapt to different environments and patients' needs. This was achieved based on my proposed semantic Plug&Play mechanism. In order to handle the problem of uncertain information related to technical problems, I have proposed an approach for uncertainty measurement based on the intrinsic characteristics of the sensors and their functional behaviors. I have also provided a model of semantic representation and reasoning under uncertainty, coupled with the Dempster-Shafer theory of evidence (DST), for decision making.
28

Home Devices Mediation using ontology alignment and code generation techniques

El Kaed, Charbel 13 January 2012 (has links)
Ubiquitous systems as imagined by Mark Weiser are emerging thanks to the development of embedded systems and plug-n-play protocols like Universal Plug aNd Play (UPnP), Intelligent Grouping and Resource Sharing (IGRS), the Devices Profile for Web Services (DPWS), and Apple Bonjour. Such protocols follow the service-oriented architecture (SOA) paradigm and allow automatic device and service discovery in a home network. Once devices are connected to the local network, applications deployed, for example, on a smart phone, a PC, or a home gateway discover the plug-n-play devices and act as control points. The aim of such applications is to orchestrate the interactions between devices such as lights, TVs, and printers, and their corresponding hosted services, to accomplish a specific daily human task like printing a document or dimming a light. Devices supporting a plug-n-play protocol announce their hosted services, each in its own description format and data content. Even similar devices supporting the same services represent their capabilities in different representation formats and content. Such heterogeneity, along with the diversity of the protocol layers, prevents applications from using any available equivalent device on the network to accomplish a specific task. For instance, a UPnP printing application cannot interact with an available DPWS printer on the network to print a document. Designing applications to support multiple protocols is time-consuming, since developers must implement the interaction with each device profile and its own data description. Additionally, the deployed application must use multiple protocol stacks to interact with the devices. Moreover, application vendors and telecom operators need to orchestrate devices through a common application layer, independently of the protocol layers and the device descriptions. To accomplish interoperability between plug-n-play devices and applications, we propose a generic approach which consists in automatically generating proxies based on an ontology alignment. The alignment contains the correspondences between two equivalent device descriptions. These correspondences actually represent the proxy behaviour, which is used to provide interoperability between an application and a plug-and-play device. For instance, the generated proxy will announce itself on the network as a standard UPnP printer and will control the DPWS printer. Consequently, the UPnP printing application will interact transparently with the generated proxy, which adapts and transfers the invocations to the real DPWS printer. We implemented a prototype as a proof of concept, which we evaluated on several real UPnP and DPWS equivalent devices.
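A compact sketch of the generated-proxy idea: correspondences from an ontology alignment drive an adapter that exposes one protocol's interface while forwarding calls to another. The device classes and operation names below are invented stand-ins, not real UPnP/DPWS APIs.

```python
# Alignment-driven proxy: a UPnP-style call is translated to the equivalent
# DPWS-style call using a correspondence table produced by the matcher.
class DPWSPrinter:                      # stand-in for the real device on the network
    def submit_job(self, document):
        return f"DPWS printing {document}"

class UPnPPrinterProxy:
    """Announces itself as a UPnP printer; forwards calls per the alignment."""
    def __init__(self, target, alignment):
        self._target = target
        self._alignment = alignment     # UPnP operation name -> DPWS operation name

    def __getattr__(self, upnp_op):
        dpws_op = self._alignment[upnp_op]          # look up the correspondence
        return getattr(self._target, dpws_op)       # forward to the real device

alignment = {"CreatePrintJob": "submit_job"}        # one correspondence from the matcher
proxy = UPnPPrinterProxy(DPWSPrinter(), alignment)
print(proxy.CreatePrintJob("report.pdf"))           # UPnP-style call, DPWS execution
```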
29

A Menu-based Universal Control Protocol

Gustafsson, Per-Ola, Ohlsson, Marcus January 2002 (has links)
This thesis project aims to research the possibilities of new wireless technologies in general control situations. We have studied different existing control protocols and developed a new protocol focusing on text-based menus. Our protocol is scalable, easy to implement, and platform- and media-independent. Since our protocol supports Plug and Play with dynamically allocated ids, it does not require a unique id in the hardware. To test the protocol we have developed a prototype system consisting of a mobile phone connected to a server, which in turn is connected to two slave units controlling peripheral equipment on 220 volts. The phone is an Ericsson T28 equipped with a Bluetooth unit. The server runs the real-time OS eCos on an ARM 7TDMI Evaluation Kit, and the slave units consist of two developer boards equipped with PIC processors. Communication between the phone and the server was intended to run over Bluetooth; however, we did not find a working Bluetooth protocol stack ported to eCos, so a serial cable was used instead. Communication between the server and the slaves is done over an RS-485 serial network, which simulates the traffic over a radio network. The results show that our protocol works and that our system would be easy to implement, cheap to produce, and very scalable.
30

ADVANCED PRIOR MODELS FOR ULTRA SPARSE VIEW TOMOGRAPHY

Maliha Hossain (17014278) 26 September 2023 (has links)
<p dir="ltr">There is a growing need to reconstruct high quality tomographic images from sparse view measurements to accommodate time and space constraints as well as patient well-being in medical CT. Analytical methods perform poorly with sub-Nyquist acquisition rates. In extreme cases with 4 or fewer views, effective reconstruction approaches must be able to incorporate side information to constrain the solution space of an otherwise under-determined problem. This thesis presents two sparse view tomography problems that are solved using techniques that exploit. knowledge of the structural and physical properties of the scanned objects.</p><p dir="ltr"><br></p><p dir="ltr">First, we reconstruct four view CT datasets obtained from an in-situ imaging system used to observe Kolsky bar impact experiments. Test subjects are typically 3D-printed out ofhomogeneous materials into shapes with circular cross sections. Two advanced prior modelsare formulated to incorporate these assumptions in a modular fashion into the iterativeradiographic inversion framework. The first is a Multi-Slice Fusion and the latter is TotalVariation regularization that operates in cylindrical coordinates.</p><p dir="ltr"><br></p><p dir="ltr">In the second problem, artificial neural networks (NN) are used to directly invert a temporal sequence of four radiographic images of discontinuities propagating through an imploding steel shell. The NN is fed the radiographic features that are robust to scatter and is trained using density simulations synthesized as solutions to hydrodynamic equations of state. The proposed reconstruction pipeline learns and enforces physics-based assumptions of hydrodynamics and shock physics to constrain the final reconstruction to a space ofphysically admissible solutions.</p>
