231

Estimation of volumetric optical coherence tomography measurements from 2D color fundus photographs using machine learning

Johnson, Samuel Steven 01 May 2019 (has links)
The optic nerve head (ONH) is the location at the rear of the eye where the optic nerve fibers exit toward the brain. Swelling of the ONH is most accurately assessed quantitatively via volumetric measures using 3D spectral-domain optical coherence tomography (SD-OCT). However, SD-OCT is not always available, as its use is primarily limited to specialized eye clinics rather than primary care or telemedical settings. Thus, there is still a need for severity assessment using more widely available 2D fundus photographs. In this work, we propose machine-learning methods to locally estimate the volumetric measurements (akin to those produced by 3D SD-OCT images) of optic disc swelling at each pixel location from only a 2D fundus photograph as the input. For training purposes, a thickness map of the swelling (reflecting the distance between the top and bottom surfaces of the ONH and surrounding retina) as measured from SD-OCT at each pixel location was used as the ground truth. First, a random-forest classifier was trained to output each thickness value from local fundus features pertaining to textural and color information. Eighty-eight image pairs of ONH-centered SD-OCT and registered fundus photographs from different subjects with optic disc swelling were used for training and evaluating the model in a leave-one-subject-out fashion. Comparing the thickness map from the proposed method to the SD-OCT ground truth, a root-mean-square (RMS) error of 1.66 mm³ was achieved for the entire ONH region, and Spearman's correlation coefficient was R = 0.73. Regional volumes for the nasal, temporal, inferior, superior, and peripapillary regions had RMS errors of 0.64 mm³, 0.61 mm³, 0.74 mm³, 0.71 mm³, and 1.30 mm³, respectively, suggesting that a single color fundus photograph contains enough evidence to estimate local swelling information. 
Because of the recent success of deep-learning methods in imaging domains, a convolutional neural network was also trained using the same data as the random-forest classifier. Because training data are used to help fine-tune model parameters in deep learning, a subset of twelve randomly selected patients was strictly withheld from the training process and used for testing. Comparing the prediction results on the withheld data with the OCT ground truth, we achieved a root-mean-square (RMS) error of 2.07 mm³ for the entire ONH region. Regional volumes for the nasal, temporal, inferior, superior, and peripapillary regions had RMS errors of 0.75 mm³, 0.82 mm³, 0.85 mm³, 0.91 mm³, and 1.62 mm³, respectively. Although these errors are slightly higher than those from the random-forest model, the test dataset was smaller because a leave-one-subject-out validation approach was not feasible, so the results may not be representative of the whole dataset. It is also known that deep-learning models require larger training datasets to achieve results comparable to traditional machine-learning methods. For these reasons, and because the errors were close to those of the traditional method, we believe deep-learning approaches for estimating local retinal thickness in cases of optic disc swelling still hold promise with larger datasets. Both of the proposed approaches allow clinicians to assess optic nerve edema both qualitatively and quantitatively using only fundus photography. The predictions allow the overall optic nerve head volume to be calculated, as well as regional and local volumes, which was not previously possible.
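As a rough illustration of the volumetric evaluation described above, the sketch below integrates a per-pixel thickness map into a volume and computes an RMS error between predicted and ground-truth volumes. The thickness values and the pixel area are invented for illustration; this is not the thesis's code.

```python
import math

def volume_from_thickness(thickness_map, pixel_area_mm2):
    """Integrate a per-pixel thickness map (mm) into a volume (mm^3)."""
    return sum(t * pixel_area_mm2 for row in thickness_map for t in row)

def rms_error(predicted_volumes, true_volumes):
    """Root-mean-square error between per-subject volume estimates."""
    n = len(predicted_volumes)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predicted_volumes, true_volumes)) / n)

# Toy 2x2 thickness maps (mm); the pixel area of 0.25 mm^2 is made up.
pred = volume_from_thickness([[1.0, 2.0], [2.0, 3.0]], 0.25)  # 2.0 mm^3
true = volume_from_thickness([[1.0, 2.0], [2.0, 2.0]], 0.25)  # 1.75 mm^3
err = rms_error([pred], [true])
```

The same per-pixel integration yields the regional volumes (nasal, temporal, etc.) by restricting the sum to the pixels of each region.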
232

Nonrigid image registration using uncertain surface constraints with application to radiation therapy

Zhang, Cheng 01 January 2013 (has links)
An important research problem in image-guided adaptive radiation therapy (IGART) is how to accurately deform daily onboard cone-beam CT (CBCT) images to higher-quality pretreatment magnetic resonance (MR) or fan-beam CT (FBCT) images, enabling cumulative dose to be evaluated and tumor response to be tracked. In particular, in the case of IGART for prostate cancer, the question becomes how to accurately register the critical organs, such as the bladder, prostate, and rectum. All are soft tissue, and their boundaries cannot always be identified using CBCT. As such, it is challenging to register these soft organs precisely if the intensity difference serves as the only similarity measure. Organ surfaces are often contoured as part of the treatment planning phase. We therefore assume that the organ surfaces are provided by either manual or automatic segmentation and can be used to improve the correspondences at structure boundaries. Unfortunately, these segmentations are often inaccurate, so that direct inclusion of the surfaces into the registration process may give little improvement. Originating from this specific problem, this work tries to answer a more general question: given two intensity images and their associated inaccurate object surfaces, can we design a non-rigid registration algorithm with improved registration accuracy? Influenced by the ideas of data assimilation (DA) and smoothing spline regression (SSR), this report provides a solution consisting of three components: statistical shape modeling, spline-based surface estimation, and surface-constrained non-rigid image registration. We surveyed different surface registration algorithms and evaluated their performance on real patient data. The shape models of the pelvic organs were built using training data. For the image registration, the input surface is a combination of the currently observed surface and the one predicted by the shape model. 
This hybrid surface was shown to be more accurate, and the image registration constrained by it therefore produced smaller registration errors. Experiments were performed using both simulated data and real clinical data. Results show that the proposed method achieves a satisfactory improvement.
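One simple way to realize the "hybrid surface" idea of combining an inaccurate observed segmentation with a shape-model prediction is a pointwise precision-weighted average. This is a hedged sketch of the general concept, not the algorithm from the thesis; the noise variances are assumed known, and the 1-D "surface" is invented.

```python
def hybrid_surface(observed, predicted, var_obs, var_model):
    """Precision-weighted blend of an observed surface and a shape-model
    prediction, point by point. Lower variance -> higher weight."""
    w_o, w_m = 1.0 / var_obs, 1.0 / var_model
    return [(w_o * o + w_m * m) / (w_o + w_m) for o, m in zip(observed, predicted)]

# Toy 1-D "surface" heights: the observation is noisy, the model smoother.
obs  = [10.0, 14.0, 11.0]
pred = [10.0, 12.0, 11.0]
hyb  = hybrid_surface(obs, pred, var_obs=4.0, var_model=1.0)
```

With the model four times more precise than the observation, the blended point at index 1 lands much closer to the model's 12.0 than to the observed 14.0.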
233

Mambu-RAM: a MUD-aided random access MAC for underwater networks

Ehlers, Bryan 01 May 2019 (has links)
The following report considers a multi-beam directional network in which nodes have linear arrays capable of performing digital beamforming. Digital beamforming has greatly advanced the feasibility of uncoordinated random access in directional networks. Unlike its analog counterpart, digital beamforming alleviates the need for complex beam-scheduling algorithms. A key tradeoff in such systems is between the number of transducers and the network throughput. In many practical scenarios of interest, the addition of many transducers is not possible due to size, weight, and power (SWaP) constraints. In this work, we show that for SWaP-constrained nodes, the addition of a linear multiuser detector (MUD) can further increase the throughput. We also discuss how the number of chips could be varied in an adaptive fashion to achieve the maximum possible throughput. Lastly, considerations of other MUD receivers are introduced, along with possible further improvements such as power control.
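A minimal sketch of one linear MUD, the decorrelating detector for two users: it inverts the 2x2 code-correlation matrix to remove multiple-access interference. The spreading codes and symbols below are invented, and a real underwater receiver would also contend with noise, asynchrony, and channel effects.

```python
def decorrelate2(S, r):
    """Decorrelating detector for two users: a_hat = (S^T S)^{-1} S^T r.
    S is chips x 2 (one spreading code per column), r the received chip vector."""
    # Form S^T S (2x2) and S^T r (2-vector) explicitly.
    g11 = sum(s[0] * s[0] for s in S); g12 = sum(s[0] * s[1] for s in S)
    g22 = sum(s[1] * s[1] for s in S)
    b1 = sum(s[0] * ri for s, ri in zip(S, r))
    b2 = sum(s[1] * ri for s, ri in zip(S, r))
    det = g11 * g22 - g12 * g12
    # Invert the 2x2 correlation matrix by hand.
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

# Two non-orthogonal length-4 codes; user symbols +1 and -1, no noise.
S = [(1, 1), (1, 1), (1, 1), (-1, 1)]
a = (1.0, -1.0)
r = [s[0] * a[0] + s[1] * a[1] for s in S]
a_hat = decorrelate2(S, r)
```

In the noiseless case the decorrelator recovers both symbols exactly despite the code cross-correlation, which is the interference-suppression property that raises throughput for SWaP-constrained nodes.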
234

Autonomous tracking of mussels in a lab environment

Diken, Mehmed Bilal 01 May 2012 (has links)
Rapid global industrialization and the increase in human population over the last century have exponentially increased the demand for fossil fuels for energy (generation of electricity, fuel for transportation, etc.), and an ever-expanding list of fossil-fuel-derived chemicals (synthetic fertilizers, pesticides, polymers, etc.) is being used in all aspects of daily life. All of this has inevitably introduced large amounts of nitrogen into Earth's Nitrogen Cycle. Thus, one of the challenges put forth by the National Academy of Engineering (NAE) has been the management of the Nitrogen Cycle (NAE, 2008). It is anticipated that the effects of human-induced changes to the global Nitrogen Cycle will be profound, and they need to be better studied and understood. Previous investigations of mussels were conducted in artificial conditions, mostly at a small scale where the mussels were restricted and tethered. These studies were conducted to mature and test technologies for the possibility of developing systems to monitor mussels untethered/wirelessly. Wireless communication between mussels introduces electronics that need to be mounted externally on the shells of these animals. The big-picture goal of the entire study is to enable scientists to monitor mussels untethered in their natural environment. To achieve this goal we must first verify the following assumption: "The attachment of sensors and a small 'backpack' containing wireless communicators and sensing electronics will have little or no impact on mussel mobility and survival." In this paper we explore multiple methods and devise a well-functioning system that can autonomously identify, track, and log the movement of mussels in a mesohabitat in order to verify the assumption above. References: NAE. (2008). Grand Challenges for Engineering: Managing the Nitrogen Cycle. Retrieved from National Academy of Engineering: www.engineeringchallenges.org/cms/8996/9132.aspx
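At its simplest, a tracking system like the one described can be sketched as greedy nearest-neighbour association of per-frame centroid detections with existing tracks. This toy version, with invented coordinates and threshold, ignores the identification and logging components of the real system.

```python
import math

def track(prev_positions, detections, max_jump=50.0):
    """Greedy nearest-neighbour assignment of new detections to tracked
    mussels; a mussel keeps its old position if nothing is close enough."""
    updated = []
    free = list(detections)
    for px, py in prev_positions:
        if not free:
            updated.append((px, py))
            continue
        d, best = min((math.hypot(x - px, y - py), (x, y)) for x, y in free)
        if d <= max_jump:
            updated.append(best)
            free.remove(best)          # each detection feeds at most one track
        else:
            updated.append((px, py))   # coast: mussels move slowly
    return updated

tracks = [(0.0, 0.0), (100.0, 0.0)]
tracks = track(tracks, [(2.0, 1.0), (98.0, 3.0)])
```

Because mussels move slowly relative to the frame rate, a small `max_jump` threshold is usually enough to keep identities from swapping between neighbouring animals.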
235

Globally optimal image segmentation incorporating region, shape prior and context information

Song, Qi 01 May 2012 (has links)
Accurate image segmentation is a challenging problem in the presence of weak boundary evidence, large object deformation, and serious mutual influence between multiple objects. In this thesis, we propose novel approaches to multi-object segmentation, which incorporate region, shape, and context prior information to help overcome the stated challenges. The methods are based on a 3-D graph-theoretic framework. The main idea is to formulate the image segmentation problem as a discrete energy minimization problem. The prior region, shape, and context information is incorporated by adding additional terms to our energy function, which are enforced using an arc-weighted graph representation. In particular, for optimal surface segmentation with region information, a ratio-form energy is employed, which contains both a boundary term and a regional term. To incorporate the shape and context prior information for multi-surface segmentation, additional shape-prior and context-prior terms are added, which penalize local shape change and local context change with respect to the prior shape model and the prior context model. We also propose a novel approach for the segmentation of terrain-like surfaces and regions with arbitrary topology. The context information is encoded by adding an additional context term to the energy. Finally, a co-segmentation framework is proposed for tumor segmentation in PET-CT images, which makes use of the information from both modalities. The globally optimal solution for the segmentation of multiple objects can be obtained by computing a single maximum flow in low-order polynomial time. The proposed method was validated on a variety of applications, including aorta segmentation in MR images, intraretinal layer segmentation of OCT images, bladder-prostate segmentation in CT images, image resizing, robust delineation of pulmonary tumors in MVCBCT images, and co-segmentation of tumors in PET-CT images. 
The results demonstrated the applicability of the proposed approaches.
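The terrain-like optimal surface idea can be illustrated in one dimension: pick one boundary position per column so as to minimize the total boundary cost, subject to a hard smoothness constraint between adjacent columns. The thesis solves the 3-D, multi-surface version globally via a single maximum flow; this dynamic-programming miniature with an invented cost table only conveys the flavor of the energy being minimized.

```python
def optimal_surface(cost, max_shift=1):
    """1-D analogue of optimal terrain-like surface search: choose one
    height per column minimizing the summed boundary cost, with adjacent
    columns differing by at most `max_shift` (a hard smoothness prior)."""
    n_cols, n_rows = len(cost), len(cost[0])
    INF = float("inf")
    dp = [list(cost[0])] + [[INF] * n_rows for _ in range(n_cols - 1)]
    back = [[0] * n_rows for _ in range(n_cols)]
    for c in range(1, n_cols):
        for h in range(n_rows):
            for ph in range(max(0, h - max_shift), min(n_rows, h + max_shift + 1)):
                cand = dp[c - 1][ph] + cost[c][h]
                if cand < dp[c][h]:
                    dp[c][h], back[c][h] = cand, ph
    h = min(range(n_rows), key=lambda r: dp[-1][r])
    surface = [h]
    for c in range(n_cols - 1, 0, -1):   # backtrack the optimal heights
        h = back[c][h]
        surface.append(h)
    return surface[::-1]

# cost[column][height]: low cost marks likely boundary positions.
cost = [[5, 1, 5], [5, 5, 1], [5, 1, 5]]
surf = optimal_surface(cost, max_shift=1)
```

The recovered surface follows the low-cost entries while respecting the one-step smoothness constraint, mirroring how the graph construction encodes smoothness with arc weights.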
236

Optimum sensor placement for source localization and monitoring from received signal strength

Ibeawuchi, Stella-Rita Chioma 01 December 2010 (has links)
The problem of source localization has become increasingly important in recent years. In source localization, we are interested in estimating the location of a source using various forms of relative position information. This research considers source localization using relative position information provided by Received Signal Strength (RSS) values under the assumption of log-normal shadowing. We investigate an important aspect of source localization, namely, that of optimally placing sensors. Two specific issues are investigated. The first is source monitoring: one must place sensors around a localized source in an optimal fashion, subject to the constraint that the sensors are at least a certain distance from the source. The second is sensor placement for source localization. In this problem, we assume that the source is uniformly distributed in a circular region, and the sensors must be placed in the complement of a larger concentric circle so as to optimally localize the source. The monitoring problem is considered in N dimensions; the localization problem in two dimensions. The technical problem becomes one of investigating the underlying Fisher Information Matrix (FIM) for optimal monitoring, and its expectation for optimal localization. The underlying problem is then to place sensors so as to maximize the determinant or the minimum eigenvalue of the FIM (or its expectation), or to minimize the trace of the inverse of the FIM (or its expectation).
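The FIM computation at the heart of the placement problem can be sketched for the 2-D RSS model: under log-normal shadowing the mean received power is P_i = P_0 - 10·γ·log10(d_i), and the FIM sums the outer products of the gradients of this mean with respect to the source position. The geometry below (one spread and one clustered placement) is invented to show that spreading the sensors around the source enlarges the FIM determinant.

```python
import math

def rss_fim(source, sensors, gamma=2.0, sigma=1.0):
    """2-D Fisher information matrix for RSS source localization under
    log-normal shadowing: P_i = P0 - 10*gamma*log10(d_i) + noise."""
    k = 10.0 * gamma / math.log(10.0)
    f11 = f12 = f22 = 0.0
    for sx, sy in sensors:
        dx, dy = source[0] - sx, source[1] - sy
        d2 = dx * dx + dy * dy
        gx, gy = -k * dx / d2, -k * dy / d2   # gradient of the mean RSS
        f11 += gx * gx; f12 += gx * gy; f22 += gy * gy
    s2 = sigma * sigma
    return (f11 / s2, f12 / s2, f22 / s2)

def fim_det(F):
    f11, f12, f22 = F
    return f11 * f22 - f12 * f12

src = (0.0, 0.0)
spread    = [(2.0, 0.0), (-1.0, 1.7), (-1.0, -1.7)]   # roughly uniform ring
clustered = [(2.0, 0.0), (2.0, 0.2), (1.8, 0.0)]      # all on one side
```

Clustered sensors give nearly parallel gradients, so the FIM is close to rank-1 and its determinant collapses; this is exactly why the optimal-placement criteria in the abstract reward well-spread geometries.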
237

Structured & learnable regularizers for modeling inverse problems in fast MRI

Biswas, Sampurna 01 December 2018 (has links)
Contemporary Magnetic Resonance imaging technology has enabled structural, anatomical, and functional assessment of various organ systems by allowing in-vivo visualization of those organs in terms of the biophysical parameters of the tissue. MRI still suffers, however, from slow image acquisition. The prolonged scan time enforces trade-offs between image quality and image acquisition time, often resulting in low spatial resolution, low signal-to-noise ratio, and the presence of artifacts resulting from patient or physiological motion. Therefore, the inverse problems that arise from MR image reconstruction tend to maximize image quality from minimally acquired signal observations. We study the manipulation of the number of observations, based on knowledge of the underlying image structure. We start by studying an existing two-step acquisition technique that seems to produce high-quality reconstructions of dynamic MR images. We consider the recovery of a matrix X, which is simultaneously low rank and joint sparse, from few measurements of its columns using a two-step algorithm. Here, X captures a dynamic cardiac time series. Our main contribution is to provide sufficient conditions on the measurement matrices that guarantee the recovery of such a matrix using a particular two-step algorithm. We illustrate the impact of the sampling pattern on reconstruction quality using breath-held cardiac cine MRI and cardiac perfusion MRI data, while the utility of the algorithm to accelerate the acquisition is demonstrated on MR parameter mapping. In the next study, another structure is explored, where the underlying static image is assumed to be piecewise constant. Here, we consider the recovery of a continuous-domain piecewise-constant image from its non-uniform Fourier samples using a convex matrix completion algorithm. We assume the discontinuities/edges of the image are localized to the zero level set of a bandlimited function. 
The proposed algorithm reformulates the recovery of the unknown Fourier coefficients as a structured low-rank matrix completion problem. We show that exact recovery is possible with high probability when the edge set of the image satisfies an incoherency property dependent on the geometry of the edge-set curve. In the previous two studies, the acquisition time burden is manipulated by exploiting the inherent structure of the image to be recovered. We call this the self-learning strategy, where the structure is learned from the current set of measured data. Finally, we consider exemplar learning, where population-generic features (structures) are learned from stored examples or training data. We introduce a novel framework to combine deep-learned priors with complementary image regularization penalties to reconstruct free-breathing and ungated cardiac MRI data from highly undersampled multi-channel measurements. This work showed the benefit of combining the deep-learned prior, which exploits local and population-generalizable redundancies, together with self-learned priors, which capitalize on patient-specific information including cardiac and respiratory patterns; this combination is facilitated by the proposed framework.
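The two-step recovery of a low-rank dynamic series can be sketched as follows: a few fully sampled "navigator" columns fix the common column subspace, after which each remaining column is recovered from only a few of its entries by a small least-squares fit. The sizes, rank, and sampling counts are invented, and the joint-sparsity side of the model is omitted for brevity; this is an idealized, noiseless illustration, not the thesis algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a rank-2, 20-"pixel" x 12-"frame" toy dynamic series X = U V.
U = rng.standard_normal((20, 2))
V = rng.standard_normal((2, 12))
X = U @ V

# Step 1: fully sample three "navigator" columns; their top singular vectors
# estimate the common column subspace (rank assumed known here).
left, _, _ = np.linalg.svd(X[:, :3], full_matrices=False)
Q = left[:, :2]

# Step 2: each remaining column is observed at only 6 of its 20 entries, and
# its subspace coefficients are found by a small least-squares fit.
X_hat = X.copy()
for j in range(3, 12):
    rows = rng.choice(20, size=6, replace=False)
    coeffs, *_ = np.linalg.lstsq(Q[rows, :], X[rows, j], rcond=None)
    X_hat[:, j] = Q @ coeffs

err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

In this noiseless setting the fit is exact, which is the intuition behind the sufficient conditions on the measurement matrices: enough generic samples per column to pin down its coordinates in the shared subspace.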
238

Modeling and Control of Magnetic Gear Dynamics in a Wind Turbine Drivetrain

Vournas, Danielle 27 September 2019 (has links)
This thesis looks at the modeling and simulation of linear and nonlinear magnetic gear dynamics in a wind turbine drivetrain. The objective is to lay the groundwork for the analysis, modeling, and optimization of control structures focused on pole-slip prevention. A classical mechanical two-mass torsion-spring model is used as the basis for developing the dynamic system equations and Simulink models. The wind turbine torque input to the low-speed rotor is modeled as a disturbance input, the generator torque is modeled as a controlled input, and the high-speed rotor speed is the only measured output. The nonlinear dynamics are linearized, and a state-space model is built that uses both gear rotor speeds and the load angle as states. A state-feedback compensation controller is designed using pole-placement techniques, and the sensitivity of the selected poles is tested across the full range of rated load angles. A full-order observer is combined with state-feedback compensation, and the performance is evaluated with and without load-angle speed regulation and integral action. A reduced-order observer is designed with load torque estimation as an additional "metastate", which is then used to calculate the load angle, providing a better estimate than what the observer directly provides. Finally, the accuracy of the reduced-order observer is tested using real torque data from a wind turbine.
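The full-order observer idea can be sketched on a toy discrete-time two-state system (the matrices below are invented, not the linearized magnetic-gear model): the observer copies the plant and corrects itself with the measured output, so the estimation error decays at a rate set by the eigenvalues of A - LC.

```python
def simulate_observer(steps=60):
    """Full-order (Luenberger) observer on a toy discrete-time 2-state
    system x+ = A x, measuring only the first state: y = x[0].
    Observer: xh+ = A xh + L (y - xh[0])."""
    A = ((1.0, 0.1), (0.0, 0.9))
    L = (1.0, 0.0)                 # places error eigenvalues at 0 and 0.9
    x, xh = [1.0, -1.0], [0.0, 0.0]
    errs = []
    for _ in range(steps):
        innov = x[0] - xh[0]       # output innovation y - C xh
        x  = [A[0][0]*x[0] + A[0][1]*x[1],
              A[1][0]*x[0] + A[1][1]*x[1]]
        xh = [A[0][0]*xh[0] + A[0][1]*xh[1] + L[0]*innov,
              A[1][0]*xh[0] + A[1][1]*xh[1] + L[1]*innov]
        errs.append(abs(x[0] - xh[0]) + abs(x[1] - xh[1]))
    return errs

errs = simulate_observer()
```

The error dynamics are e+ = (A - LC) e; with the gain above the eigenvalues are 0 and 0.9, so the estimate of the unmeasured second state converges even though only the first state is measured, which is the same principle the thesis uses to reconstruct the load angle.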
239

Sensory Relevance Models

Woods, Walt 08 August 2019 (has links)
This dissertation concerns methods for improving the reliability and quality of explanations for decisions based on Neural Networks (NNs). NNs are increasingly part of state-of-the-art solutions for a broad range of fields, including biomedical, logistics, user-recommendation engines, defense, and self-driving vehicles. While NNs form the backbone of these solutions, they are often viewed as "black box" solutions, meaning the only output offered is a final decision, with no insight into how or why that particular decision was made. For high-stakes fields, such as biomedical, where lives are at risk, it is often more important to be able to explain a decision such that the underlying assumptions might be verified. Prior methods of explaining NN decisions from images have been proposed, and fall into one of two categories: post-hoc analyses and attention networks. Post-hoc analyses, such as Grad-CAM, look at gradient information within the network to identify which regions of an image had the greatest effect on the final decision. Attention networks consist of structural changes to the network, which produce a mask through which the image is filtered before subsequent processing. The result is a heatmap highlighting regions which have the greatest effect on the final decision. This dissertation identifies two flaws with these approaches. First, these methods of explanation change wildly when the network is exposed to adversarial examples. When an imperceptible change to the input results in a significant change in the explanation, how reliable is the explanation? Second, these methods all produce a heatmap, which arguably does not have the definition required to truly understand which features are important. An algorithm that can draw a circle around a cat does not necessarily know that it is looking at a cat; it only recognizes the existence of a salient object. 
To address these flaws, this dissertation explores Sensory Relevance Models (SRMs), methods of explanation which utilize the full richness of the sensory domain. Initially motivated by a study of sparsity, several incarnations of SRMs were evaluated for their ability to resist adversarial examples and provide a more informative explanation than a heatmap. The first SRM formulation resulted from a study of network bisections, where NNs were split into a pre-processing step (the SRM) and a classifying step. The result of the pre-processing step would be made very sparse before being passed to the classifier. Visualizing the sparse, intermediate computation would potentially have yielded a heatmap-like explanation, with the potential for more textured explanations being formed from the myriad features comprising each spatial location of the SRM's output. Two methods of achieving network bisection using auxiliary losses were devised, and both were successful in generating a sparse, intermediate representation which could be interpreted by a human observer. However, even a network bisection SRM which used only 26% of the input image did not result in decreased adversarial attack magnitude. Without solving the adversarial attack issue, any explanation based on the network bisection SRM would be as fragile as previously proposed methods. That led to the theory of Adversarial Explanations (AE). Rather than trying to produce an explanation in spite of adversarial examples, it made sense to work with them. For images, adversarial examples result in full-color, high-definition output. If they could be leveraged for explanations, they would solve both of the flaws identified with previous explanation techniques. Through new mathematical techniques, such as a stochastic Lipschitz constraint, and new mechanisms for NNs, such as the Half-Huber Rectified Linear Unit, AE proved very successful. 
On ILSVRC 2012, a dataset of 1,281,167 images of size 224x224 comprising 1,000 different classes, the techniques for AE resulted in NNs 2.4x more resistant to adversarial attacks than the previous state-of-the-art, while retaining the same accuracy on clean data and using a smaller network. Explanations generated using AE possessed very discernible features, with a more obvious interpretation when compared to heatmap-based explanations. As AE works with the non-linearities of NNs rather than against them, the explanations are relevant for a much larger neighborhood of inputs. Furthermore, it was demonstrated that the new adversarial examples produced by AE could be annotated and fed back into the training process, yielding further improved adversarial resistance through a Human-In-The-Loop pipeline. Altogether, this dissertation demonstrates significant advancements in the field of machine learning, particularly for explaining the decisions of NNs. At the time of publication, AE is an unparalleled technique, producing more reliable, higher-quality explanations for image classification decisions than were previously available. The modifications presented also demonstrate ways in which adversarial attacks might be mitigated, improving the security of NNs. It is my hope that this work provides a basis for future work in the realms of both adversarial resistance and explainable NNs, making algorithms more reliable for industry fields where accountability matters, such as biomedical or autonomous vehicles.
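The basic adversarial-example mechanism that AE builds on can be shown on a toy logistic model: a small perturbation along the sign of the input gradient flips a confident decision. This illustrates only the textbook fast-gradient-sign step with invented weights, not the dissertation's stochastic Lipschitz constraint or Half-Huber units.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x, b=0.0):
    """Probability of the positive class under a linear logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, x, eps):
    """Fast-gradient-sign perturbation pushing the score down: for a linear
    model the gradient of the score w.r.t. the input is simply w."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [2.0, -1.0, 0.5]           # invented model weights
x = [1.0, 0.0, 1.0]            # invented input, confidently positive
p_clean = predict(w, x)
x_adv = fgsm(w, x, eps=1.5)
p_adv = predict(w, x_adv)
```

For images, the perturbation `x_adv - x` lives in the input space itself, full-color and full-resolution, which is exactly the property AE exploits to obtain explanations richer than a heatmap.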
240

New Approaches for Memristive Logic Computations

Aljafar, Muayad Jaafar 06 June 2018 (has links)
Over the past five decades, exponential advances in device integration have been observed in microelectronics for memory and computation applications. These advances are closely related to miniaturization in integrated circuit technologies. However, this miniaturization is reaching its physical limit (i.e., the end of Moore's Law), and it is causing a dramatic problem of heat dissipation in integrated circuits. Additionally, approaching the physical limits of semiconductor devices in the fabrication process increases the delay of moving data between computing and memory units, thus decreasing performance. The market's requirements for faster computers with lower power consumption can be addressed by new emerging technologies such as memristors. Memristors are non-volatile, nanoscale devices that can be used to build memory arrays with very high density (extending Moore's Law). Memristors can also be used to perform stateful logic operations, in which the same devices are used for both logic and memory, enabling in-memory logic. In other words, memristor-based stateful logic enables a new computing paradigm that combines calculation and memory units (versus the von Neumann architecture, which separates them). This reduces the delays between processor and memory by eliminating the redundant reloading of reusable values. In addition, memristors consume little power and can hence decrease the large power dissipation of silicon chips hitting their size limit. The primary focus of this research is to develop circuit implementations for logic computations based on memristors. These implementations significantly improve the performance and decrease the power of digital circuits. This dissertation demonstrates in-memory computing using novel memristive logic gates, which we call volistors (voltage-resistor gates). 
Volistors capitalize on rectifying memristors, i.e., a type of memristor with diode-like behavior, and use voltage at the input and resistance at the output. In addition, programmable diode gates, another type of logic gate implemented with rectifying memristors, are proposed. In programmable diode gates, memristors are used only as switches (unlike volistor gates, which utilize both the memory and switching characteristics of the memristors). The programmable diode gates can be used with CMOS gates to increase the logic density. As an example, a circuit implementation for calculating logic functions in generalized ESOP (Exclusive-OR-Sum-of-Products) form and multilevel XOR networks is described. As opposed to stateful logic gates, a combination of both proposed logic styles decreases the power and improves the performance of digital circuits realizing two-level logic functions in Sum-of-Products or Product-of-Sums form. This dissertation also proposes a general 3-dimensional circuit architecture for in-memory computing. This circuit consists of a number of stacked crossbar arrays, all of which can be used simultaneously for logic computing. These arrays communicate through CMOS peripheral circuits.
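For contrast with the proposed volistors, the sketch below behaviorally models the best-known stateful memristive primitive, IMPLY (0 = high resistance, 1 = low resistance), and composes NAND from two IMPLY steps plus a cleared work memristor. This is the standard construction from the stateful-logic literature, not the volistor or programmable-diode gates proposed in this dissertation.

```python
def imply(p, q):
    """Behavioral model of stateful IMPLY: p IMPLY q = (NOT p) OR q,
    written back into q's memristor (states: 0 = HRS, 1 = LRS)."""
    return int((not p) or q)

def nand(p, q):
    """NAND from stateful logic: clear a work memristor s, then apply
    s := p IMPLY s (= NOT p) and out := q IMPLY s (= NAND(p, q))."""
    s = 0             # initialization (RESET) of the work memristor
    s = imply(p, s)
    return imply(q, s)

# Exhaustive truth table over both memristor input states.
table = {(p, q): nand(p, q) for p in (0, 1) for q in (0, 1)}
```

Because NAND is functionally complete, this two-step sequence is enough to compute any Boolean function inside the memory array, the same in-memory computing property that motivates the logic styles developed here.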
