221.
Enhancing Program Soft Error Resilience through Algorithmic Approaches
Chen, Sui (03 November 2016)
The rising count and shrinking feature size of transistors within modern computers are making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run.
The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults, (ii) the appropriate resilience technique is applied to each code region, and (iii) this understanding is obtained in an efficient manner.
This thesis presents two tools: FaultTelescope, which helps application developers view routine and application vulnerability to soft errors, and ErrorSight, which helps perform modular fault characteristics analysis for more complex applications. This thesis also illustrates how these tools can be used in the context of representative applications and kernels. In addition to providing actionable insights into application behavior, the tools automatically select the number of fault injection experiments required to efficiently generate error profiles of an application, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
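The abstract does not spell out how the experiment count is chosen; a minimal sketch of one statistically grounded stopping rule (a Wilson confidence interval on the observed error rate, with a hypothetical inject_fault callback and width threshold, not the tools' actual implementation) might look like this:

```python
import math

def wilson_interval(errors, trials, z=1.96):
    """95% Wilson score interval for an observed error probability."""
    p = errors / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials)) / denom
    return center - half, center + half

def profile_region(inject_fault, max_width=0.05, max_trials=10000):
    """Run fault injections until the error-rate estimate is tight enough.

    inject_fault() is a hypothetical callback returning 1 when the injected
    fault corrupts the program's output and 0 otherwise.
    """
    errors, trials = 0, 0
    lo, hi = 0.0, 1.0
    while trials < max_trials and hi - lo >= max_width:
        errors += inject_fault()
        trials += 1
        lo, hi = wilson_interval(errors, trials)
    return errors / trials, (lo, hi), trials
```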
222.
Automatic Detection, Segmentation and Tracking of Vehicles in Wide-Area Aerial Imagery
Gao, Xin (January 2016)
Object detection is crucial for many research areas in computer vision, image analysis and pattern recognition. Since vehicles in wide-area images appear with variable shape and size, illumination changes, partial occlusion, and background clutter, automatic detection has often been a challenging task. We present a brief study of various techniques for object detection and image segmentation, and contribute a variety of algorithms for detecting vehicles in traffic lanes from two low-resolution aerial video datasets. We present twelve detection algorithms adapted from previously published work, and we propose two post-processing schemes, in contrast to four existing schemes, to reduce false detections. We present the results of several experiments for quantitative evaluation by combining detection algorithms before and after using a post-processing scheme. Manual segmentation of each vehicle in the cropped frames serves as the ground truth. We classify several types of detections by comparing the binary detection output to the ground truth in each frame, and use two sets of evaluation metrics to measure the performance. A pixel classification scheme is also derived for spatial post-processing applied to seven detection algorithms, among which two algorithms are selected for sensitivity analysis with respect to a range of overlap ratios. Six tracking algorithms are selected for analysis of overall accuracy under four different scenarios on sample frames in the Tucson dataset.
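The dissertation's exact metrics are not reproduced here; as an illustration, pixel-level comparison of a binary detection mask against manually segmented ground truth is commonly computed along these lines:

```python
import numpy as np

def pixel_metrics(detection, ground_truth):
    """Precision, recall, and F1 from binary detection and ground-truth masks."""
    det = detection.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.sum(det & gt)       # detected pixels that are true vehicle pixels
    fp = np.sum(det & ~gt)      # false detections
    fn = np.sum(~det & gt)      # missed vehicle pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def overlap_ratio(detection, ground_truth):
    """Intersection-over-union between a detection and the ground truth."""
    det = detection.astype(bool)
    gt = ground_truth.astype(bool)
    union = np.sum(det | gt)
    return np.sum(det & gt) / union if union else 0.0
```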
223.
COMPOSITE KERNEL FEATURE ANALYSIS FOR CANCER CLASSIFICATION
Myla, Sindhu (02 April 2010)
Computed tomographic (CT) colonography, or virtual colonoscopy, is a promising technique for screening colorectal cancers by use of CT scans of the colon. Current CT technology allows a single image set of the colon to be acquired in 10-20 seconds, which translates into an easier, more comfortable examination than is available with other screening tests. Currently, however, interpretation of an entire CT colonography examination is time-consuming, and the reader performance for polyp detection varies substantially. To overcome these difficulties while providing a high detection performance of polyps, researchers are developing computer-aided detection (CAD) schemes that automatically detect suspicious lesions in CT colonography images. The overall goal of this study is to achieve a high performance in the detection of polyps on CT colonographic images by effectively incorporating appearance-based object recognition approaches into a model-based CAD scheme. Our studies are focused on developing a fast kernel feature analysis that can efficiently differentiate polyps from false positives and thus improve the detection performance of polyps. We have developed a novel method of selecting kernel functions that are appropriate for the given data set, and then use their linear combination in the construction of the kernel Gram matrix, which can then be used for efficient reconstruction of the feature space. The main contribution of this work lies in providing a composite kernel matrix that involves an appearance-based approach to improve kernel feature analysis for the classification of texture-based features. We evaluated our proposed kernel feature analysis on texture-based features that were extracted from the polyp candidates generated by our shape-based CAD scheme.
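The thesis's kernel-selection procedure itself is not shown here; the sketch below only illustrates the underlying idea that a weighted combination of valid kernels is itself a valid (positive semidefinite) kernel, followed by a standard kernel-PCA-style feature extraction. Kernel choices and weights are placeholders:

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def poly_kernel(X, degree=2, c=1.0):
    return (X @ X.T + c) ** degree

def composite_gram(X, weights=(0.7, 0.3)):
    """Weighted combination of base kernels; a convex combination of
    valid kernels is itself a valid kernel."""
    return weights[0] * rbf_kernel(X) + weights[1] * poly_kernel(X)

def kernel_features(K, n_components=5):
    """Leading eigenvectors of the double-centered Gram matrix give the
    kernel feature-space coordinates (kernel-PCA style)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```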
224.
Fiber Bragg Grating (FBG) Based Chemical Sensor
Sethuraman, Gopakumar (01 August 2008)
In this work, reagentless fiber optic-based chemical sensors for water quality testing were fabricated by coating fiber Bragg gratings with the glassy polymer cellulose acetate. With this polymeric matrix capable of localizing or concentrating chemical constituents within its structure, immersion of the coated grating in various chemical solutions causes the rigid polymer to expand and mechanically strain the glass fiber. The corresponding changes in the periodicity of the grating subsequently result in altered Bragg-reflected responses. A high-resolution tunable fiber ring laser interrogator is used to obtain room temperature reflectance spectrograms from two fiber gratings at 1550 nm and 1540 nm wavelengths. Rapidly swept measurements of the full spectral shapes yield real-time chemical detection and identification. With deionized water as a reference, wavelength shifts in the reflectivity transition edge from –82 pm to +43 pm and changes in response bandwidth from –27 pm to +42 pm are used to identify uniquely a diverse selection of chemical analytes.
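For context, the standard FBG relations (textbook physics, not specific to this thesis) connect the reported picometer-scale wavelength shifts to strain in the fiber; a quick sketch:

```python
def bragg_wavelength(n_eff, period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * Lambda (grating period)."""
    return 2.0 * n_eff * period_nm

def strain_from_shift(shift_nm, lambda_b_nm, p_e=0.22):
    """Invert the first-order strain response d(lambda)/lambda = (1 - p_e) * strain.
    p_e ~ 0.22 is a typical effective photoelastic coefficient for silica fiber."""
    return shift_nm / (lambda_b_nm * (1.0 - p_e))

# A -82 pm edge shift at 1550 nm corresponds to roughly -68 microstrain:
print(strain_from_shift(-0.082, 1550.0))   # ~ -6.8e-5
```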
225.
Analytical Model for Relating FPGA Logic and Routing Architecture Parameters to Post-Routing Wirelength
Soni, Arpit (January 2016)
Analytical models have been introduced for rapidly evaluating the impact of architectural design choices on FPGA performance through model-based trend analysis. Modeling wirelength is a critical problem since channel width can be expressed as a function of total net length in a design, which is an indicator of routability for an FPGA. Furthermore, performance indicators, such as critical path delay and power consumption, are functions of net capacitance, which in turn is a function of net length. Analytical models to date mainly originate from extracting circuit characteristics from the post-placement stage of the CAD flow, which instills a strong binding between the model and the optimization objective of the CAD flow. Furthermore, these models primarily take only logic architecture features into account. In this study, we present a post-routing wirelength model that takes into account both logic and routing architectural parameters, and that does not rely on circuit characteristics extracted from any stage of the FPGA CAD flow. We apply a methodological approach to model parameter tuning, as opposed to relying on a curve-fitting method, and show that our model accurately captures the experimental trends in wirelength with respect to changes in logic and routing architecture parameters individually. We demonstrate that the model accuracy is not sacrificed even if the performance objective of the CAD flow changes or the algorithms used by individual stages of the CAD flow (technology mapping, clustering, and routing) change. We swap the training and validation benchmarks, and show that our model development approach is robust and the model accuracy is not sacrificed. We evaluate our model on a new set of benchmarks that are not part of the training and validation benchmarks, and demonstrate its superiority over the state of the art. Based on the swapping experiments, we show that the model parameters take values in a fixed range, and we verify that this range holds its validity even for benchmarks that are not part of the training and validation benchmarks. We finally show that our model maintains a good estimation of the empirical trends even when very large values are used for the logic block architecture parameter.
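The model itself is not reproduced here; the swap-based robustness check described above can be summarized schematically (tune and evaluate are hypothetical callbacks standing in for the thesis's tuning and error-measurement procedures):

```python
def swap_validation(set_a, set_b, tune, evaluate):
    """Tune model parameters on one benchmark set and validate on the other,
    then swap roles; similar parameter values and errors in both directions
    indicate the model is not overfit to either benchmark set."""
    params_ab = tune(set_a)
    params_ba = tune(set_b)
    return {
        "a->b": (params_ab, evaluate(params_ab, set_b)),
        "b->a": (params_ba, evaluate(params_ba, set_a)),
    }
```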
226.
Knowledge Enhanced Compressive Measurement Design: Detection and Estimation Tasks
Huang, James (January 2016)
Compressive imaging exploits the inherent sparsity/compressibility of natural scenes to reduce the number of measurements required for reliable reconstruction/recovery. In many applications, however, additional scene prior information beyond sparsity (such as natural scene statistics) and task prior information may also be available. While current efforts on compressive measurement design attempt to exploit such scene and task priors in a heuristic/ad-hoc manner, in this dissertation we develop a principled information-theoretic approach to this design problem that is able to fully exploit a probabilistic description (i.e. scene prior) of relevant scenes for a given task, along with the appropriate physical design constraints (e.g. photon count/exposure time), towards maximizing the system performance. We apply this information-theoretic framework to optimize compressive measurement designs, in EO/IR and X-ray spectral bands, for various detection/classification and estimation tasks. More specifically, we consider image reconstruction and target detection/classification tasks, and for each task we develop an information-optimal design framework for both static and adaptive measurements within parallel and sequential measurement architectures. For the image reconstruction task, we show that the information-optimal static compressive measurement design is able to achieve significantly better compression ratios (and also reduced detector count, readout power/bandwidth) relative to various state-of-the-art compressive designs in the literature. Moreover, within a sequential measurement architecture, our information-optimal adaptive design is able to successfully learn scene information online, i.e. from past measurements, and adapt the next measurement (in a greedy sense) towards improving the measurement information efficiency, thereby providing additional performance gains beyond the corresponding static measurement design. We also develop a non-greedy adaptive measurement design framework for a face recognition task that is able to surpass the greedy adaptive design performance by (strategically) maximizing the long-term cumulative system performance over all measurements. Such a non-greedy adaptive design is also able to predict the optimal number of measurements for a fixed system measurement resource (e.g. photon count). Finally, we develop a computationally scalable information-theoretic design framework for an X-ray threat detection task and demonstrate that information-optimized measurements can achieve a 99% threat detection threshold using 4x fewer exposures compared to a conventional system. Equivalently, the false alarm rate of the optimized measurements is reduced by nearly an order of magnitude relative to the conventional measurement design.
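As a concrete (and much simplified) instance of adaptive information-driven design, the sketch below greedily selects unit-norm projection vectors for a linear-Gaussian scene model, where the per-measurement mutual information has the closed form 0.5*log(1 + a'Sigma a / sigma2). The dissertation's framework handles richer priors, physical constraints, and non-greedy planning; this is only an illustration of the greedy case:

```python
import numpy as np

def greedy_measurements(Sigma, sigma2, num_meas):
    """Sequentially pick projections a maximizing 0.5*log(1 + a'Sa/sigma2)
    for x ~ N(0, Sigma) observed through y = a'x + noise, noise var sigma2."""
    A, gains = [], []
    S = Sigma.copy()
    for _ in range(num_meas):
        vals, vecs = np.linalg.eigh(S)
        a = vecs[:, -1]                 # top eigenvector maximizes a'Sa
        gains.append(0.5 * np.log(1 + vals[-1] / sigma2))
        Sa = S @ a
        S = S - np.outer(Sa, Sa) / (a @ Sa + sigma2)   # posterior covariance
        A.append(a)
    return np.array(A), gains
```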
227.
Improved Subset Generation For The MU-Decoder
Agarwal, Utsav (21 February 2017)
The MU-Decoder is a hardware subset generator that finds use in partial reconfiguration of FPGAs and in numerous other applications. It is capable of generating a set S of subsets of a large set Z_n with n elements. If the subsets in S satisfy the isomorphic totally-ordered property, then the MU-Decoder works very efficiently to produce a set of u subsets in O(log n) time and Θ(n√u log n) gate cost. In contrast, a naive approach requires Θ(un) gate cost. We show that this low cost for the MU-Decoder can be achieved without the isomorphism constraint, thereby allowing S to include a much wider range of subsets. We also show that if additional constraints can be placed on the relative sizes of the subsets in S, then u subsets can be generated with Θ(n√u) cost. This uses a new hardware enhancement proposed in this thesis. Finally, we show that by properly selecting S and by using some elements of traditional methods, a set of Θ(u n^(log log(n/log n))) subsets can be produced with Θ(n√u) cost.
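The hardware details are beyond a short sketch, but the totally-ordered (chain) property itself, and the intuition for why chains are cheap to store, can be illustrated in a few lines. The prefix encoding in the comment is an illustration of that intuition, not the MU-Decoder's actual circuit:

```python
def is_totally_ordered(subsets):
    """True when the family forms a chain under inclusion, i.e. for any
    two subsets one contains the other."""
    chain = sorted((frozenset(s) for s in subsets), key=len)
    return all(a <= b for a, b in zip(chain, chain[1:]))

# Why chains are compact: a chain over Z_n can be stored as one permutation
# of the n elements plus one threshold per subset, each subset being a
# prefix of the permutation. For example, ordering [3, 1, 4, 0, 2] with
# thresholds [1, 3, 5] encodes {3}, {1, 3, 4}, {0, 1, 2, 3, 4}.
print(is_totally_ordered([{3}, {3, 1, 4}, {0, 1, 2, 3, 4}]))   # True
print(is_totally_ordered([{1, 2}, {2, 3}]))                    # False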
228.
Design and verification of physical layer architecture for a 1-wire sensor communication bus
Peiffer, Benjamin Michael (01 May 2011)
Within this thesis, the problem of device and sensor communication is discussed. Recent sensor network implementations and deployments at the University of Iowa IIHR-Hydroscience and Engineering (IIHR) and the Iowa Flood Center (IFC) have motivated the need for an efficient, shared, 1-wire communication scheme. Possible solutions are explored and briefly discussed. The focus of this thesis is the development of a particular solution for a 1-wire physical layer utilizing the concept of Code Division Multiple Access (CDMA).
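As a toy illustration of the CDMA idea on an idealized additive shared wire (not the thesis's actual physical layer), orthogonal Walsh codes let two sensors transmit simultaneously and still be separated by correlation:

```python
import numpy as np

def walsh_codes(order):
    """Hadamard/Walsh spreading codes of length 2**order (rows of H)."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

def spread(bits, code):
    """Map bits {0,1} to symbols {-1,+1} and spread each by the chip code."""
    symbols = 2 * np.array(bits) - 1
    return np.concatenate([b * code for b in symbols])

def despread(signal, code):
    """Correlate the shared-bus signal against one sensor's code."""
    chunks = signal.reshape(-1, len(code))
    return (chunks @ code > 0).astype(int)

# Two sensors share the bus by superposition of their chip streams:
H = walsh_codes(3)                 # 8-chip codes
s1 = spread([1, 0, 1], H[1])
s2 = spread([0, 1, 1], H[2])
bus = s1 + s2                      # idealized additive channel
assert list(despread(bus, H[1])) == [1, 0, 1]
assert list(despread(bus, H[2])) == [0, 1, 1]
```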
229.
Nonlinear optical response in graphene
Jin, Xin (01 July 2015)
Graphene, a newly discovered carbon-based material, is predicted to have a strong nonlinear electromagnetic response over a broad spectral range. Its unique carrier transport and terahertz properties have gained ample attention. Recently, it has been demonstrated that graphene has an extraordinarily high nonlinear response, with third-order susceptibility χ^(3) ∼ 10^-7 esu, which is 10^5 times higher than that of silicon.
In this thesis, we examine the nonlinear response of electron dynamics in graphene using newly derived optical Bloch equations. The thesis is divided into three sections. In the first part, we provide an overview of the derivation of the extended optical Bloch equations from the time-dependent Dirac equations. Then, we use these derived optical Bloch equations to demonstrate the light-field interaction in graphene and the generation of photon echo signals. Next, we describe the nonlinear response in graphene in terms of the current density, and we show that the enhanced interband dynamics reduces the nonlinearity in the electric current. Finally, we illustrate that the strong interplay between the interband and intraband dynamics leads to large harmonic generation, where harmonics of up to 13th order are generated.
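The Bloch-equation simulation itself is not reproduced here; the sketch below only shows the post-processing step of reading harmonic orders out of a time-domain current density, using a clipped sine as a stand-in odd nonlinearity (an odd response generates odd harmonics, consistent with the up-to-13th-order generation described):

```python
import numpy as np

def harmonic_spectrum(j_t, dt, omega0):
    """FFT of the current density; magnitude at odd multiples of the drive
    frequency omega0 (in rad per unit time)."""
    n = len(j_t)
    J = np.abs(np.fft.rfft(j_t * np.hanning(n)))      # windowed spectrum
    freqs = 2 * np.pi * np.fft.rfftfreq(n, dt)
    orders = np.arange(1, 15, 2)                      # odd harmonics 1..13
    idx = [np.argmin(np.abs(freqs - k * omega0)) for k in orders]
    return dict(zip(orders.tolist(), J[idx]))

# Toy current: a saturating response to a sine drive as a crude stand-in
# for the interband/intraband nonlinearity described in the abstract.
t = np.linspace(0, 100, 4096)
omega0 = 2 * np.pi * 0.5
j = np.tanh(3 * np.sin(omega0 * t))                   # odd nonlinearity
print(harmonic_spectrum(j, t[1] - t[0], omega0))
```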
230.
Automated image-based estimation of severity and cause of optic disc edema
Agne, Jason (15 December 2017)
Optic disc edema can arise from a variety of possible causes, some benign and some life-threatening. For timely and appropriate medical intervention, or to reduce patient anxiety in the event none is needed, it is critical that the cause and severity of a swollen optic disc be determined as soon as possible.
In this doctoral work, several algorithms are pieced together to determine the cause of optic disc edema. The process of determining the cause of swelling involves the extraction of several features, many of which are relatively new to the field of ophthalmology. Included among these are the shape of Bruch's membrane, found semi-automatically from SD-OCT images; the presence and orientation of folds in the retina, which are also most visible in SD-OCT images; and selected features from 2D fundus images.
One specific cause of optic disc edema, called papilledema, is due to raised intracranial pressure. This, too, has a variety of possible causes, and often urgency (or severity) is rated by the Frisén scale, which is a 0-5 ordinal rating of severity (with 0 being normal). In the event papilledema is found to be the cause of swelling, this doctoral work also seeks to implement a more robust measurement of severity than the Frisén scale. Specifically, the total retinal volume (TRV) of the optic disc has been computed from SD-OCT images in other work. It is believed that the TRV serves as a more reliable means of assessing papilledema severity, as it is a continuous, repeatable measurement that is not subject to observer interpretation. As part of this doctoral work, the TRV is estimated from fundus images, which are faster and cheaper to obtain than SD-OCT images.
Thus, the aims of this thesis consist of finding and quantifying folds in the retina; using folds (and other features) to distinguish between the causes of a swollen optic disc; and, in the event an optic disc is swollen due to papilledema, assessing the severity of the swelling by estimating the TRV from fundus images. While the ultimate goal of this work would be to diagnose a patient with optic disc edema entirely from fundus images, that is beyond the scope of a single thesis; the efforts here build towards it.
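A minimal sketch of what estimating TRV from fundus images could look like as supervised regression, assuming hypothetical feature files and standard scikit-learn tooling (not the thesis's actual pipeline or feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: per-eye feature vectors extracted from fundus images (e.g. disc area,
# vessel obscuration, fold descriptors -- hypothetical names, not the
# thesis's exact features); y: TRV measured from paired SD-OCT scans.
X = np.load("fundus_features.npy")   # placeholder paths
y = np.load("trv_from_oct.npy")

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```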