71 |
Automated Recognition and Classification of Coral Reefs on Seafloor off Kenting area
Tsao, Shih-liang, 01 September 2008 (has links)
A side-scan sonar offers large-scale survey coverage and high-resolution imagery, enabling effective detection and positioning of underwater targets. This research developed an automated recognition and classification system for the analysis, classification, and positioning of sonographs collected off the Kenting area. Major components of the system include the gray level co-occurrence matrix method, Bayesian classification, and cluster analysis.
Classification by the automated recognition and classification system proceeded in two stages. The first stage divided the seafloor into three categories:
(1) Rocky seafloor.
(2) Sandy seafloor.
(3) Acoustic shadow seafloor.
Based on its characteristics, the rocky seafloor was subdivided into five types in the second stage:
(1) Flank reef and small independent reef.
(2) Smooth reef.
(3) Small coral on reef.
(4) Coral on independent reef.
(5) Large coral on reef.
The system was verified against underwater photographs collected off the Kenting area from August 4 to 6, 2004. The identification accuracy of the first stage reached 93% in the Shiniuzai area. The characteristic features selected in this research (i.e., entropy and homogeneity) proved adequate for classifying the various coral reef seafloors, and the second-stage results were mapped within a Geographic Information System.
The rocky area identified in Shiniuzai totaled 98,863 m². Due to image resolution restrictions, only 62,199 m² of this area could be classified properly. Of that, flank reef and small independent reef covered 15,954 m² (26.3%); smooth reef covered 3,133 m² (5.0%); small coral on reef covered 8,021 m² (12.8%); coral on independent reef covered 25,504 m² (40.7%); and large coral on reef covered 9,587 m² (15.3%).
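To make the texture-analysis step concrete, the following is a minimal sketch of computing the two features named above, GLCM entropy and homogeneity, over windows of a sonograph using scikit-image. The window size, quantization level, and GLCM offsets are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    """Return (entropy, homogeneity) for one image window."""
    # Quantize the 8-bit window to fewer gray levels to keep the GLCM dense.
    q = (window.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    return entropy, homogeneity

# Slide a 32x32 window over a sonograph and collect one feature vector
# per window; a stand-in random image replaces real sonar data here.
sonograph = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
feats = [glcm_features(sonograph[r:r+32, c:c+32])
         for r in range(0, 224, 32) for c in range(0, 224, 32)]
```

Feature vectors of this kind would then be fed to the Bayesian classifier and cluster analysis described above.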
Key words: side-scan sonar, coral reef, gray level co-occurrence matrix
|
72 |
Multiple Scan Trees Synthesis for Test Time/Data and Routing Length Reduction under Output Constraint
Hung, Yu-Chen, 29 July 2009 (has links)
This thesis proposes a synthesis methodology for multiple scan trees that simultaneously considers output pin limitations, scan chain routing length, test application time, and test data compression rate. Multiple scan trees, also known as a scan forest, greatly reduce test data volume and test application time in SOC testing. However, previous research on scan tree synthesis rarely considered issues such as routing length and output port limitations, and hence created scan trees with a large number of scan output ports and excessively long routing paths. The proposed algorithm effectively reduces test time, test data volume, and routing length under an output port constraint. As a result, no output compressors are required, which significantly reduces hardware overhead.
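The enabling observation behind scan trees can be illustrated with a short sketch: scan cells that are never assigned conflicting care bits across the test set can hang off the same tree branch and receive one shared decompressed bit. The greedy grouping below illustrates that compatibility test only; the thesis's actual synthesis additionally optimizes routing length and the output-port constraint, which this sketch ignores.

```python
def compatible(col_a, col_b):
    """Cells are compatible if no test cube specifies opposing values."""
    return all(a == "X" or b == "X" or a == b for a, b in zip(col_a, col_b))

def group_cells(test_cubes):
    """Greedily merge scan cells (columns of the cube matrix) into groups."""
    num_cells = len(test_cubes[0])
    columns = [[cube[i] for cube in test_cubes] for i in range(num_cells)]
    groups = []
    for i, col in enumerate(columns):
        for g in groups:
            if all(compatible(col, columns[j]) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Three test cubes over six scan cells; 'X' marks a don't-care bit.
cubes = ["1X0X1X", "X10X1X", "1XXX0X"]
print(group_cells(cubes))  # cells in one group can share a tree branch
```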
|
73 |
Mit Muscheln gegen Viren [With Mussels against Viruses]
Kemter, Sirko, 23 February 2005 (has links) (PDF)
No description available.
|
74 |
Improving encoding efficiency in test compression using sequential linear decompressors with retained free variables
Muthyala Sudhakar, Sreenivaas, 23 October 2013 (has links)
This thesis proposes an approach to improve test compression with sequential linear decompressors by retaining free variables. Sequential linear decompressors are inherently efficient and attractive for encoding test vectors with high percentages of don't-cares (i.e., test cubes). These test cubes are encoded by solving a system of linear equations. In streaming decompression, a fixed number of free variables is used to encode each test cube. The non-pivot free variables used in Gaussian elimination are wasted when the decompressor is reset before encoding the next test cube, which is conventionally done to keep the computational complexity manageable. This thesis explores a technique for retaining the non-pivot free variables from one test cube and using them to encode subsequent test cubes. The approach retains most of the non-pivot free variables with a minimal increase in the runtime for solving the equations, and no additional control information is needed. Experimental results show that the encoding efficiency, and hence the compression, can be significantly boosted.
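A minimal sketch of the encoding step described above: each care bit contributes one GF(2) equation in the free variables, Gaussian elimination yields pivot variables, and the columns that never become pivots are the non-pivot free variables this thesis proposes to retain across test cubes. The matrix and right-hand side below are illustrative, not derived from a real decompressor.

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2); return (solution, non-pivot column indices)."""
    A = A.copy() % 2
    b = b.copy() % 2
    rows, cols = A.shape
    pivots = []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue  # column c stays a free (non-pivot) variable
        A[[r, pivot]], b[[r, pivot]] = A[[pivot, r]], b[[pivot, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]  # eliminate column c from every other row
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
    if b[r:].any():  # a zero row with a nonzero rhs means the cube is unencodable
        return None, []
    x = np.zeros(cols, dtype=np.uint8)
    for row, c in enumerate(pivots):
        x[c] = b[row]
    free = [c for c in range(cols) if c not in pivots]
    return x, free

A = np.array([[1, 0, 1, 1], [0, 1, 1, 0]], dtype=np.uint8)  # 2 care bits, 4 free vars
b = np.array([1, 0], dtype=np.uint8)
x, retained = solve_gf2(A, b)
print(x, retained)  # retained columns carry over to the next cube's system
```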
|
75 |
Complementary imaging for pavement cracking measurements
Zhao, Zuyun, 03 February 2015 (has links)
Cracking is a major pavement distress that jeopardizes road serviceability and traffic safety. Automated pavement distress survey (APDS) systems have been developed using digital imaging technology to replace human surveys for more timely and accurate inspections. Most APDS systems require special lighting devices to illuminate pavements and prevent shadows of roadside objects from distorting cracks in the image. Most of these artificial lighting devices are laser based, which are either hazardous to unprotected people or require dedicated power supplies on the vehicle. This study aims to develop a new imaging system that can scan the pavement surface at highway speed and determine the severity level of pavement cracking without any artificial lighting. The new system consists of dual line-scan cameras installed side by side to scan the same pavement area as the vehicle moves. The cameras are controlled with different exposure settings so that both sunlit and shadowed areas are visible in two separate images. The paired images contain complementary details useful for reconstructing an image in which the shadows are eliminated. This paper presents (1) the design of the dual line-scan camera system for high-speed pavement imaging without artificial lighting, (2) a new calibration method for line-scan cameras that rectifies and registers the paired images without mechanical assistance for dynamic scanning, (3) a customized image-fusion algorithm that merges the multi-exposure images into one shadow-free image for crack detection, and (4) the results of field tests on a selected road over a long period.
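As an illustration of the fusion step, here is a generic two-exposure fusion sketch in the spirit of well-exposedness weighting (cf. Mertens et al.'s exposure fusion). It is not the paper's customized algorithm, and it assumes the paired line-scan images are already rectified and registered.

```python
import numpy as np

def fuse_exposures(img_a, img_b, sigma=0.2):
    """Blend two registered grayscale images by per-pixel well-exposedness."""
    a = img_a.astype(np.float64) / 255.0
    b = img_b.astype(np.float64) / 255.0
    # The weight peaks at mid-gray (0.5): shadowed pixels in one image get
    # little weight, so sunlit detail from the other image dominates.
    w_a = np.exp(-((a - 0.5) ** 2) / (2 * sigma ** 2))
    w_b = np.exp(-((b - 0.5) ** 2) / (2 * sigma ** 2))
    fused = (w_a * a + w_b * b) / (w_a + w_b + 1e-12)
    return (fused * 255).astype(np.uint8)

# Usage with a short and a long exposure of the same pavement strip;
# random stand-ins replace real captures here.
short_exp = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
long_exp = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
shadow_free = fuse_exposures(short_exp, long_exp)
```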
|
76 |
Improving encoding efficiency in test compression based on linear techniques
Muthyala Sudhakar, Sreenivaas, 10 February 2015 (has links)
Sequential linear decompressors are widely used to implement test compression. Bits stored on the tester (called free variables) are assigned values to encode the test vectors such that, when the tester data is decompressed, it reproduces the care bits in the test cube losslessly. To do this, the free-variable dependence of the scan cells is obtained by symbolic simulation, and a system of linear equations, one equation per care bit in a test cube, is solved to obtain the tester data. Existing techniques reset the decompressor after every test cube to avoid accumulating too many free variables and to keep the computation for encoding manageable. This wastes unused free variables and reduces the encoding efficiency. Moreover, existing techniques preload the decompressor with free variables before scan shifting, which increases test time, to help encode the early scan cells. This dissertation presents new approaches that improve the efficiency of the decompression process, achieving greater test compression and reducing test costs.

The contributions of this dissertation include a low-cost method to retain unused free variables while encoding a test cube and reuse them while encoding other test cubes, with a minor increase in computational complexity. In addition, a test scheduling mechanism is described for system-on-chip (SoC) architectures that retains unused free variables without any hardware overhead and with little additional control. For testing 3D-ICs, a novel daisy-chain architecture for the sequential linear decompressor is proposed that shares unused free variables across layers with a reduced number of TSVs (through-silicon vias) needed to transport test data (also called test elevators) to non-bottom layers. A scan feedforward technique is proposed that improves the free-variable dependence of the scan cells, thereby increasing the probability of encoding test cubes, especially when the early scan cells have many specified bits, and avoiding the need to preload the decompressor. Lastly, a feedforward/feedback mechanism in the scan chains for combinational linear decompressors is proposed that improves encoding flexibility and reduces tester data without pipelining the decompressor like conventional methods, thereby reducing test time.
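The symbolic simulation mentioned above can be sketched compactly: each decompressor flip-flop carries a GF(2) linear expression over the injected free variables, represented here as an integer bitmask, and the expression observed at the scan-chain input each cycle is that scan cell's equation. The LFSR length and feedback taps below are arbitrary illustrative choices, not a real decompressor design.

```python
def symbolic_simulate(num_cycles, lfsr_len=4, taps=(0, 3)):
    """Return one bitmask (equation over free variables) per scan cell."""
    state = [0] * lfsr_len           # flip-flop contents, all-zero reset
    equations = []
    for cycle in range(num_cycles):
        injected = 1 << cycle        # one fresh free variable per cycle
        feedback = 0
        for t in taps:
            feedback ^= state[t]     # XOR of tapped stages
        state = [feedback ^ injected] + state[:-1]   # shift the LFSR
        equations.append(state[-1])  # last stage feeds the scan chain
    return equations

eqs = symbolic_simulate(8)
for cell, eq in enumerate(eqs):
    vars_ = [i for i in range(8) if eq >> i & 1]
    print(f"scan cell {cell}: XOR of free variables {vars_}")
```

Note how the earliest scan cells depend on few or no free variables; this is exactly why conventional schemes preload the decompressor and why the proposed scan feedforward targets the early cells.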
|
77 |
High-quality dense stereo vision for whole body imaging and obesity assessment
Yao, Ming, Ph. D., 12 August 2015 (has links)
The prevalence of obesity has necessitated developing safe and convenient tools for timely assessing and monitoring this condition across a broad range of the population. Three-dimensional (3D) body imaging has become a new means of obesity assessment. Moreover, it generates body shape information that is meaningful for fitness, ergonomics, and personalized clothing. Our lab previously developed a prototype active stereo vision system that demonstrated the potential to fulfill this goal, but the prototype required four computer projectors to cast artificial textures on the body to facilitate stereo matching on texture-deficient surfaces (e.g., skin). This decreased the mobility of the system for collecting data from a large population. In addition, the resolution of the generated 3D images was limited by the cameras and projectors available during the project. The study reported in this dissertation highlights our continued effort to improve 3D body imaging through simplified hardware for passive stereo and advanced computation techniques.
The system utilizes high-resolution single-lens reflex (SLR) cameras, which have recently become widely available, and is configured in a two-stance design to image the front and back surfaces of a person. A total of eight cameras form four stereo units, each covering a quarter of the body surface. The stereo units are individually calibrated with a specific pattern to determine the cameras' intrinsic and extrinsic parameters for stereo matching. The global orientation and position of each stereo unit within a common world coordinate system is calculated through a 3D registration step. The stereo calibration and 3D registration procedures need not be repeated for a deployed system as long as the cameras' relative positions have not changed. This property contributes to the portability of the system and greatly eases maintenance. Image acquisition takes around two seconds for a whole-body capture. The system works in an indoor environment with moderate ambient light.
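For readers unfamiliar with the procedure, the sketch below shows pairwise stereo calibration of one unit using OpenCV, with a checkerboard standing in for the thesis's specific calibration pattern. The board geometry, square size, and file names are assumptions.

```python
import glob
import cv2
import numpy as np

board = (9, 6)  # inner corners per row/column of the assumed checkerboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * 25.0  # mm

obj_pts, left_pts, right_pts = [], [], []
# Hypothetical file naming: paired captures from the two cameras of one unit.
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, cl = cv2.findChessboardCorners(gl, board)
    ok_r, cr = cv2.findChessboardCorners(gr, board)
    if ok_l and ok_r:
        obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

# Intrinsics per camera, then the extrinsic rotation R and translation T
# between the pair -- the parameters the abstract says each unit needs.
_, m1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, m2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)
ret, m1, d1, m2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, m1, d1, m2, d2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```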
Advanced stereo computation algorithms are developed by taking advantage of high-resolution images and by tackling the ambiguity problem in stereo matching. A multi-scale, coarse-to-fine matching framework is proposed to match large-scale textures at a low resolution and refine the matched results over higher resolutions. This matching strategy reduces the complexity of the computation and avoids ambiguous matching at the native resolution. The pixel-to-pixel stereo matching algorithm follows a classic, four-step strategy which consists of matching cost computation, cost aggregation, disparity computation and disparity refinement.
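A minimal sketch of the coarse-to-fine idea: estimate disparity on a downsampled pair, then search only a narrow band around the upsampled estimate at full resolution. The SAD block matcher and the window, band, and search sizes are illustrative stand-ins for the dissertation's four-step pipeline.

```python
import numpy as np
import cv2

def block_match(left, right, max_disp, win=5, guide=None, band=2):
    """Winner-take-all SAD block matching, optionally guided by a coarse map."""
    half = win // 2
    h, w = left.shape
    disp = np.zeros((h, w), np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            if guide is None:
                lo, hi = 0, min(max_disp, x - half)
            else:  # search only a band around the coarse estimate
                c = int(guide[y, x])
                lo, hi = max(0, c - band), min(x - half, c + band)
            best, best_d = None, 0
            for d in range(lo, hi + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
                cost = np.abs(patch - cand).sum()   # SAD matching cost
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine(left, right, max_disp=64):
    """Match at half resolution, then refine at native resolution."""
    small_l, small_r = cv2.pyrDown(left), cv2.pyrDown(right)
    coarse = block_match(small_l, small_r, max_disp // 2)
    guide = cv2.resize(coarse * 2, left.shape[::-1],
                       interpolation=cv2.INTER_NEAREST)
    return block_match(left, right, max_disp, guide=guide)

# Usage: left/right are rectified grayscale uint8 images of a stereo pair.
# disp = coarse_to_fine(left, right)
```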
The system performance was evaluated on mannequins and human subjects in comparison with other measurement methods. Geometrical measurements from the reconstructed 3D body models, including body circumferences and whole-body volume, were highly repeatable and consistent with manual and other instrumental measurements (CV < 0.1%, R² > 0.99). The agreement of percent body fat (%BF) estimation on human subjects between stereo and dual-energy X-ray absorptiometry (DEXA) improved over the previous active stereo system, and the limits of agreement at 95% confidence were reduced by half. The %BF estimation agreement achieved here is among the tightest reported in comparable studies using commercial air displacement plethysmography (ADP) and DEXA. In practice, %BF estimation through a two-component model is sensitive to the body volume measurement, and the estimation of lung volume can be a source of variation; protocols for this type of measurement should be created with an awareness of this factor.
|
78 |
Signal processing concepts for the assessment of coronary plaques with intravascular ultrasound
Perrey, Christian, January 2005 (has links)
Also published as: doctoral dissertation, University of Bochum, 2005.
|
79 |
Untersuchungen zur Anwendbarkeit des anatomischen M-Mode im Vergleich zu konventionellen Verfahren der Echokardiographie beim Pferd [Investigations into the applicability of anatomical M-mode compared with conventional echocardiographic methods in horses]
Stroth, Claudia, January 2006 (has links)
Dissertation, Freie Universität Berlin, 2006. / File format: zip; files in PDF format.
|
80 |
Mixed-signal testing of integrated analog circuits and modules
Liu, Zhi-Hong, January 1999 (has links)
Thesis (Ph.D.)--Ohio University, March 1999. / Title from PDF title page.
|