311

Design of a Table-Driven Function Evaluation Generator Using Bit-Level Truncation Methods

Tseng, Yu-ling 30 August 2011 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo vision. Among the various designs of hardware-based function evaluators, piecewise polynomial approximation methods are the most popular: they approximate the function curve on each sub-interval with a low-degree polynomial whose coefficients are stored in an entry of a ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of the arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining the approximation, quantization, truncation, and rounding errors, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units. The proposed method is applied to piecewise function evaluators with both uniform and non-uniform segmentation.
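For intuition, the core mechanism can be sketched in a few lines of Python: fit one low-degree polynomial per sub-interval, quantize its coefficients to a fixed number of fractional bits (emulating the finite width of a ROM entry), and evaluate by table lookup. The segment count, bit-width, and target function below are arbitrary illustrative choices, not values from the thesis.

```python
# Minimal sketch of table-based piecewise polynomial function evaluation.
import numpy as np

FUNC = np.sin            # function to approximate (illustrative choice)
SEGMENTS = 64            # uniform segments over [0, pi/2)
FRAC_BITS = 12           # fixed-point fraction bits per stored coefficient

lo, hi = 0.0, np.pi / 2
edges = np.linspace(lo, hi, SEGMENTS + 1)

# Fit a degree-1 polynomial per segment and quantize its coefficients,
# emulating the limited bit-width of each ROM entry.
table = []
for a, b in zip(edges[:-1], edges[1:]):
    xs = np.linspace(a, b, 16)
    c1, c0 = np.polyfit(xs - a, FUNC(xs), 1)      # f(x) ~ c0 + c1*(x - a)
    quant = lambda c: round(c * 2**FRAC_BITS) / 2**FRAC_BITS
    table.append((quant(c0), quant(c1)))

def evaluate(x):
    """Look up the segment's coefficients and evaluate the polynomial."""
    i = min(int((x - lo) / (hi - lo) * SEGMENTS), SEGMENTS - 1)
    c0, c1 = table[i]
    return c0 + c1 * (x - edges[i])

xs = np.linspace(lo, hi, 10000, endpoint=False)
err = np.abs(FUNC(xs) - np.vectorize(evaluate)(xs))
print(f"max abs error: {err.max():.2e}")  # approx. + quantization error
```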
312

Improved Bit-Level Truncation with Joint Error Analysis for Table-Based Function Evaluation

Lin, Shin-hung 12 September 2012 (has links)
Function evaluation is often used in science and engineering applications. To reduce computation time, different hardware implementations have been proposed to accelerate function evaluation. Table-based piecewise polynomial approximation is one of the major methods in hardware function evaluation designs that require simple hardware components to achieve a desired precision. The piecewise polynomial method approximates the original function values on each partitioned subinterval using low-degree polynomials whose coefficients are stored in look-up tables. Errors are inevitably introduced in such hardware implementations. Conventional error analysis in piecewise polynomial methods considers four types of error sources: polynomial approximation error, coefficient quantization error, arithmetic truncation error, and final rounding error. The typical design approach is to pre-allocate a maximum allowable error budget to each individual hardware component so that the total error induced by these individual errors satisfies the required bit accuracy. In this thesis, we present a new design approach that jointly considers the error sources when designing all the hardware components, including the look-up tables and arithmetic units, so that the total area cost is reduced compared to previously published designs.
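The contrast with per-component budgeting can be sketched as a small search: rather than fixing each component's share of the error budget in advance, enumerate bit-width combinations and keep the cheapest one whose combined worst-case error still meets the accuracy target. The error bounds and area proxy below are illustrative assumptions, not the thesis's actual models.

```python
# Sketch of joint error budgeting across hardware components.
from itertools import product

TARGET_BITS = 16
BUDGET = 2.0 ** -TARGET_BITS            # one ulp of total error allowed
E_APPROX = 2.0 ** -19                   # assumed polynomial approximation error

def worst_case_error(coef_bits, trunc_bits):
    e_quant = 2.0 ** -(coef_bits + 1)   # coefficient quantization bound
    e_trunc = 2.0 ** -trunc_bits        # truncated-arithmetic error bound
    e_round = 2.0 ** -(TARGET_BITS + 1) # final rounding (half ulp)
    return E_APPROX + e_quant + e_trunc + e_round

def cost(coef_bits, trunc_bits):
    return coef_bits + 2 * trunc_bits   # toy proxy for ROM + datapath area

# Keep only combinations whose *combined* error meets the budget.
feasible = [(cost(c, t), c, t)
            for c, t in product(range(14, 25), repeat=2)
            if worst_case_error(c, t) <= BUDGET]
print(min(feasible))  # cheapest (cost, coef_bits, trunc_bits) that is accurate
```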
313

Selection And Fusion Of Multiple Stereo Algorithms For Accurate Disparity Segmentation

Bilgin, Arda 01 November 2008 (has links) (PDF)
Fusion of multiple stereo algorithms is performed in order to obtain accurate disparity segmentation. A reliable disparity map of real-time stereo images is estimated, and disparity segmentation is performed for object detection purposes. First, stereo algorithms with high performance in real-time applications are chosen from the literature and three of them are implemented. Then, the results of these algorithms are fused to gain better performance in disparity estimation. In the fusion process, if a pixel has the same disparity value in all algorithms, that disparity value is assigned to the pixel; other pixels are labelled as having unknown disparity. Unknown disparity values are then estimated by a refinement procedure that uses neighbourhood disparity information. Finally, the resulting disparity map is segmented using mean shift segmentation. The proposed method is tested on three different stereo data sets and several real stereo pairs. The experimental results indicate an improvement in stereo analysis performance from the use of the fusion process and the refinement procedure. Furthermore, disparity segmentation for detecting objects at different depth levels is successfully realized using mean shift segmentation.
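The fusion rule described here is simple enough to sketch directly: a pixel keeps a disparity only when all three algorithms agree, and disagreeing pixels are marked unknown and later filled from their neighbourhood. The refinement below (median of known 3x3 neighbours) is one plausible reading of the neighbourhood procedure, not necessarily the exact one used.

```python
# Sketch of unanimity-based disparity fusion with neighbourhood refinement.
import numpy as np

UNKNOWN = -1

def fuse(d1, d2, d3):
    """Keep unanimous disparities; mark the rest as unknown."""
    fused = np.full_like(d1, UNKNOWN)
    agree = (d1 == d2) & (d2 == d3)
    fused[agree] = d1[agree]
    return fused

def refine(fused, iters=10):
    """Fill unknown pixels with the median of known 3x3 neighbours."""
    out = fused.copy()
    for _ in range(iters):
        unknown = np.argwhere(out == UNKNOWN)
        if unknown.size == 0:
            break
        for y, x in unknown:
            patch = out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            known = patch[patch != UNKNOWN]
            if known.size:
                out[y, x] = int(np.median(known))
    return out

# Toy demo: three copies of a ground-truth map, each corrupted independently.
rng = np.random.default_rng(1)
truth = rng.integers(0, 32, (48, 64))
maps = [truth.copy() for _ in range(3)]
for m in maps:
    bad = rng.random(truth.shape) < 0.1        # corrupt 10% of each map
    m[bad] = rng.integers(0, 32, bad.sum())
fused = fuse(*maps)
print((fused == UNKNOWN).mean())               # fraction needing refinement
print((refine(fused) == truth).mean())         # accuracy after refinement
```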
314

Steps towards the object semantic hierarchy

Xu, Changhai, 1977- 17 November 2011 (has links)
An intelligent robot must be able to perceive and reason robustly about its world in terms of objects, among other foundational concepts. The robot can draw on rich data for object perception from continuous sensory input, in contrast to the usual formulation that focuses on objects in isolated still images. Additionally, the robot needs multiple object representations to deal with different tasks and/or different classes of objects.

We propose the Object Semantic Hierarchy (OSH), which consists of multiple representations with different ontologies. The OSH factors the problems of object perception so that intermediate states of knowledge about an object have natural representations, with relatively easy transitions from less structured to more structured representations. Each layer in the hierarchy builds an explanation of the sensory input stream in terms of a stochastic model consisting of a deterministic model and an unexplained "noise" term. Each layer is constructed by identifying new invariants from the previous layer. In the final model, the scene is explained in terms of constant background and object models, and low-dimensional dynamic poses of the observer and objects.

The OSH contains two types of layers: Object Layers and Model Layers. The Object Layers describe how the static background and each foreground object are individuated, and the Model Layers describe how the model for the static background or each foreground object evolves from less structured to more structured representations. Each object or background model contains the following layers: (1) 2D object in 2D space (2D2D): a set of constant 2D object views and the time-variant 2D object poses; (2) 2D object in 3D space (2D3D): a collection of constant 2D components with their individual time-variant 3D poses; and (3) 3D object in 3D space (3D3D): the same collection of constant 2D components, but with invariant relations among their 3D poses, and the time-variant 3D pose of the object as a whole.

In building 2D2D object models, a fundamental problem is to segment foreground objects out of the background environment in the pixel-level sensory input, where motion information is an important cue. Traditional approaches to moving object segmentation usually rely on motion analysis of pure image information, without exploiting the robot's motor signals. We observe, however, that the background motion (from the robot's egocentric view) is more strongly correlated with the robot's motor signals than the motion of foreground objects. Based on this observation, we propose a novel approach to segmenting moving objects by learning homography and fundamental matrices from motor signals.

In building 2D3D and 3D3D object models, estimating camera motion parameters plays a key role. We propose a novel method for camera motion estimation that takes advantage of both planar features and point features, fusing constraints from both homography and essential matrices in a single probabilistic framework. Using planar features greatly improves estimation accuracy over using point features only, and with the help of point features, the solution ambiguity from a planar feature is resolved. Compared to the two classic approaches that apply the constraint of either the homography or the essential matrix alone, the proposed method gives more accurate estimation results and avoids the drawbacks of both.
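The key observation behind the segmentation approach can be illustrated compactly: if the background's image motion follows a homography tied to the robot's ego-motion, then points whose motion deviates from that homography's prediction are foreground. In the thesis the homography is learned from motor signals; in the sketch below it is simply assumed given, and all coordinates are made-up toy values.

```python
# Sketch: label points as foreground when their motion is inconsistent
# with the background homography H (assumed known here).
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an Nx2 array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def segment_moving(pts_t, pts_t1, H, tol=1.5):
    """Foreground = points whose actual motion deviates from H's prediction."""
    pred = apply_h(H, pts_t)
    residual = np.linalg.norm(pred - pts_t1, axis=1)
    return residual > tol              # True = foreground point

# Toy example: a pure 2-pixel camera translation; the last point moves
# on its own and should be flagged as foreground.
H = np.array([[1, 0, 2], [0, 1, 0], [0, 0, 1]], float)
pts_t  = np.array([[10, 10], [30, 40], [50, 20]], float)
pts_t1 = np.array([[12, 10], [32, 40], [58, 25]], float)
print(segment_moving(pts_t, pts_t1, H))   # [False False  True]
```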
315

Segmentation of 2D-echocardiographic sequences using level-set constrained with shape and motion priors

Dietenbeck, Thomas 29 November 2012 (has links) (PDF)
The aim of this work is to propose an algorithm to segment and track the myocardium using the level-set formalism. The myocardium is first approximated by a geometric model (hyperquadrics) that can handle asymmetric shapes such as the myocardium while avoiding a learning step. This representation is then embedded into the level-set formalism as a shape prior for the joint segmentation of the endocardial and epicardial borders. The shape prior term is coupled with a local data attachment term and a thickness term that prevents the two contours from merging. The algorithm is validated on a dataset of 80 images at the end-diastolic and end-systolic phases, with manual references from 3 cardiologists. In a second step, we propose to segment whole sequences using motion information. To this end, we apply a level conservation constraint to the implicit function associated with the level-set and express this constraint as an energy term in a variational framework. This energy is added to the previously described algorithm in order to constrain the temporal evolution of the contour. Finally, the algorithm is validated on 20 echocardiographic sequences with manual references from 2 experts (corresponding to approximately 1200 images).
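The general variational pattern, adding energy terms to a single weighted objective minimized by evolving the level-set function, can be sketched as follows. The terms below (a region data term, a penalty pulling the level-set toward a prior, and curvature smoothing) are simplistic stand-ins for the thesis's hyperquadric prior, local data term, thickness term, and level conservation constraint.

```python
# Minimal sketch of level-set evolution under a weighted sum of energy terms.
import numpy as np

def curvature(phi):
    """div(grad phi / |grad phi|), discretized crudely."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve(phi, image, phi_prior, w_data=1.0, w_shape=0.5, w_smooth=0.2,
           steps=200, dt=0.1):
    for _ in range(steps):
        inside = phi < 0                     # convention: phi<0 inside
        if not inside.any() or inside.all():
            break                            # degenerate contour
        c_in, c_out = image[inside].mean(), image[~inside].mean()
        data = (image - c_out)**2 - (image - c_in)**2   # region data term
        shape = phi - phi_prior                          # pulls phi to prior
        phi = phi - dt * (w_data * data + w_shape * shape
                          - w_smooth * curvature(phi))
    return phi

# Toy image: bright disc on dark background; prior = a slightly offset disc.
yy, xx = np.mgrid[:64, :64]
image = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)
phi0 = np.sqrt((xx - 28.0)**2 + (yy - 30.0)**2) - 12.0   # initial level-set
prior = np.sqrt((xx - 32.0)**2 + (yy - 32.0)**2) - 15.0
seg = evolve(phi0, image, prior) < 0
print(seg.sum(), "pixels inside the final contour")
```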
316

Fast Segmentation of Vessels in MR Liver Images using Patient Specific Models

Zaheer, Sameer 11 December 2013 (has links)
Image-guided therapies have the potential to improve the accuracy of treating liver cancer. In order to register intraoperative with preoperative liver images, joint segmentation and registration methods require fast segmentation of matching vessel centerlines. The algorithm presented in this thesis solves this problem by tracking the centerlines using ridge and cross-section information, and it uses knowledge of the patient's vasculature in the preoperative image to ensure correspondence. The algorithm was tested on three MR images of healthy volunteers and one CT image of a patient with liver cancer. Results show that, in the context of joint segmentation and registration, if the registration error is less than 2.0 mm, the average segmentation error is 0.73-1.68 mm, with 88-100% of the vessels having an error of less than a voxel length. For a registration error of less than 4.6 mm, the average segmentation error is 1.17-2.11 mm, with 79-98% of the vessels having an error of less than a voxel length.
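Ridge-based centerline tracking of the kind mentioned here can be sketched generically: from a seed point, repeatedly step along the Hessian eigenvector with the smallest curvature magnitude, i.e. along the vessel axis. This is a generic tracker in the spirit of the approach, not the thesis's exact algorithm (which also uses cross-section information and the patient-specific preoperative vessel map).

```python
# Sketch of ridge tracking along Hessian eigenvectors on a 2D image.
import numpy as np

def hessian_2d(img, y, x):
    """Finite-difference Hessian of the image at an interior pixel."""
    dyy = img[y+1, x] - 2*img[y, x] + img[y-1, x]
    dxx = img[y, x+1] - 2*img[y, x] + img[y, x-1]
    dxy = (img[y+1, x+1] - img[y+1, x-1] - img[y-1, x+1] + img[y-1, x-1]) / 4.0
    return np.array([[dyy, dxy], [dxy, dxx]])

def track(img, seed, init_dir=(0.0, 1.0), n_steps=100, step=1.0):
    pos = np.asarray(seed, float)
    direction = np.asarray(init_dir, float)   # assumed initial heading
    path = [pos.copy()]
    for _ in range(n_steps):
        y, x = np.round(pos).astype(int)
        if not (1 <= y < img.shape[0] - 1 and 1 <= x < img.shape[1] - 1):
            break
        w, v = np.linalg.eigh(hessian_2d(img, y, x))
        d = v[:, np.argmin(np.abs(w))]        # eigenvector along the vessel
        if d @ direction < 0:
            d = -d                            # keep a consistent heading
        direction = d
        pos = pos + step * d
        path.append(pos.copy())
    return np.array(path)

# Toy image: a bright horizontal tube with a Gaussian cross-section.
yy, xx = np.mgrid[:64, :128].astype(float)
img = np.exp(-((yy - 32.0) ** 2) / 8.0)
print(track(img, seed=(32, 10))[-1])          # ends far along the centerline
```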
318

Graph-based Methods for Interactive Image Segmentation

Malmberg, Filip January 2011 (has links)
The subject of digital image analysis deals with extracting relevant information from image data, stored in digital form in a computer. A fundamental problem in image analysis is image segmentation, i.e., the identification and separation of relevant objects and structures in an image. Accurate segmentation of objects of interest is often required before further processing and analysis can be performed. Despite years of active research, fully automatic segmentation of arbitrary images remains an unsolved problem. Interactive, or semi-automatic, segmentation methods use human expert knowledge as additional input, thereby making the segmentation problem more tractable. The goal of interactive segmentation methods is to minimize the required user interaction time, while maintaining tight user control to guarantee the correctness of the results.

Methods for interactive segmentation typically operate under one of two paradigms for user guidance: (1) specification of pieces of the boundary of the desired object(s), or (2) specification of correct segmentation labels for a small subset of the image elements. These types of user input are referred to as boundary constraints and regional constraints, respectively.

This thesis concerns the development of methods for interactive segmentation, using a graph-theoretic approach. We view an image as an edge-weighted graph, whose vertex set is the set of image elements, and whose edges are given by an adjacency relation among the image elements. Due to its discrete nature and mathematical simplicity, this graph-based image representation lends itself well to the development of efficient, and provably correct, methods.

The contributions of this thesis may be summarized as follows:
• Existing graph-based methods for interactive segmentation are modified to improve their performance on images with noisy or missing data, while maintaining a low computational cost.
• Fuzzy techniques are utilized to obtain segmentations from which feature measurements can be made with increased precision.
• A new paradigm for user guidance, which unifies and generalizes regional and boundary constraints, is proposed.
The practical utility of the proposed methods is illustrated with examples from the medical field.
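The graph formulation under regional constraints admits a compact illustration: treat the image as a 4-connected edge-weighted graph and give each unlabelled pixel the label of the seed reachable along the path of smallest accumulated intensity difference (a shortest-path forest, one common member of this family of methods; not necessarily the specific algorithms developed in the thesis).

```python
# Sketch of seeded graph-based segmentation via a shortest-path forest.
import heapq
import numpy as np

def seeded_segmentation(img, seeds):
    """seeds: dict {(y, x): label}. Returns a label map."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    labels = np.zeros((h, w), int)
    heap = []
    for (y, x), lab in seeds.items():
        dist[y, x] = 0.0
        labels[y, x] = lab
        heapq.heappush(heap, (0.0, y, x, lab))
    while heap:
        d, y, x, lab = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                            # stale queue entry
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + abs(img[ny, nx] - img[y, x])   # edge weight
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    labels[ny, nx] = lab
                    heapq.heappush(heap, (nd, ny, nx, lab))
    return labels

# Toy image: two flat regions; one seed placed in each.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
out = seeded_segmentation(img, {(5, 5): 1, (5, 25): 2})
print((out == 1).sum(), (out == 2).sum())       # half the pixels each
```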
319

Spending behaviour of visitors to the Klein Karoo National Arts Festival / Martinette Kruger

Kruger, Martinette January 2009 (has links)
The Klein Karoo National Arts Festival (KKNK) is one of the most popular arts festivals in South Africa, but ticket sales have declined alarmingly since 2005, placing the Festival in the decline phase of its product life cycle. This has a negative impact on the Festival's economic impact and future sustainability. It is therefore vital to increase ticket sales so that the Festival can maintain a steady growth rate. Market segmentation can assist the Festival's marketers/organisers in addressing this problem by identifying the high-spending segment at the Festival, since these visitors stay longer and are keener to buy tickets for the Festival's shows/productions. Market segmentation is the process of dividing the festival market into smaller, more clearly defined groups that share similar needs, wants, and characteristics. The more detailed the knowledge of the needs and motives of potential visitors, the closer the Festival can get to a customised festival programme, creating greater satisfaction, long-term relationships, repeat visits, and an increase in ticket purchases for the shows/productions. The main purpose of this study was therefore to determine the spending behaviour of visitors to the KKNK by establishing the determinants that influence visitors' expenditure and by applying expenditure-based segmentation to identify the high-spending segment at the Festival.

To achieve this goal, the study is divided into two articles. Research for both articles was undertaken at the Festival, using data obtained from 2005 to 2008. Questionnaires were interview-administered and distributed randomly during the course of the Festival. In total, 1940 questionnaires have been completed in the visitor survey since 2005.

Article 1 is titled "Socio-demographic and behavioural determinants of visitor spending at the Klein Karoo National Arts Festival." Its main purpose was to identify the various socio-demographic and behavioural determinants that influence visitor spending at the KKNK, in order to determine which visitors spend the most at the Festival and which determinants are most significant in explaining their expenditure levels. A regression analysis was used to this end. Results indicated that occupation, distance travelled, length of stay, reason for attending the Festival, and preferred type of shows/productions were significant determinants of the amount of money visitors spent at the Festival. These results generated strategic marketing insights for increasing visitor spending, especially on tickets for shows/productions.

Article 2 is titled "Expenditure-based segmentation of visitors at the Klein Karoo National Arts Festival." Its main purpose was to apply expenditure-based segmentation to visitors at the KKNK in order to identify the high-spending segment at the Festival. An analysis of variance (ANOVA) was used to determine whether there were significant differences between the expenditure groups. The Festival's market was divided into high, medium, and low expenditure groups. Results revealed that the high spenders at the Festival were distinguishable from the low spenders by their longer length of stay, older age, higher income, main reason for attending the Festival, and preferred type of shows/productions. These results were used to compile a complete profile of the high spenders and to show how the Festival's appeal can be maximised in order to attract more of them.

This research therefore revealed that certain socio-demographic determinants influence visitors' spending behaviour at the Klein Karoo National Arts Festival, and that there are two distinct expenditure groups at the Festival, namely a high and a low expenditure group. Knowledge of the determinants that influence visitor spending can be used in combination with the profile of the high spenders to maximise the Festival's appeal and attract more high spenders who buy tickets for the Festival's shows/productions. This will lead to an increase in ticket sales, a greater economic impact and, ultimately, the continued sustainability of the Klein Karoo National Arts Festival. / Thesis (M.A. (Tourism))--North-West University, Potchefstroom Campus, 2009.
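As a hedged illustration of the second article's method, the sketch below splits simulated visitors into three expenditure groups and applies a one-way ANOVA to test whether a characteristic (here, length of stay) differs across the groups. All numbers are randomly generated placeholders, not survey data; only the sample size (1940 questionnaires) echoes the abstract.

```python
# Sketch of expenditure-based segmentation with a one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spend = rng.lognormal(mean=6.0, sigma=0.8, size=1940)     # total spend (toy)
stay = 1 + 0.002 * spend + rng.normal(0, 1.5, size=1940)  # nights stayed (toy)

# Split into three expenditure groups at the 33rd/66th percentiles.
q1, q2 = np.percentile(spend, [33, 66])
groups = [stay[spend <= q1],
          stay[(spend > q1) & (spend <= q2)],
          stay[spend > q2]]

f, p = stats.f_oneway(*groups)    # does length of stay differ across groups?
print(f"F = {f:.1f}, p = {p:.2g}")
print("mean nights per group:", [round(g.mean(), 2) for g in groups])
```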
320

Market segmentation of visitors to Aardklop National Arts Festival : a comparison of two methods / Karin Botha

Botha, Karin January 2009 (has links)
Thesis (M.A. (Tourism))--North-West University, Potchefstroom Campus, 2009.
