51

Segment Congruence Analysis: An Information Theoretic Approach

Hosseini-Chaleshtari, Jamshid 01 January 1987 (has links)
When there are several possible segmentation variables, marketers must investigate the ramifications of their potential interactions. These include their mutual association, the identification of the best (the distinguished) segmentation variable and its predictability by a set of descriptor variables, and the structure of the multivariate system(s) obtained from the segmentation and descriptor variables. This procedure has been defined as segment congruence analysis (SCA). This study utilizes the information theoretic and the log-linear/logit approaches to address a variety of research questions in segment congruence analysis. It is shown that the information theoretic approach expands the scope of SCA and offers some advantages over traditional methods. Data obtained from a survey conducted by the Bonneville Power Administration (BPA) and Northwest utilities is used to demonstrate the information theoretic and the log-linear/logit approaches and compare these two methods. The survey was designed to obtain information on energy consumption habits, attitudes toward selected energy issues, and the conservation measures utilized by the residents in the Pacific Northwest. The analyses are performed in two distinct phases. Phase I includes assessment of mutual association among segmentation variables and four methods (based on different information theoretic functions) for identifying candidates for the distinguished variable. Phase II addresses the selection and analysis of the distinguished variable. This variable is selected either a priori or by assessment of its predictability from (segmentation or exogenous) descriptor variables. The relations between the distinguished variable and the descriptor variables are further analyzed by examining the predictability issue in greater detail and by evaluating structural models of the multivariate systems. The methodological conclusions of this study are that the information theoretic and log-linear methods have deep similarities. The analyses produced intuitively plausible results. In Phase I, energy related awareness, behavior, perceptions, attitudes, and electricity consumption were identified as candidate segmentation variables. In Phase II, using exogenous descriptor variables, electricity consumption was selected as the distinguished variable. The analysis of this variable indicated that the demographic factors, type of dwelling, and geoclimatic environment are among the most important determinants of electricity consumption.
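A toy illustration of the mutual-association step in SCA (not taken from the thesis) is sketched below in Python: pairwise mutual information is computed between fabricated categorical segmentation variables, and the variable sharing the most information with the others is flagged as a candidate distinguished variable. All variable names and data are invented.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical categorical segmentation variables (labels are illustrative only).
consumption = rng.integers(0, 3, n)                    # low / medium / high electricity use
attitude = (consumption + rng.integers(0, 2, n)) % 3   # loosely related attitude segment
dwelling = rng.integers(0, 2, n)                       # house / apartment

segments = {"consumption": consumption, "attitude": attitude, "dwelling": dwelling}

# Pairwise mutual information: a high value flags strongly associated segmentations.
for a in segments:
    for b in segments:
        if a < b:
            mi = mutual_info_score(segments[a], segments[b])
            print(f"I({a}; {b}) = {mi:.3f} nats")

# Candidate "distinguished" variable: the one sharing the most information with the rest.
totals = {a: sum(mutual_info_score(segments[a], segments[b])
                 for b in segments if b != a) for a in segments}
print("candidate distinguished variable:", max(totals, key=totals.get))
```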
52

Parallelized Ray Casting Volume Rendering and 3D Segmentation with Combinatorial Map

Huang, Wenhan 27 April 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The rapid development of digital technology has enabled real-time volume rendering of scientific data, in particular large microscopy data sets. In general, volume rendering techniques project 3D discrete datasets onto 2D image planes; the generated views are transparent and use designated colors that are not necessarily "real" colors. Volume rendering first requires a processing method that assigns different colors and transparency coefficients to different regions; then, based on the viewer position and the dataset location, the method determines the final image. Current popular techniques include ray casting, splatting, shear warp, and texture-based volume rendering. Ray casting is of particular interest because it can display the interior of a dataset and render complex objects such as skeleton and muscle. However, ray casting requires large amounts of memory and suffers from long processing times. One way to address this is to parallelize its implementation on programmable graphics processing hardware. This thesis proposes a GPU-based ray casting algorithm that can render a 3D volume in real-time applications. In addition to implementing volume rendering techniques on programmable graphics processing hardware to decrease execution times, 3D image segmentation techniques can also be used to increase execution speeds. In 3D image segmentation, the dataset is partitioned into smaller regions based on specific properties. By using a 3D segmentation method in volume rendering applications, users can extract individual objects from the 3D dataset for rendering and further analysis. This thesis also proposes a 3D segmentation algorithm based on the combinatorial map that can be parallelized on graphics processing units.
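For readers unfamiliar with ray casting, the following is a minimal CPU sketch of the front-to-back compositing it relies on, not the GPU implementation proposed in the thesis; the volume, transfer function, and thresholds are all synthetic stand-ins.

```python
import numpy as np

# Synthetic 64^3 volume: a bright sphere in the center.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
volume = np.exp(-(((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2) / (2 * 12.0**2)))

def transfer(sample):
    """Toy transfer function: map a scalar value to (color, opacity)."""
    color = np.array([sample, sample * 0.5, 1.0 - sample])  # arbitrary color ramp
    alpha = sample * 0.05                                    # low opacity per step
    return color, alpha

# Orthographic ray casting along the z axis with front-to-back compositing.
image = np.zeros((n, n, 3))
for i in range(n):          # pixel row
    for j in range(n):      # pixel column
        acc_color = np.zeros(3)
        acc_alpha = 0.0
        for k in range(n):  # march the ray through the volume
            c, a = transfer(volume[k, i, j])
            acc_color += (1.0 - acc_alpha) * a * c
            acc_alpha += (1.0 - acc_alpha) * a
            if acc_alpha > 0.95:   # early ray termination
                break
        image[i, j] = acc_color

print("rendered image shape:", image.shape, "max intensity:", image.max())
```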
53

An Investigation of the Effect of Segmentation on Immediate and Delayed Knowledge Transfer in a Multimedia Learning Environment

Mariano, Gina 10 April 2008 (has links)
The purpose of this study was to determine the effects of segmentation on immediate and delayed recall and transfer in a multimedia learning environment. The independent variables of segmentation versus non-segmentation and immediate versus delayed assessment were manipulated to assess the effects of segmentation on the participants' ability to recall and transfer information from the multimedia tutorial. Data were analyzed using a 2x2 factorial design. Segmentation of the multimedia tutorials did not result in significant differences in recall or transfer. However, the time period between viewing a tutorial and taking the recall and transfer assessments did significantly affect participants' ability to recall and transfer information. / Ph. D.
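As an illustration of the 2x2 factorial analysis described above, the sketch below runs a two-way ANOVA on fabricated recall scores using statsmodels; the factor names mirror the design, but the data and effects are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for segmented in (0, 1):          # segmented vs. non-segmented tutorial
    for delayed in (0, 1):        # immediate vs. delayed assessment
        # Fabricated recall scores: a delay penalty, no segmentation effect.
        scores = rng.normal(20 - 3 * delayed, 4, size=30)
        for s in scores:
            rows.append({"segmented": segmented, "delayed": delayed, "recall": s})

df = pd.DataFrame(rows)
model = ols("recall ~ C(segmented) * C(delayed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```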
54

Extraction of 3D Object Representations from a Single Range Image

Taha, Hussein Saad 28 January 2000 (has links)
The main goal of this research is the automatic construction of a computer model of 3D solid objects from a single range image. This research has many real world applications, including robotic environments and the inspection of industry parts. The most common methods for 3D-object extraction are based on stereo reconstruction and structured light analysis. The first approach encounters the difficulty of finding a correspondence of points between two images for the same scene, which involves intensive computations. The latter, on the other hand, has limitations and difficulties in object extraction, namely, inferring information about 3D objects from a 2D image. In addition, research in 3D-object extraction up to this point has lacked a thorough treatment of overlapped (occluded) objects. This research has resulted in a system that can extract multiple polyhedral objects from a single range image. The system consists of several parts: edge detection, segmentation, initial vertex extraction, occlusion detection, grouping faces into objects, and object representation. The problem is difficult especially when occluded objects are present. The system that has been developed separates occluded objects by combining evidence of several types. In the edge detection algorithm, noise reduction for range images is treated first by implementing a statistically robust technique based on the least median of squares. Three approaches to edge detection are presented. One that detects change in gradient orientation is a novel approach, which is implemented in the algorithm due to its superior performance, and the other two are extensions of work by other researchers. In general, the performance of these edge detection methods is considerably better than many others in the domain of range image segmentation. A hybrid approach (region-edge based) is introduced to achieve a robust solution for a single range image segmentation. The segmentation process depends on collaborating edge and region techniques where they give complementary information about the scene. Region boundaries are improved using iterative refinement. A novel approach for initial vertex extraction is presented to find the vertices of the polyhedral objects. The 3D vertex locations for the objects are obtained through an analysis of two-dimensional (2D) region shape and corner proximity, and the vertices of the polyhedra are extracted from the individual faces. There are two major approaches for dealing with occlusion. The first is an automatic identification of layers of 3D solid objects within a single range image. In this novel approach, a histogram of the distance values from a given range image is clustered into separate modes. Ideally, each mode of the histogram will be associated with one or more surfaces having approximately the same distance from the sensor. This approach works well when the objects are lying at different distances from the sensor, but when two or more objects are overlapped and lying at the same distance from the sensor, this approach has difficulty in detecting occlusion. The second approach for occlusion detection is considered the major contribution of this work. It detects occlusion of 3D solid objects from a single range image using multiple sources of evidence. This technique is based on detecting occlusion that may be present between each pair of adjacent faces associated with the estimated vertices of the 3D objects. 
This approach is not based on vertex and line labeling as other approaches are; it uses the topology and geometric information of the 3D objects. After occlusion detection, faces are grouped into objects according to their adjacency relations and the presence or absence of occlusion between them. The initial vertex estimates are improved significantly through a global optimization procedure. Finally, models of the 3D objects are represented using the boundary representation technique, which makes use of the region adjacency graph (RAG) paradigm. The experimental results of this research were obtained using real range images from the CESAR lab at Oak Ridge National Laboratory, acquired with a Perceptron laser range finder. These images contain single and multiple polyhedral objects, have a size of 512x512 pixels, and are quantized at 12 bits per pixel. A quantitative evaluation of the construction algorithms is given. Part of this evaluation compares the results of the proposed segmentation technique with the ground truth database for these range images; the other part compares the implemented algorithms with the results of other researchers, and the system developed here exhibits better performance in terms of the accuracy of the region boundaries in the segmented images. A subjective comparison of the new edge detection methods with some traditional approaches is also provided for the set of range images, along with an evaluation of the new approach to occlusion detection. A recommendation for future work is to extend this system to images containing objects with curved surfaces; with some modifications, the multiple-evidence approach to occlusion detection could handle curved objects. In addition, the model could be updated to represent the hidden surfaces of the 3D objects, either by using multiple views of the same scene or through assumptions such as symmetry to infer the shape of the hidden portions of the objects. / Ph. D.
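One of the occlusion-handling ideas above, clustering the histogram of range values into depth layers, can be approximated with off-the-shelf tools as in the sketch below; it uses k-means on a synthetic range image rather than the histogram-mode analysis and Perceptron data of the thesis, so it is only an analogy.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Synthetic range image: background plane plus two boxes at different distances.
depth = np.full((128, 128), 300.0)          # background at 300 cm
depth[30:70, 20:60] = 180.0                 # near box
depth[60:110, 70:120] = 240.0               # mid-distance box
depth += rng.normal(0, 2.0, depth.shape)    # sensor noise

# Cluster depth values into k layers; each layer ideally holds surfaces lying
# at roughly the same distance from the sensor.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    depth.reshape(-1, 1)).reshape(depth.shape)

for layer in range(k):
    print(f"layer {layer}: mean depth {depth[labels == layer].mean():.1f} cm, "
          f"{(labels == layer).sum()} pixels")
```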
55

Expressive Forms of Topic Modeling to Support Digital Humanities

Gad, Samah Hossam Aldin 15 October 2014 (has links)
Unstructured textual data is rapidly growing, and practitioners from diverse disciplines are experiencing a need to structure this massive amount of data. Topic modeling is one of the most widely used techniques for analyzing and understanding the latent structure of large text collections. Probabilistic graphical models are the main building block behind topic modeling and are used to express assumptions about the latent structure of complex data. This dissertation addresses four problems related to drawing structure from high-dimensional data and improving the text mining process. Studying the ebb and flow of ideas during critical events, e.g., an epidemic, is very important to understanding the reporting or coverage around the event and the event's impact on society. This can be accomplished by capturing the dynamic evolution of topics underlying a text corpus. We propose an approach to this problem that identifies segment boundaries marking significant shifts of topic coverage. To identify these boundaries, we embed a temporal segmentation algorithm around a topic modeling algorithm. A key advantage of our approach is that it integrates with existing topic modeling algorithms in a transparent manner; thus, more sophisticated algorithms can be readily plugged in as research in topic modeling evolves. We apply this algorithm to data from the iNeighbors system, studying six neighborhoods (three economically advantaged and three economically disadvantaged) to evaluate differences in conversations for statistical significance. Our findings suggest that social technologies may afford opportunities for democratic engagement in contexts that are otherwise less likely to support deliberation and participatory democracy. We also examine the progression in coverage of historical newspapers about the 1918 influenza epidemic by applying our algorithm to the Washington Times archives. The algorithm successfully identifies important qualitative features of news coverage of the pandemic. Visually convincing presentation of the results of data mining algorithms and models is crucial to analyzing them and drawing conclusions. We develop ThemeDelta, a visual analytics system for extracting and visualizing temporal trends, clustering, and reorganization in time-indexed textual datasets. ThemeDelta is supported by a dynamic temporal segmentation algorithm that integrates with topic modeling algorithms to identify change points where significant shifts in topics occur. This algorithm detects not only the clustering and associations of keywords in a time period, but also their convergence into topics (groups of keywords) that may later diverge into new groups. The visual representation of ThemeDelta uses sinuous, variable-width lines to show this evolution on a timeline, using color for categories and line width for keyword strength. We demonstrate how interaction with ThemeDelta helps capture the rise and fall of topics by analyzing archives of historical newspapers, U.S. presidential campaign speeches, and social messages collected through iNeighbors. ThemeDelta is evaluated in a qualitative expert user study involving three researchers from rhetoric and history using the historical newspapers corpus. Time and location are key parameters in any event; neglecting them while discovering topics from a collection of documents means missing valuable information.
We propose a dynamic spatial topic model (DSTM), a true spatio-temporal model that disaggregates a corpus's coverage into location-based reporting and captures how such coverage varies over time. DSTM naturally generalizes traditional spatial and temporal topic models, so that many existing formalisms can be viewed as special cases of DSTM. We demonstrate a successful application of DSTM to multiple newspapers from the Chronicling America repository, showing how our approach uncovers key differences in the coverage of the flu as it spread through the nation, and we provide possible explanations for these differences. Major events that can change the flow of people's lives are important to predict, especially when powerful models and sufficient data are available at our fingertips. Embedding the DSTM in a predictive setting is the last part of this dissertation. To predict events and their locations across time, we present a predictive dynamic spatial topic model that can predict future topics and their locations from unseen documents. We show the applicability of the proposed approach by applying it to streaming tweets from Latin America; the prediction approach successfully identifies major events and their locations. / Ph. D.
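The "temporal segmentation wrapped around topic modeling" idea can be sketched as follows: fit a topic model, compute each time slice's topic mixture, and flag a segment boundary wherever the mixture shifts sharply (measured here with Jensen-Shannon distance). This is an illustrative stand-in, not the dissertation's algorithm, and the corpus, threshold, and model settings are fabricated.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

# Fabricated time-sliced corpus: coverage shifts from weather to influenza.
slices = [
    ["sunny weather town fair parade", "weather report rain river market"],
    ["weather market prices grain river", "town parade fair music weather"],
    ["influenza cases hospital quarantine", "influenza deaths city hospital closed"],
    ["hospital influenza nurses quarantine schools closed", "influenza epidemic city deaths"],
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sum(slices, []))          # one vocabulary for all slices
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

def slice_topic_mix(docs):
    """Average topic distribution of the documents in one time slice."""
    return lda.transform(vectorizer.transform(docs)).mean(axis=0)

mixes = [slice_topic_mix(s) for s in slices]
for t in range(1, len(mixes)):
    shift = jensenshannon(mixes[t - 1], mixes[t])
    boundary = "  <-- segment boundary" if shift > 0.3 else ""
    print(f"slice {t - 1} -> {t}: JS shift = {shift:.2f}{boundary}")
```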
56

Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique / Moving objects segmentation by RGB-D fusion and color constancy

Murgia, Julian 24 May 2016 (has links)
This PhD thesis falls within the scope of video surveillance, and more precisely focuses on the robust detection of moving objects in image sequences. In many applications, good detection of moving objects is an indispensable prerequisite to any processing applied to these objects, such as tracking people or cars, counting passengers on public transport, detecting dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or controlling autonomous vehicles. The reliability of computer-vision-based systems requires robustness against difficult conditions, often caused by lighting conditions (day/night, cast shadows), weather conditions (rain, wind, snow) and the topology of the observed scene (occlusions). The work detailed in this thesis aims to reduce the impact of illumination conditions by improving the quality of moving-object detection in indoor or outdoor environments and at any time of day. To this end, we propose three strategies that can be combined: i) using colorimetric invariants and/or color-space representations with invariant properties; ii) using a stereoscopic camera and an active Microsoft Kinect camera in addition to the color camera in order to partially reconstruct the 3D environment of the scene, providing an additional dimension, namely depth information, to the moving-object detection algorithm for characterizing pixels; iii) a new fusion algorithm based on fuzzy logic that combines color and depth information while allowing a margin of uncertainty about whether a pixel belongs to the background or to a moving object.
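The third strategy, fuzzy fusion of color and depth evidence, might look roughly like the sketch below, where per-pixel foreground scores from two background models are combined through simple membership functions; the membership shapes, thresholds, and data are invented rather than taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 120, 160

# Per-pixel foreground evidence in [0, 1] from two independent background models.
color_evidence = rng.random((h, w))    # stand-in for a color background-subtraction score
depth_evidence = rng.random((h, w))    # stand-in for a depth background-subtraction score

def fuzzy_membership(evidence, low=0.3, high=0.7):
    """Trapezoidal membership: 0 below `low`, 1 above `high`, linear in between."""
    return np.clip((evidence - low) / (high - low), 0.0, 1.0)

mu_color = fuzzy_membership(color_evidence)
mu_depth = fuzzy_membership(depth_evidence)

# Fuzzy fusion: a pixel is foreground if either cue supports it strongly,
# with the product term discounting double-counted agreement (probabilistic OR).
mu_foreground = mu_color + mu_depth - mu_color * mu_depth
foreground_mask = mu_foreground > 0.5

print("foreground pixels:", int(foreground_mask.sum()), "of", h * w)
```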
57

Expenditure-based segmentation of anglers: and how the expenditure can be increased.

Oskarsson, Sara January 2014 (has links)
No description available.
58

Reeb Graph Modeling of 3-D Animated Meshes and its Applications to Shape Recognition and Dynamic Compression / Modélisation des maillages animés 3D par Reeb Graph et son application à l'indexation et la compression

Hachani, Meha 19 December 2015 (has links)
In the last decade, technological progress in telecommunications, hardware design and multimedia has given access to an ever finer three-dimensional (3D) modeling of the world. While most research has focused on static 3D objects, it is now necessary to turn to the 3D time domain (3D+t): dynamic 3D meshes are becoming a medium of increasing importance. This 3D content is subject to various processing operations such as indexing, segmentation and compression. However, a triangular surface mesh is an extrinsic shape representation; it suffers from significant variability under different sampling strategies and under shape-preserving surface transformations such as affine or isometric transformations. It therefore needs an intrinsic structural descriptor before being handled by any of the aforementioned processing operations. The research topic of this thesis is topological modeling based on Reeb graphs, focusing on 3D shapes represented by triangulated surfaces. Our objective is to propose a new Reeb graph construction approach that exploits temporal information. The main contribution is the definition of a new continuous scalar function based on heat diffusion properties, computed as the diffusion distance from a surface point to the points located at the extremities of the 3D model, which correspond to its local extrema. Restricting the heat kernel to the temporal domain makes the proposed function intrinsic and stable under transformations, and, thanks to the neighborhood information carried by the heat kernel, the resulting Reeb graph can serve as a local shape descriptor for non-rigid shape retrieval and can be introduced into a segmentation-based dynamic compression scheme. We apply the Reeb graph concept to two widely used applications: pattern recognition and compression. For recognition, the constructed Reeb graph is segmented into Reeb charts, defined as charts of controlled topology, each of which is mapped to the canonical planar domain; this unfolding introduces area and angle distortions, from which a pair of geometric signatures is extracted for each chart. Matching between two Reeb charts is performed using the distances between their signatures, and the global similarity between shapes is estimated from the minimum distance between pairs of Reeb charts. Skeletonization and segmentation are closely related, and mesh segmentation can be formulated as graph clustering. We therefore propose an implicit segmentation method that partitions mesh sequences with constant connectivity into rigid regions according to the values of the proposed continuous function, while estimating the motion of each region over time; a refinement step based on curvature and boundary information improves the distribution of vertices along region boundaries. 
Finally, we present a segmentation-based compression scheme for animated mesh sequences with constant connectivity. The method exploits the temporal coherence of the geometry by using heat diffusion properties during segmentation of the first frame. The motion of each resulting region is described by a 3D affine transform and its associated animation weights, computed at the first frame to match the subsequent ones. The partition vector, which assigns each vertex to its region, is compressed with an arithmetic coder; the affine transforms and animation weights are uniformly quantized and arithmetically coded, and the first frame is compressed with a static mesh coder. To improve the performance of the coding scheme, the quantization of temporal prediction errors is optimized by a bit allocation procedure applied to the three sub-bands corresponding to the x, y and z coordinates: the quantization step of each sub-band is chosen to reach the target bit rate while minimizing the reconstruction error.
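A discrete Reeb-style construction can be sketched as follows: bin a scalar function defined on the mesh vertices, take the connected components within each bin as graph nodes, and connect components joined by a mesh edge. The toy example below uses plain vertex height on a synthetic cylinder graph instead of the heat-diffusion-based function defined in the thesis, so it only illustrates the mechanism.

```python
import networkx as nx
import numpy as np

# Toy "mesh": a cylinder-like vertex graph (rings of 8 vertices stacked in 5 levels),
# standing in for a triangulated surface; the scalar function is vertex height.
rings, ring_size = 5, 8
mesh = nx.Graph()
height = {}
for r in range(rings):
    for i in range(ring_size):
        v = (r, i)
        height[v] = float(r)
        mesh.add_edge(v, (r, (i + 1) % ring_size))   # within-ring edge
        if r > 0:
            mesh.add_edge(v, (r - 1, i))             # edge to the ring below

# Bin vertices by the scalar function, take connected components inside each bin as
# Reeb-graph nodes, and link components in adjacent bins joined by a mesh edge.
bin_ids = np.digitize([height[v] for v in mesh], np.arange(0.5, rings, 1.0))
vertex_bin = dict(zip(mesh.nodes, bin_ids))

reeb = nx.Graph()
node_of = {}
for b in sorted(set(vertex_bin.values())):
    sub = mesh.subgraph([v for v in mesh if vertex_bin[v] == b])
    for k, comp in enumerate(nx.connected_components(sub)):
        rn = (b, k)
        reeb.add_node(rn)
        node_of.update({v: rn for v in comp})

for u, v in mesh.edges:
    if node_of[u] != node_of[v]:
        reeb.add_edge(node_of[u], node_of[v])

print("Reeb-style graph:", reeb.number_of_nodes(), "nodes,", reeb.number_of_edges(), "edges")
```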
59

Stroke Lesion Segmentation for tDCS

Naeslund, Elin January 2011 (has links)
Transcranial direct current stimulation (tDCS), together with speech therapy, is known to relieve the symptoms of aphasia. Knowledge about the amount of current to apply and the stimulation location is needed to ensure the best possible result. Segmented tissues are used in a finite element method (FEM) simulation, and the resulting mesh provides the information needed to guide the stimulation. Thus, correct segmentation is crucial. Manual segmentation is known to produce the most accurate result, but it is not useful in the clinical setting since it currently takes weeks to manually segment one image volume. Automatic segmentation is faster, although both acute and necrotic stroke lesions are known to cause problems. Three automatic segmentation routines are evaluated using default settings and two sets of tissue probability maps (TPMs), on two sets of stroke patients: one set with acute stroke lesions (which can only be seen as a change in image intensity) and one set with necrotic stroke lesions (which are cleared out and filled with cerebrospinal fluid (CSF)). The original segmentation routine in SPM8 does not produce correct segmentation results, having problems with lesional and paralesional areas. Mohamed Seghier's ALI, an automatic segmentation routine developed to handle lesions as a tissue class of their own, does not produce satisfactory results either. The new segmentation routine in SPM8 produces the best results, especially if the improved TPMs from Chris Rorden (professor at the Georgia Institute of Technology) are used. Unfortunately, the layer of CSF is not continuous. The segmentation result can still be used in a FEM simulation, although the result of the simulation will not be ideal. None of the automatic segmentation routines evaluated produce an acceptable result (see Figure 5.7) for stroke patients. Necrotic stroke lesions do not affect the segmentation result as much as acute ones, especially if there is only a small amount of scar tissue present at the lesion site. The new segmentation routine in SPM8 has the brightest future, although changes need to be made to ensure anatomically correct segmentation results. Post-processing algorithms, relying on morphological prior constraints, can improve the segmentation result further.
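The morphological post-processing mentioned in the last sentence could look roughly like the following sketch, which closes small gaps in a synthetic binary CSF mask to restore a continuous layer; the mask and structuring-element size are arbitrary choices, not taken from the thesis.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)

# Synthetic binary "CSF layer": a ring (2D slice stand-in) with holes punched in it.
yy, xx = np.mgrid[0:128, 0:128]
radius = np.hypot(yy - 64, xx - 64)
csf = (radius > 40) & (radius < 46)
csf &= rng.random(csf.shape) > 0.2          # drop 20% of voxels to break continuity

# Morphological closing (dilation then erosion) bridges small discontinuities,
# enforcing the prior that the CSF layer should be continuous.
closed = ndimage.binary_closing(csf, structure=np.ones((5, 5)))

print("CSF voxels before:", int(csf.sum()), "after closing:", int(closed.sum()))
```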
60

Segmentation analysis for the edible oil market by health-function edible oil products

Chang, Chih-Yue 11 August 2004 (has links)
Thanks to improved manufacturing techniques, everyday edible oil has become healthier and better for consumers. Companies keep adding health functions to their oil products in order to make them popular in the market, and each company seeks to establish its main competitive niche in the everyday edible oil market. To achieve these objectives, this study used a questionnaire survey with random sampling, combined with a focus group approach, to uncover the implicit items that matter when customers purchase everyday edible oil. The data were analyzed with SPSS 11.0: factor analysis extracted seven factors from the lifestyle variables and three factors from the product benefit variables, and cluster analysis separated respondents into four consumer groups. The study then characterized each group according to the significant demographic variables and mapped the four clusters onto four distinct target markets, describing a target marketing strategy for each and indicating related directions that manufacturers can develop in the future.
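A sketch of the factor-analysis-plus-clustering pipeline described above is given below using scikit-learn on fabricated survey responses; the factor and cluster counts follow the abstract, but the data, item counts, and variable names are invented.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_respondents, n_lifestyle_items, n_benefit_items = 400, 20, 9

# Fabricated Likert-style survey responses (1-5).
lifestyle = rng.integers(1, 6, size=(n_respondents, n_lifestyle_items)).astype(float)
benefits = rng.integers(1, 6, size=(n_respondents, n_benefit_items)).astype(float)

# Factor extraction: seven lifestyle factors and three product-benefit factors.
life_scores = FactorAnalysis(n_components=7, random_state=0).fit_transform(
    StandardScaler().fit_transform(lifestyle))
benefit_scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(
    StandardScaler().fit_transform(benefits))

# Cluster respondents on the factor scores into four consumer groups.
features = np.hstack([life_scores, benefit_scores])
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for g in range(4):
    print(f"segment {g}: {np.sum(groups == g)} respondents")
```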
