561

Consumer Behaviors of Taiwan's Wine Market - High and Low Involvement of Wine

Chiang, Pei-fang 17 July 2008 (has links)
The past decade has seen a steady increase in wine consumption in Taiwan, and with that increase comes the need to understand how consumers choose wine. Market segmentation is the central concept of this study, whose aim is to characterize wine-consumption behavior in different segments. A range of behavioral and demographic information was collected through a questionnaire survey. In particular, participants were asked to indicate, on average, how many bottles of wine they purchased and how many hours they spent acquiring wine information per month. This information forms the basis of the segmentation and, combined with other behavioral information, yields profiles of consumers with high and low wine involvement. Cluster analysis was performed to divide the respondents into two groups; cross-tabulation and chi-square tests were used to evaluate significant differences in behavioral variables; and factor analysis and one-way ANOVA were applied to determine which wine characteristics mattered to each segment. The results showed significant differences between the two groups in the choice of wine types and wine outlets, the way of acquiring wine information, the factors affecting wine choice, and the wine characteristics they emphasized.
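The two-variable segmentation described above can be sketched with a simple two-cluster k-means. This is an illustrative toy, not the thesis's actual analysis: the respondent data and variable names are invented, and the clustering here is a minimal pure-Python version of what a statistics package would do.

```python
# Hypothetical sketch: split survey respondents into high/low wine-involvement
# segments with 2-means clustering on two behavioral variables
# (bottles purchased per month, hours spent on wine information per month).

def two_means(points, iters=20):
    """Cluster 2-D points into two groups; returns (centers, labels)."""
    # Initialize centers at the points with smallest and largest coordinate sum.
    centers = [min(points, key=lambda p: p[0] + p[1]),
               max(points, key=lambda p: p[0] + p[1])]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        labels = [
            0 if (p[0] - centers[0][0])**2 + (p[1] - centers[0][1])**2
                 <= (p[0] - centers[1][0])**2 + (p[1] - centers[1][1])**2
            else 1
            for p in points
        ]
        # Update step: move each center to the mean of its members.
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centers[k] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, labels

# Toy respondents: (bottles/month, hours of information search/month).
respondents = [(1, 0.5), (2, 1), (1, 1), (8, 6), (10, 5), (9, 7)]
centers, labels = two_means(respondents)
```

The first three respondents fall into the low-involvement cluster and the last three into the high-involvement one; the cluster-wise behavioral comparisons (chi-square, ANOVA) would then be run on these group labels.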
562

Using Text mining Techniques for automatically classifying Public Opinion Documents

Chen, Kuan-hsien 19 January 2009 (has links)
In a democratic society, the number of public opinion documents grows daily, and there is a pressing need to classify these documents automatically. The traditional approach to document classification involves word segmentation and the use of stop words, a corpus, and grammar analysis to retrieve a document's key terms. However, as new terms emerge, traditional methods that rely on a dictionary or thesaurus may suffer lower accuracy. This study therefore proposes a new method that does not require a pre-built dictionary or thesaurus and is applicable to documents written in any language, including unstructured text. Specifically, the classification method employs a genetic algorithm. In this method, each training document is represented by several chromosomes, and the gene values of these chromosomes determine the document's characteristic terms. The fitness function, which the genetic algorithm requires to evaluate an evolved chromosome, considers the similarity to the chromosomes of documents of other types. This study used FAQ data from the Taipei City Mayor's e-mail box to evaluate the proposed method while varying the document length. The results show that the proposed method achieves an average accuracy of 89%, an average precision of 47%, and an average recall of 45%; the F-measure reaches up to 0.7. The results confirm that the number of training documents, their content, the similarity between document types, and the document length all affect the effectiveness of the proposed method.
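The core idea, a chromosome whose gene values select a class's characteristic terms, with a fitness that penalizes overlap with other classes, can be sketched as below. This is our own illustrative toy, not the thesis's exact algorithm: the vocabulary, term sets, and GA parameters are invented, and a real run would use many chromosomes per training document.

```python
# Illustrative sketch: evolve a bit-string chromosome that selects
# "characteristic terms" for one document class. Fitness rewards terms
# typical of the class and penalizes terms shared with other classes,
# echoing the idea that fitness considers similarity to the chromosomes
# of documents of other types.
import random

random.seed(42)

def fitness(chrom, vocab, own_terms, other_terms):
    """Score selected terms: +1 if typical of the class, -1 if shared."""
    score = 0
    for gene, term in zip(chrom, vocab):
        if gene:
            score += (term in own_terms) - (term in other_terms)
    return score

def evolve(vocab, own_terms, other_terms, pop_size=20, gens=60):
    pop = [[random.randint(0, 1) for _ in vocab] for _ in range(pop_size)]
    key = lambda c: fitness(c, vocab, own_terms, other_terms)
    for _ in range(gens):
        pop.sort(key=key, reverse=True)
        survivors = pop[:pop_size // 2]               # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(len(child))] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=key)

# Toy setup: terms typical of a "tax" complaint class vs. other classes.
vocab = ["tax", "refund", "noise", "park", "hello", "thanks"]
own = {"tax", "refund"}
other = {"noise", "park"}
best = evolve(vocab, own, other)
chosen = {t for g, t in zip(best, vocab) if g}
```

After enough generations the best chromosome selects the class-specific terms and drops the terms shared with other classes; neutral terms may appear either way.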
563

A Study of How the Medical Aesthetic Market Affects Consumers' Purchase Decisions

Wu, Shen-chung 25 August 2009 (has links)
Because Taiwan's National Health Insurance system strictly caps total health-care reimbursement, the combination of the medical and cosmetic professions into medical cosmetology has come to be regarded by medical institutions as one of the self-paid services most worth developing. In the past, dermatology and plastic surgery clinics held the professional advantage; however, without strict legal restrictions, medical cosmetology providers have mushroomed, making the market fiercely competitive. This study takes the population of the Kaohsiung area aged 18 and over as its research base. Building on consumer behavior theory and the related literature, it uses the EBM model as its framework and a purposive-sampling questionnaire survey: 400 questionnaires were issued and 375 valid samples obtained. The survey data were analyzed statistically to explore consumers' demographic variables, knowledge of medical cosmetology, consumer characteristics, purchase decision-making, and purchasing behavior, with one-way analysis of variance used to test differences between variables. The analysis found that the mainstream customers for medical cosmetic treatment in the Kaohsiung area are women, largely unmarried or young mature women, working in the service industry or in business, with a college education and a monthly income of NT$20,000 to NT$40,000. Of those who had received satisfactory treatment, 78% would return to the same clinic or recommend it to relatives and friends; of the 184 respondents with no medical cosmetic experience, 167 would consider accepting treatment. These are encouraging findings.
This study was designed to explore consumer demand for medical cosmetic treatment, in the hope of providing a reference for marketing strategy formulation in the foreseeable future. By applying Blue Ocean strategic thinking to find value and innovation in the medical cosmetology business model, providers can offer consumers high-quality, good-value treatment, find their own market position, avoid vicious competition, and create and expand consumer demand in the medical cosmetic market.
564

Design and Analysis of Table-based Arithmetic Units with Memory Reduction

Chen, Kun-Chih 01 September 2009 (has links)
In many digital signal processing applications, we often need special function units that compute complicated arithmetic functions such as the reciprocal and logarithm. Conventionally, the table-based design strategy implements such function units with lookup tables. However, the table size grows exponentially with the required precision. In this thesis, we propose two methods to reduce the table size: bottom-up non-uniform segmentation, and an approach that merges uniform piecewise interpolation with the Newton-Raphson method. Experimental results show significant table-size reductions in most cases.
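The second idea, a small uniform table combined with Newton-Raphson refinement, can be sketched for the reciprocal function. The segment count and input range below are illustrative choices, not the thesis's parameters: a small table gives a coarse initial guess, and one Newton-Raphson step roughly squares its accuracy, so the table can be far smaller than a direct-lookup table of the same precision.

```python
# Hedged sketch: approximate 1/x on [1, 2) from a small uniform lookup
# table, then refine with one Newton-Raphson iteration y' = y * (2 - x*y).

SEGMENTS = 16
# Table stores 1/m for the midpoint m of each uniform segment of [1, 2).
TABLE = [1.0 / (1.0 + (i + 0.5) / SEGMENTS) for i in range(SEGMENTS)]

def reciprocal(x):
    """Approximate 1/x for x in [1, 2) via table lookup + one NR step."""
    assert 1.0 <= x < 2.0
    y = TABLE[int((x - 1.0) * SEGMENTS)]   # initial guess from the table
    return y * (2.0 - x * y)               # Newton-Raphson refinement

approx = reciprocal(1.5)
```

The raw table guess is accurate to about 2^-5 here; since the NR step maps an error e to roughly x*e^2, the refined result is accurate to about 2^-9 with only 16 stored entries, which is the kind of trade-off the thesis exploits.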
565

An Enhanced Conditional Random Field Model for Chinese Word Segmentation

Huang, Jhao-ming 03 February 2010 (has links)
In Chinese, the smallest meaningful unit is a word, which is composed of a sequence of characters, and a sentence is a sequence of words without any separation between them. In information retrieval and data mining, a sequence of Chinese characters must be segmented before the resulting word segments can be used; this process is called Chinese word segmentation. Research on Chinese word segmentation has been conducted for many years. Although some recent work achieves very high performance, the recall on words that are not in the dictionary reaches only sixty to seventy percent. The approach described in this thesis uses linear-chain conditional random fields (CRFs) for more accurate Chinese word segmentation: a discriminatively trained model with two proposed feature templates decides the boundaries between characters. We also propose three further methods to improve the accuracy of the resulting segments: duplicate-word repartition, date-representation repartition, and segment refinement. In the experiments, we test several approaches and compare the results with those of Li et al. and of Lau and King on three Chinese word corpora. The results show that the improved feature template, which exploits prefix and postfix information, increases both recall and precision; for example, the F-measure reaches 0.964 on the MSR dataset. By detecting repeated characters, duplicated characters can be better repartitioned without extra resources, and wrongly segmented date expressions can be repartitioned by the proposed handling of numbers, dates, and measure words.
If a word is segmented differently from the corresponding gold-standard corpus, a proper segment can be produced by repartitioning the assembled segment composed of the current segment and its adjacent segment. In summary, for CRF-based Chinese word segmentation we propose an improved feature template and three methods that address specific segmentation problems.
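The feature-template idea can be sketched as a per-character feature extractor of the kind a linear-chain CRF toolkit consumes. This is an illustrative approximation, not the thesis's actual templates: the window features are the standard unigram/bigram ones, and the prefix/suffix character lists are invented stand-ins for the prefix/postfix information the thesis adds.

```python
# Illustrative CRF-style feature extraction for Chinese word segmentation:
# each character position gets unigram and bigram features over a window,
# plus hypothetical prefix/suffix indicator features (lists are our own).

COMMON_PREFIXES = {"第", "老"}   # example word-initial characters (assumed)
COMMON_SUFFIXES = {"們", "子"}   # example word-final characters (assumed)

def features(sent, i):
    """Feature strings for the character at position i of sentence sent."""
    feats = [f"U0={sent[i]}"]                       # current character
    if i > 0:
        feats.append(f"U-1={sent[i-1]}")            # previous character
        feats.append(f"B-1={sent[i-1]}{sent[i]}")   # bigram with previous
    if i + 1 < len(sent):
        feats.append(f"U+1={sent[i+1]}")            # next character
    if sent[i] in COMMON_PREFIXES:
        feats.append("IS_PREFIX")                   # prefix indicator
    if sent[i] in COMMON_SUFFIXES:
        feats.append("IS_SUFFIX")                   # suffix indicator
    return feats

f1 = features("第三天", 0)
f2 = features("孩子們", 2)
```

A CRF trained over such features then labels each character with its position in a word (e.g. begin/inside/end/single), and the label sequence determines the word boundaries.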
566

Segmentation in a Distributed Real-Time Main-Memory Database

Mathiason, Gunnar January 2002 (has links)
To achieve better scalability, a fully replicated, distributed, main-memory database is divided into subparts called segments. Segments may have individual degrees of redundancy and other properties that can be used for replication control. Segmentation is examined as a way to decrease replication effort, lower memory requirements, and shorten node recovery times. Typical usage scenarios are distributed databases with many nodes where only a small number of the nodes share information. We present a framework for virtual full replication that implements segments with scheduled replication of updates between sharing nodes.
Selective replication control needs information about the application semantics, which is specified using segment properties, including consistency classes and other properties. We define a syntax for specifying the application semantics and segment properties of the segmented database. In particular, properties of segments that are subject to hard real-time constraints must be specified. We also analyze the potential improvements of such an architecture.
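What such a segment-property specification could look like is sketched below. The field names and the validation rule are our own guesses for illustration, not the thesis's actual syntax: each segment declares its degree of redundancy, a consistency class, and whether hard real-time constraints apply.

```python
# Minimal sketch of a segment-property specification (names are assumptions).
from dataclasses import dataclass

@dataclass
class SegmentSpec:
    name: str
    replicas: int              # degree of redundancy
    consistency: str           # consistency class, e.g. "immediate"/"eventual"
    hard_real_time: bool       # segment is subject to hard deadlines

    def validate(self):
        # Assumed rule: hard real-time segments need immediate consistency.
        if self.hard_real_time and self.consistency != "immediate":
            raise ValueError(f"{self.name}: hard real-time segments "
                             "require immediate consistency")
        return self

specs = [
    SegmentSpec("sensor_state", replicas=3, consistency="immediate",
                hard_real_time=True).validate(),
    SegmentSpec("audit_log", replicas=2, consistency="eventual",
                hard_real_time=False).validate(),
]
```

A replication controller could read such declarations to decide which nodes receive scheduled updates for each segment and how aggressively to propagate them.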
567

Optical Flow Based Structure from Motion

Zucchelli, Marco January 2002 (has links)
No description available.
568

Segmentation and Alignment of 3-D Transaxial Myocardial Perfusion Images and Automatic Dopamine Transporter Quantification

Bergnéhr, Leo January 2008 (has links)
Nuclear medical imaging such as SPECT (Single Photon Emission Computed Tomography) is an imaging modality widely used for measuring physiological properties of the human body. One very common type of SPECT examination measures myocardial perfusion (blood flow in the heart tissue), often to investigate, for example, a possible myocardial infarction (heart attack). For doctors to give a qualitative diagnosis based on these images, the images must first be segmented and rotated by a medical technologist. This is necessary because the heart of different patients, or of the same patient at different examinations, is not positioned and rotated identically, which is an essential assumption for the doctor when examining the images. Consequently, as technologists with different amounts of experience and expertise rotate images differently, inter-operator variability arises and can become a problem in the diagnostic process.
Another type of nuclear medical examination quantifies dopamine transporters in the basal ganglia of the brain. This is commonly done for patients showing symptoms of Parkinson's disease or similar disorders. To grade the severity of the disease, a scheme that calculates ratios between parts of the dopamine transporter area is often used. This is tedious work for the person performing the quantification, and although the acquired images are three-dimensional, quantification is too often performed on only one or a few slices of the image volume. As with myocardial perfusion examinations, variability between operators is a possible source of error here as well.
In this thesis, a novel method for automatically segmenting the left ventricle of the heart in SPECT images is presented. The segmentation is based on an intensity-invariant, local-phase-based approach, thus removing the difficulty posed by the commonly varying intensity of myocardial perfusion images. Additionally, the method is used to estimate the angle of the left ventricle of the heart. Finally, the method is slightly adjusted, and a new approach for automatically quantifying dopamine transporters in the basal ganglia using the DaTSCAN radiotracer is proposed.
569

Application of Mathematical Morphology to the Analysis of Lighting Conditions in Color Images

Risson, Valéry 17 December 2001 (has links) (PDF)
This thesis presents color-image analysis tools for extracting relevant information about the lighting conditions under which photographs were taken. Through these tools, we seek to understand the semantic content of an image by studying its lighting component. This knowledge is useful in various imaging domains such as augmented reality, film post-production, image indexing, and pattern recognition. The information intrinsic to the lighting component is not directly available from the image data: the information contained in a digitized image results from the integration and digitization of the incident spectral flux, which is modified by the geometric and spectral characteristics of the objects in the scene. We therefore identify semantic objects of interest for our problem and develop the tools needed to analyze them. To this end, we rely on physical illumination models to describe light-reflection phenomena and to understand how they translate into image data. First, we present a photometric approach to the analysis of lighting conditions, built around a shadow-detection tool for color images. The information contained in shadows allows the global luminance contrast of an image to be measured, which indicates the ratio of direct to ambient light. To refine the analysis, we also present a sky-detection tool that identifies the weather conditions at the time the photograph was taken: depending on whether the sky is overcast, clear, or cloudy, the lighting conditions vary and change the appearance of the image.
Second, we present a method for detecting the chrominance of the illuminant. This tool builds on the principle of chromatic convergence, based on the dichromatic reflection model: the convergence observed on inhomogeneous surfaces is used to identify the chrominance of the illuminant. The problems inherent in existing detection methods, tied to the statistical nature of their processing, are solved by using morphological color segmentation, which partitions the image into regions homogeneous in color and luminance, each region corresponding to a particular spectral reflectance. Region filtering then eliminates regions that do not satisfy the basic assumptions of the dichromatic reflection model. Finally, the chromatic convergence lines computed for each region are projected onto the chromaticity plane, where the intersection between the bundle of lines and the Planckian locus is determined; this point gives the chromaticity coordinates of the illuminant's chrominance.
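The final numerical step, finding where a bundle of chromatic convergence lines meets, can be sketched as a least-squares intersection in the chromaticity plane. This is an illustrative fragment under our own assumptions (toy lines, no Planckian-locus constraint): each region contributes one line, given as a point plus a unit direction, and the estimate is the point minimizing the sum of squared perpendicular distances to all lines.

```python
# Hedged sketch: least-squares intersection of convergence lines in the
# chromaticity plane. Intersecting with the Planckian locus is omitted.
import math

def lines_intersection(lines):
    """lines: list of ((px, py), (dx, dy)) with unit direction (dx, dy)."""
    # For each line, the projector M = I - d d^T measures perpendicular
    # offset; accumulate A = sum(M), b = sum(M p) and solve A x = b.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (dx, dy) in lines:
        m11, m12, m22 = 1 - dx * dx, -dx * dy, 1 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two toy convergence lines that cross at chromaticity (0.31, 0.32).
s = math.sqrt(0.5)
lines = [((0.41, 0.42), (s, s)),       # 45-degree line
         ((0.31, 0.42), (0.0, 1.0))]   # vertical line
x, y = lines_intersection(lines)
```

With noisy real regions the lines do not meet exactly, which is why a least-squares estimate (and the subsequent projection onto the Planckian locus) is used rather than a pairwise intersection.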
570

Segmentation and Structuring of a Video Document for the Characterization and Indexing of its Semantic Content

Demarty, Claire-Hélène 24 January 2000 (has links) (PDF)
The multitude of multimedia documents, already existing or created every day, confronts us with the problem of searching for information in gigantic databases that make any fully manual indexing impossible. In this context, it has become necessary to design and build tools capable, if not of extracting all the semantic content of a given document, at least of producing a first structuring of it automatically. Restricting itself to video documents, this thesis sets out to build automatic tools that perform a two-stage structuring. The first stage is linear: it decomposes a video document into entities ranging from the scene down to the frame, by way of the shot and the sub-shot. The second stage is relational: it extracts relations by bringing out syntactic or semantic links of any kind between two entities of any type. Besides being general and automatic, the tools we present are designed according to a precise methodology: they use only simple, low-level image-processing criteria, in particular from mathematical morphology, which, combined with one another and with logical decision rules, already achieve a coherent structuring that is effective and representative of high-level semantic content. This choice also makes our tools very fast, since their overall execution time is below real time. They are validated through numerous examples and applications, drawn essentially from television news broadcasts.
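The flavor of the linear stage, cutting a video into shots with a simple low-level criterion plus a decision rule, can be sketched as below. This is an illustrative stand-in, not the thesis's morphological method: it detects cuts by thresholding the difference between consecutive frame histograms, with the frames, bin count, and threshold all invented for the example.

```python
# Simple illustrative shot-cut detector: a low-level criterion (histogram
# difference) combined with a logical decision rule (a threshold).

def histogram(frame, bins=4, levels=256):
    """Coarse gray-level histogram of a frame given as a list of pixels."""
    h = [0] * bins
    for pixel in frame:
        h[pixel * bins // levels] += 1
    return h

def shot_cuts(frames, threshold=0.5):
    """Return indices i where a cut occurs between frames i-1 and i."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        # L1 distance between histograms, normalized by frame size (in [0, 2]).
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / len(frames[i])
        if diff > threshold:
            cuts.append(i)
    return cuts

# Toy "video": 4-pixel frames; a dark scene, then a cut to a bright scene.
video = [[10, 20, 15, 12], [11, 22, 14, 13],
         [240, 250, 245, 230], [241, 251, 246, 231]]
cuts = shot_cuts(video)
```

The detected cut boundaries would then feed the higher levels of the hierarchy (sub-shots grouped into shots, shots into scenes), where the relational stage looks for links between entities.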
