About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

以民族誌決策樹與模糊本體論法研究失智症照護之供需 / Investigation of the long-term institutional care requirements of patients with dementia and their families by qualitative and quantitative analysis

張清為, Chang, Chingwei Unknown Date
Over the past decades, the number of people with dementia in Taiwan has grown steadily, and most receive care at several levels, including medication, nursing care, rehabilitation and occupational therapy; research on the effectiveness of, and demand for, this care nevertheless remains scarce. This study therefore combines qualitative and quantitative methods to explore the actual decision processes and principal needs of family caregivers of dementia patients, and to examine how treatment outcomes at the case hospital relate to patients' condition on admission, taking a supply-and-demand view of dementia care in central Taiwan to identify where the quality of medical services can be improved. In the qualitative stage, an ethnographic decision tree model is used to elicit the criteria and sequence by which families decide whether to place a dementia patient in institutional care. In-depth interviews show that the patient's degree of dementia is the most important criterion affecting the family's decision; the other criteria include ethical and filial norms, care burden, the patient's need for other professional medical care, and the institution's facilities and environment. Ordering these criteria by priority, urgency and causal relationship yields a decision tree, whose predictions were validated against a further fifty families with 92% accuracy. In the quantitative stage, the study examines the effectiveness of occupational therapy for residents admitted in different conditions. Residents admitted in better condition take part in the occupational therapy designed by the institution more actively, and therefore have a greater chance of their condition improving or remaining stable; residents who participate passively respond far less well and are more likely to deteriorate, largely because residents in poorer condition tend to be aggressive and to resist treatment, which makes care more difficult. The study therefore recommends that families value occupational therapy and interaction with the patient, whether at home or in an institution, as early as possible: accepting therapy and interaction in the early stage of dementia gives a better chance of a good outcome, whereas waiting forfeits the best opportunity to improve the patient's condition.
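The elicited decision model lends itself to a compact encoding. Below is a minimal sketch of how such an ethnographic decision tree can be expressed and validated in Python; the criteria order follows the abstract (degree of dementia first, then norms, burden, other care needs, environment), but the branching, thresholds and sample case are hypothetical illustrations, not the thesis's elicited model.

```python
# Hypothetical encoding of an ethnographic decision tree for the
# institutionalization decision. Criteria order follows the abstract;
# the branch logic itself is an illustrative assumption.

def institutionalization_decision(case: dict) -> str:
    """Return the predicted family decision for one dementia case."""
    if case["dementia_severity"] == "severe":
        # Degree of dementia dominates all other criteria in the model.
        if case["needs_other_professional_care"]:
            return "institutional care"
        return "institutional care" if case["care_burden"] == "high" else "home care"
    if case["filial_norms_strong"]:
        # Strong filial obligation keeps mild/moderate patients at home.
        return "home care"
    if case["care_burden"] == "high" and case["institution_environment_ok"]:
        return "institutional care"
    return "home care"

# Validation against held-out cases, in the spirit of the 50-family check:
cases = [
    {"dementia_severity": "severe", "needs_other_professional_care": True,
     "filial_norms_strong": True, "care_burden": "high",
     "institution_environment_ok": True, "actual": "institutional care"},
]
hits = sum(institutionalization_decision(c) == c["actual"] for c in cases)
print(f"predictive accuracy: {hits / len(cases):.0%}")
```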
52

DifFUZZY : a novel clustering algorithm for systems biology

Cominetti Allende, Ornella Cecilia January 2012
Current studies of the highly complex pathobiology and molecular signatures of human disease require the analysis of large sets of high-throughput data, from clinical to genetic expression experiments, containing a wide range of information types. A number of computational techniques are used to analyse such high-dimensional bioinformatics data. In this thesis we focus on the development of a novel soft clustering technique, DifFUZZY, a fuzzy clustering algorithm applicable to a larger class of problems than other soft clustering approaches. This method is better at handling datasets containing clusters that are curved, elongated or of different dispersion. We show that DifFUZZY outperforms several frequently used clustering algorithms on a range of synthetic and real datasets. Furthermore, a quality measure based on the diffusion distance developed for DifFUZZY is presented, which is employed to automate the choice of its main parameter. We later apply DifFUZZY and other techniques to data from a clinical study of children from The Gambia with different types of severe malaria. The first step was to identify the most informative features in the dataset that allowed us to separate the different groups of patients. This led us to reproduce the World Health Organisation classification for severe malaria syndromes and to obtain a reduced dataset for further analysis. In order to validate these features as relevant for malaria across the continent, and not only in The Gambia, we used a larger dataset for children from different sites in Sub-Saharan Africa. With the use of a novel network visualisation algorithm, we identified pathobiological clusters from which we made and subsequently verified clinical hypotheses. We finish by presenting conclusions and future directions, including image segmentation and clustering time-series data. We also suggest how we could bridge data modelling with bioinformatics by embedding microarray data into cell models. Towards this end we take as a case study a multiscale model of the intestinal crypt using a cell-vertex model.
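DifFUZZY itself is defined in the thesis, but the general idea of deriving soft memberships from a diffusion distance can be sketched briefly. The following is a generic illustration under that interpretation: the kernel width sigma, diffusion time t and prototype choice are assumptions, and this is not the published DifFUZZY procedure.

```python
# Generic diffusion-distance soft clustering sketch (not DifFUZZY itself):
# build a local-affinity graph, diffuse it for t steps, and derive fuzzy
# memberships from diffusion distances to cluster prototypes.
import numpy as np

def diffusion_memberships(X, centroids, sigma=1.0, t=8):
    # Gaussian-kernel affinity matrix on pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic diffusion operator
    Pt = np.linalg.matrix_power(P, t)          # t-step diffusion

    def ddist(i, j):
        # Diffusion distance between points i and j: distance of rows of P^t.
        return np.linalg.norm(Pt[i] - Pt[j])

    # Membership of each point in each cluster is inversely proportional to
    # its diffusion distance to the data point nearest the cluster centroid.
    proto = [int(np.argmin(((X - c) ** 2).sum(-1))) for c in centroids]
    U = np.array([[1.0 / (ddist(i, p) + 1e-12) for p in proto]
                  for i in range(len(X))])
    return U / U.sum(axis=1, keepdims=True)

# Tiny usage example on two well-separated blobs:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
U = diffusion_memberships(X, centroids=[X[:30].mean(0), X[30:].mean(0)])
print(U[:3].round(3))
```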
53

PROPOSTA DE CONTROLE NEBULOSO BASEADO EM CRITÉRIO DE ESTABILIDADE ROBUSTA NO DOMÍNIO DO TEMPO CONTÍNUO VIA ALGORITMO GENÉTICO MULTIOBJETIVO. / A fuzzy control proposal based on a robust stability criterion in the continuous-time domain via a multiobjective genetic algorithm.

LIMA, Fernanda Maria Maciel de 31 August 2015
A Takagi-Sugeno (TS) fuzzy design with robust stability, based on gain- and phase-margin specifications and tuned by a multiobjective genetic algorithm in the continuous-time domain, is proposed in this master's thesis. A fuzzy c-means (FCM) clustering algorithm estimates the antecedent parameters and the number of rules of a TS fuzzy model from input and output experimental data of the plant to be controlled, while a least-squares algorithm estimates the consequent parameters. A multiobjective genetic strategy adjusts the parameters of a fuzzy PID controller so that the gain and phase margins of the fuzzy control system are close to the specified values. Two theorems are proposed that give necessary and sufficient conditions for the fuzzy PID controller design to ensure robust stability of the closed loop. The fuzzy PID controller was simulated in the Simulink environment and compared with lead and lag compensators. Experimental results obtained on a real-time control platform validate the proposed methodology and are compared with a fuzzy PID controller tuned by the Ziegler-Nichols method. The results demonstrate the effectiveness and practical feasibility of the proposed methodology.
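The identification scheme in the abstract (FCM for the antecedents, least squares for the consequents of a TS model) can be sketched as follows; the fixed rule count, the fuzzifier m and the toy data are illustrative assumptions, not the thesis's settings.

```python
# Minimal sketch of TS fuzzy model identification: FCM memberships define
# the antecedents, weighted least squares fits each rule's affine consequent.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # fuzzy partition matrix
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]     # cluster centres
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=-1) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)            # standard FCM update
    return U, V

def ts_consequents(X, y, U):
    # One affine model y = theta_k . [x, 1] per rule, fitted by least squares
    # weighted with the square roots of the rule memberships.
    Xa = np.hstack([X, np.ones((len(X), 1))])
    thetas = []
    for k in range(U.shape[1]):
        w = np.sqrt(U[:, k])[:, None]
        thetas.append(np.linalg.lstsq(Xa * w, (y[:, None] * w)[:, 0], rcond=None)[0])
    return thetas

def ts_predict(X, U, thetas):
    # Membership-weighted blend of the rule outputs (memberships sum to 1).
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return sum(U[:, k] * (Xa @ th) for k, th in enumerate(thetas))

# Toy demonstration on a nonlinear static map:
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
U, V = fcm(X)
y_hat = ts_predict(X, U, ts_consequents(X, y, U))
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```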
54

模糊統計分類及其在茶葉品質評定的應用 / Fuzzy statistical clustering and its application to tea quality evaluation

林雅慧, Lin, Ya-Hui Unknown Date
Research on the theory of fuzzy sets has been growing steadily since its inception in the mid-1960s, and the literature dealing with fuzzy cluster analysis in particular is quite extensive. Bezdek's fuzzy clustering algorithm improved on Dunn's c-means method, but it still has shortcomings: for instance, it does not consider weights and it treats the data as static. This study therefore extends Bezdek's method and proposes a weighted fuzzy clustering algorithm: when the evaluation factors are multivariate, fuzzy weights should be incorporated. In addition, a time factor is introduced so that the criterion function becomes a dynamic model, turning traditional fuzzy clustering from a static into a dynamic data form that better reflects real situations.
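For reference, here is a sketch of Bezdek's standard FCM objective together with one natural weighted, time-indexed variant in the spirit of the abstract; the second form is an illustrative assumption, not necessarily the thesis's exact criterion function.

```latex
% Standard FCM objective (Bezdek), minimized over memberships U and
% centres V, with fuzzifier m > 1 and sum_k u_{ik} = 1 for every i:
\[
  J_m(U,V) \;=\; \sum_{i=1}^{n} \sum_{k=1}^{c} u_{ik}^{\,m}\,
                 \lVert x_i - v_k \rVert^2 ,
  \qquad \sum_{k=1}^{c} u_{ik} = 1 .
\]
% One possible weighted, time-indexed extension: feature weights w_j and a
% time index t make the criterion dynamic (illustrative form only).
\[
  J_m(U,V;w) \;=\; \sum_{t=1}^{T} \sum_{i=1}^{n} \sum_{k=1}^{c}
      u_{ik}^{\,m}(t) \sum_{j=1}^{p} w_j\,
      \bigl( x_{ij}(t) - v_{kj}(t) \bigr)^{2} .
\]
```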
55

模糊族群在穩健相關係數與穩健迴歸分析之應用 / Applications of fuzzy clustering method in robust correlation coefficient and robust regression analysis

黃圓修, Hwang, Yuan Shiou Unknown Date
Outlying observations can arise in almost any study, and their presence may strongly distort the results. Many parameter estimators commonly used in statistics are highly sensitive to outliers. This study therefore applies a fuzzy clustering analysis combined with a maximum-likelihood estimation algorithm to parameter estimation, in order to remove the influence of outliers on the analysis results. The study focuses on the estimation of correlation and regression coefficients: the membership degrees obtained from the algorithm are used to compute robust correlation coefficients and robust regression coefficients, so that the parameters can be estimated correctly.
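A minimal sketch of the idea described above: fuzzy membership degrees act as weights, so that likely outliers (low membership in the main cluster) barely influence the correlation estimate. The membership values below are placeholders; the thesis obtains them from its fuzzy-clustering / maximum-likelihood algorithm.

```python
# Membership-weighted Pearson correlation: points with tiny weights
# (suspected outliers) contribute almost nothing to the estimate.
import numpy as np

def weighted_corr(x, y, w):
    """Membership-weighted Pearson correlation coefficient."""
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2 * x + rng.normal(scale=0.5, size=200)
x[:5] += 10; y[:5] -= 10                      # plant a few gross outliers
w = np.ones(200); w[:5] = 0.01                # placeholder memberships
print("classical:", np.corrcoef(x, y)[0, 1])  # dragged down by the outliers
print("robust:   ", weighted_corr(x, y, w))   # close to the true correlation
```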
56

Hydrologic Impacts Of Climate Change : Uncertainty Modeling

Ghosh, Subimal 07 1900
General Circulation Models (GCMs) are tools designed to simulate time series of climate variables globally, accounting for effects of greenhouse gases in the atmosphere. They attempt to represent the physical processes in the atmosphere, ocean, cryosphere and land surface. They are currently the most credible tools available for simulating the response of the global climate system to increasing greenhouse gas concentrations, and for providing estimates of climate variables (e.g. air temperature, precipitation, wind speed, pressure etc.) on a global scale. GCMs demonstrate significant skill at the continental and hemispheric spatial scales and incorporate a large proportion of the complexity of the global system; they are, however, inherently unable to represent local subgrid-scale features and dynamics. The spatial scale on which a GCM operates (e.g., 3.75° longitude x 3.75° latitude for the Coupled Global Climate Model, CGCM2) is very coarse compared to that of the hydrologic processes (e.g., precipitation in a region, streamflow in a river etc.) of interest in climate change impact assessment studies. Moreover, the accuracy of GCMs generally decreases from climate-related variables, such as wind, temperature, humidity and air pressure, to hydrologic variables such as precipitation, evapotranspiration, runoff and soil moisture, which are also simulated by GCMs. These limitations of GCMs restrict the direct use of their output in hydrology. This thesis deals with developing statistical downscaling models to assess climate change impacts, and methodologies to address GCM and scenario uncertainties in assessing climate change impacts on hydrology. Downscaling, in the context of hydrology, is a method to project hydrologic variables (e.g., rainfall and streamflow) at a smaller scale based on large-scale climatological variables (e.g., mean sea level pressure) simulated by a GCM. A statistical downscaling model is first developed in the thesis to predict the rainfall over the Orissa meteorological subdivision from GCM output of large-scale Mean Sea Level Pressure (MSLP). Gridded monthly MSLP data for the period 1948 to 2002 are obtained from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis project for a region spanning 15°N-25°N in latitude and 80°E-90°E in longitude that encapsulates the study region. The downscaling model comprises Principal Component Analysis (PCA), fuzzy clustering and linear regression. PCA is carried out to reduce the dimensionality of the large-scale MSLP and also to convert the correlated variables to uncorrelated variables. Fuzzy clustering is performed to derive the membership of the principal components in each of the clusters, and the memberships obtained are used in regression to statistically relate MSLP and rainfall. The statistical relationship thus obtained is used to predict the rainfall from GCM output. The rainfall predicted with the GCM developed by CCSR/NIES under the B2 scenario presents a decreasing trend for the non-monsoon period in the case study. Climate change impact assessment models developed from downscaled GCM output are subject to a range of uncertainties due to both 'incomplete knowledge' and 'unknowable future scenarios' (New and Hulme, 2000). 'Incomplete knowledge' mainly arises from inadequate information and understanding about the underlying geophysical processes of global change, leading to limitations in the accuracy of GCMs. This is also termed GCM uncertainty.
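The downscaling chain described above (PCA on the gridded MSLP field, fuzzy c-means memberships on the principal components, then a linear regression to rainfall) can be sketched compactly. Shapes, component counts, the cluster number and the random data below are illustrative assumptions, not the thesis's calibrated values.

```python
# Sketch of the PCA + fuzzy clustering + linear regression downscaling model.
import numpy as np

def pca_scores(X, k):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # first k uncorrelated scores

def fcm_memberships(Z, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(Z))
    for _ in range(iters):
        V = ((U ** m).T @ Z) / (U ** m).sum(axis=0)[:, None]
        d = np.linalg.norm(Z[:, None] - V[None], axis=-1) + 1e-12
        U = 1 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U

# Stand-in data: months x grid-points MSLP field and observed rainfall.
mslp = np.random.default_rng(2).normal(size=(660, 11 * 11))
rain = np.random.default_rng(3).gamma(2.0, 50.0, size=660)

Z = pca_scores(mslp, k=10)
U = fcm_memberships(Z)
F = np.hstack([Z, U, np.ones((len(Z), 1))])    # regressors incl. memberships
beta, *_ = np.linalg.lstsq(F, rain, rcond=None)
rain_hat = F @ beta                            # apply later to GCM-derived Z, U
print("calibration RMSE:", np.sqrt(np.mean((rain - rain_hat) ** 2)))
```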
Uncertainty due to 'unknowable future scenarios' is associated with the unpredictability in forecasts of the socio-economic and human behaviour underlying future Green House Gas (GHG) emission scenarios, and can be termed scenario uncertainty. Downscaled output of a single GCM with a single climate change scenario represents a single trajectory among a number of realizations derived using various GCMs and scenarios. Such a single trajectory alone cannot represent a future hydrologic scenario, and will not be useful in assessing hydrologic impacts of climate change. Nonparametric methods are developed in the thesis to model GCM and scenario uncertainty for prediction of drought scenarios, with the Orissa meteorological subdivision as a case study. Using the downscaling technique described above, future rainfall scenarios are obtained for all available GCMs and scenarios. After correcting for bias, an equiprobability transformation is used to convert the precipitation into the Standardized Precipitation Index-12 (SPI-12), an annual drought indicator, based on which a drought may be classified as severe, mild, etc. Disagreements are observed between the predictions of SPI-12 resulting from different GCMs and scenarios. Assuming SPI-12 to be a random variable at every time step, nonparametric methods based on kernel density estimation and orthonormal series are used to determine the nonparametric probability density function (pdf) of SPI-12. Probabilities for the different categories of drought are computed from the estimated pdf. It is observed that there is an increasing trend in the probability of extreme drought and a decreasing trend in the probability of near-normal conditions in the Orissa meteorological subdivision. The single-valued Cumulative Distribution Functions (CDFs) obtained from nonparametric methods suffer from the following limitations: (a) simulations for all scenarios are not available for all the GCMs, leading to the possibility that incorporating the missing climate experiments would result in a different CDF; (b) the method may simply overfit a multimodal distribution from a relatively small sample of GCMs with a limited number of scenarios; and (c) the set of all scenarios may not fully compose the universal sample space, so a precise single-valued probability distribution may not be representative enough for applications. To overcome these limitations, an interval regression is performed to fit an imprecise normal distribution to the SPI-12, providing a band of CDFs instead of a single-valued CDF. Such a band of CDFs represents the incomplete nature of knowledge, reflecting the extent of what is ignored in the climate change impact assessment. From the imprecise CDFs, imprecise probabilities of the different categories of drought are computed. These results also show an increasing trend in the bounds of the probability of extreme drought and a decreasing trend in the bounds of the probability of near-normal conditions in the Orissa meteorological subdivision. Water resources planning requires information about future streamflow scenarios in a river basin to combat hydrologic extremes resulting from climate change. It is therefore necessary to downscale GCM projections for streamflow prediction at river basin scales.
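The kernel-density step can be sketched as follows: pool the SPI-12 realizations from all GCM/scenario combinations at one time step, estimate a density, and integrate it over SPI drought classes. The class boundaries below follow common SPI practice (e.g. SPI below -2 as "extreme drought"); the thesis's exact categories, and its orthonormal-series estimator, may differ.

```python
# Nonparametric pdf of SPI-12 across GCMs/scenarios, and drought-class
# probabilities obtained by integrating the estimated density.
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in SPI-12 values from different GCM/scenario runs at one time step:
spi12 = np.array([-2.4, -1.1, 0.3, -1.8, -0.2, 0.9, -2.1, -0.7])
kde = gaussian_kde(spi12)

grid = np.linspace(-5, 5, 2001)
pdf = kde(grid)
pdf /= np.trapz(pdf, grid)                     # renormalize on the finite grid

def prob(lo, hi):
    mask = (grid >= lo) & (grid < hi)
    return np.trapz(pdf[mask], grid[mask])

print("P(extreme drought, SPI <= -2): ", prob(-5, -2))
print("P(near normal, -1 <= SPI < 1): ", prob(-1, 1))
```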
A statistical downscaling model based on PCA, fuzzy clustering and the Relevance Vector Machine (RVM) is developed to predict the monsoon streamflow of the Mahanadi river at Hirakud reservoir from GCM projections of large-scale climatological data. Surface air temperature at 2 m, Mean Sea Level Pressure (MSLP), geopotential height at a pressure level of 500 hectopascal (hPa) and surface specific humidity are considered as the predictors for modeling Mahanadi streamflow in the monsoon season. PCA is used to reduce the dimensionality of the predictor dataset and also to convert the correlated variables to uncorrelated variables. Fuzzy clustering is carried out to derive the membership of the principal components in each of the clusters, and the memberships thus obtained are used in the RVM regression model. RVM involves fewer relevant vectors, and its chance of overfitting is lower than that of the Support Vector Machine (SVM). Different kernel functions are compared, and it is concluded that the heavy-tailed Radial Basis Function (RBF) performs best for streamflow prediction with GCM output in the case considered. The GCM CCSR/NIES under the B2 scenario projects a decreasing trend in future monsoon streamflow of the Mahanadi, which is likely to be due to high surface warming. A possibilistic approach is developed next for modeling GCM and scenario uncertainty in the projection of monsoon streamflow of the Mahanadi river. Three GCMs, Center for Climate System Research/National Institute for Environmental Studies (CCSR/NIES), Hadley Climate Model 3 (HadCM3) and Coupled Global Climate Model 2 (CGCM2), with two scenarios, A2 and B2, are used for the purpose. Possibilities are assigned to GCMs and scenarios based on their system performance measure in predicting the streamflow during the years 1991-2005, when signals of climate forcing are visible. The possibilities are used as weights for deriving the possibilistic mean CDF for the three standard time slices, 2020s, 2050s and 2080s. It is observed that the streamflow value at which the possibilistic mean CDF reaches 1 decreases with time, which shows a reduction in the probability of occurrence of extreme high-flow events in future, and therefore a likely decreasing trend in the monthly peak flow. One possible reason for such a decreasing trend may be the significant increase in temperature due to climate warming. The simultaneous reduction in Mahanadi streamflow and increase in extreme drought in the Orissa meteorological subdivision are likely to pose a challenge for water resources engineers in meeting water demands in the future.
57

Vibrational spectroscopy of keratin fibres : A forensic approach

Panayiotou, Helen January 2004
Human hair profiling is an integral part of a forensic investigation, but it is one of the most technically difficult subjects in forensic science. This thesis describes the research and development of a novel approach for the rapid identification of unknown human and other related keratin fibres found at a crime scene. The work presented here is developed systematically and considers sample collection, sample preparation, analysis and interpretation of spectral data for the profiling of hair fibres encountered in criminal cases. Spectral comparison of fibres was facilitated with the use of chemometrics methods such as PCA, SIMCA and fuzzy clustering, and the less common approach of multi-criteria decision making methodology (MCDM). The aim of the thesis was to investigate the potential of some vibrational spectroscopy techniques for matching and discrimination of single keratin hair fibres in the context of forensic evidence. The first objective (chapter 3) of the thesis was to evaluate the use of Raman and FT-IR micro-spectroscopy techniques for the forensic sampling of hair fibres and to propose the preferred technique for future forensic hair comparisons. The selection of the preferred technique was based on criteria such as spectral quality, ease of use, rapid analysis and universal application to different hair samples. FT-IR micro-spectroscopy was found to be the most appropriate technique for hair analysis because it enabled the rapid collection of spectra from a wide variety of hair fibres. Raman micro-spectroscopy, on the other hand, was hindered by fluorescence problems and did not allow the collection of spectra from pigmented fibres. This objective has therefore shown that FT-IR micro-spectroscopy is the preferred spectroscopic technique for forensic analysis of hair fibres, whilst Raman spectroscopy is the least preferred. The second objective (chapter 3) was to investigate, through a series of experiments, the effect of chemical treatment on the micro-environment of human hair fibres. The effect of bleaching agents on the hair fibres was studied in some detail at different treatment times, and the results indicate a significant change in the chemical environment of the secondary structure of the hair fibre along with changes in the C-C backbone structure. One of the most important outcomes of this research was the behaviour of the α-helix during chemical treatment. The hydrogen bonding in the α-helix provides for the stable structure of the fibre, and therefore any disruption to the α-helix will inevitably damage the molecular structure of the fibre. The results highlighted the behaviour of the α-helix, which undergoes a significant decrease in content during oxidation and is partly converted to a random-coil structure, whilst the β-sheet component of the secondary structure remains unaffected. The reported investigations show that the combination of FT-IR and Raman micro-spectroscopy can provide an insight and understanding into the complex chemical properties and reactions within a treated hair fibre. Importantly, this work demonstrates that, with the aid of chemometrics, it is possible to investigate simultaneously FT-IR and Raman micro-spectroscopic information from oxidised hair fibres collected from one subject and treated at different times. The discrimination and matching of hair fibres on the basis of treatment has potential forensic applications.
The third objective (chapter 4) attempted to expand the forensic application of FT-IR micro-spectroscopy to other keratin fibres. Animal fibres are commonly encountered at crime scenes, and it thus becomes important to establish the origin of those fibres. The aim of this work was to establish the forensic applications of FT-IR micro-spectroscopy to animal fibres and to investigate any fundamental molecular differences between these fibres. The results established a discrimination between fibres consisting predominantly of α-helix and those containing mainly a β-sheet structure. More importantly, it was demonstrated through curve-fitting and chemometrics that each keratin fibre contains a characteristic secondary structure arrangement. The work presented here is the first detailed FT-IR micro-spectroscopic study, utilising chemometrics as well as MCDM methods, of a wide range of keratin fibres which are commonly found as forensic evidence. Furthermore, it was demonstrated with the aid of the rank-ordering MCDM methods PROMETHEE and GAIA that it is possible to rank and discriminate keratin fibres according to their molecular characteristics obtained from direct measurements together with information sourced from the literature. The final objective (chapter 5) of the thesis was to propose an alternative method for the discrimination and matching of single scalp human hair fibres through the use of FT-IR micro-spectroscopy and chemometrics. The work successfully demonstrated, through a number of case scenarios, the application of the technique for the identification of variables such as gender and race for an unknown single hair fibre. In addition, it was also illustrated that known hair fibres (from the suspect or victim) can be readily matched to the unknown hair fibres found at the crime scene. This is the first time that a substantial, systematic FT-IR study of forensic hair identification has been presented. The research has shown that it is possible to model and correlate an individual's characteristics with hair properties at the molecular level with the use of chemometrics methods. A number of different, important forensic variables of immediate use to police in a crime scene investigation, such as gender, race, treatment, and black and white hair fibres, were investigated. Blind samples were successfully applied both to validate available experimental data and to extend the current database of experimental determinations. Protocols were proposed for the application of this methodology in the future. The proposed FT-IR methodology presented in this thesis has provided an alternative approach to the characterisation of single scalp human hair fibres. The technique enables the rapid collection of spectra, followed by the objective analytical capabilities of chemometrics, to successfully discriminate animal fibres, human hair fibres from different sources, treated from untreated hair fibres, as well as black and white hair fibres, on the basis of their molecular structure. The results can be readily produced and explained in the courts of law. Although the proposed, relatively fast FT-IR technique is not aimed at displacing the two slower existing methods of hair analysis, namely comparative optical microscopy and DNA analysis, it has given a new dimension to the characterisation of hair fibres at a molecular level, providing a powerful tool for forensic investigations.
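The curve-fitting step mentioned above can be illustrated with a small sketch: decompose an amide I band of an FT-IR spectrum into Gaussian components and report secondary-structure area fractions. The band-centre assignments (alpha-helix near 1655 cm^-1, beta-sheet near 1630 cm^-1, turns/coil near 1680 cm^-1) are textbook approximations, and the synthetic spectrum is a stand-in; the thesis's fitting protocol may differ.

```python
# Gaussian band fitting of an amide I region to estimate secondary-structure
# fractions from an FT-IR spectrum.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    # p = (A1, c1, s1, A2, c2, s2, ...): a sum of Gaussian bands.
    y = np.zeros_like(x)
    for A, c, s in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-(x - c) ** 2 / (2 * s ** 2))
    return y

wn = np.linspace(1600, 1700, 300)              # wavenumber axis, cm^-1
spectrum = gaussians(wn, 1.0, 1655, 9, 0.6, 1630, 8, 0.3, 1680, 7) \
           + np.random.default_rng(0).normal(0, 0.01, wn.size)

# Initial guesses at the conventional band centres:
p0 = [1.0, 1655, 8, 0.5, 1630, 8, 0.3, 1680, 8]
popt, _ = curve_fit(gaussians, wn, spectrum, p0=p0)

areas = np.array([A * s * np.sqrt(2 * np.pi)
                  for A, s in zip(popt[0::3], popt[2::3])])
for label, frac in zip(["alpha-helix", "beta-sheet", "turn/coil"],
                       areas / areas.sum()):
    print(f"{label}: {frac:.1%}")
```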
58

AIM - A Social Media Monitoring System for Quality Engineering

Bank, Mathias 14 June 2013
In the last few years the World Wide Web has dramatically changed the way people communicate with each other. The growing availability of Social Media systems like Internet fora, weblogs and social networks ensures that the Internet is today what it was originally designed for: a technical platform in which all users are able to interact with each other. Nowadays, there are billions of user comments available discussing all aspects of life, and the data source is still growing. This thesis investigates whether it is possible to use this growing amount of freely provided user comments to extract quality-related information. The concept is based on the observation that customers are not only posting marketing-relevant information. They also publish product-oriented content, including positive and negative experiences. It is assumed that this information represents a valuable data source for quality analyses: the original voices of the customers promise a more exact and more concrete definition of "quality" than the one available to manufacturers or market researchers today. However, the huge amount of unstructured user comments makes their evaluation very complex. It is impossible for an analyst to manually investigate the provided customer feedback. Therefore, Social Media specific algorithms have to be developed to collect, pre-process and finally analyze the data. This has been done by the Social Media monitoring system AIM (Automotive Internet Mining) that is the subject of this thesis. It investigates how manufacturers, products, product features and related opinions are discussed in order to estimate the overall product quality from the customers' point of view. AIM is able to track different types of data sources using a flexible multi-agent based crawler architecture. In contrast to classical web crawlers, the multi-agent based crawler supports individual crawling policies to minimize the download of irrelevant web pages. In addition, an unsupervised wrapper induction algorithm is introduced to automatically generate content extraction parameters specific to the crawled Social Media systems. The extracted user comments are analyzed by different content analysis algorithms to gain a deeper insight into the discussed topics and opinions. Three different topic types are supported, depending on the analysis needs:
* Highly reliable analysis results are produced by a special context-aware, taxonomy-based classification system.
* Fast ad-hoc analyses are applied on top of classical fulltext search capabilities.
* Finally, AIM supports the detection of blind spots by using a new fuzzified hierarchical clustering algorithm, which generates topical clusters while supporting multiple topics within each user comment.
All three topic types are treated in a unified way, enabling an analyst to apply all methods simultaneously and interchangeably. The systematically processed user comments are visualized within an easy and flexible interactive analysis frontend. Special abstraction techniques support the investigation of thousands of user comments with minimal time effort.
Specifically created indices show the relevance of, and customer satisfaction with, a given topic. / In recent years the World Wide Web has changed dramatically. It was once primarily an information source in which only a small share of users could publish content; it has since evolved into a communication platform in which every user can actively take part. The resulting volume of data covers every aspect of daily life, including quality topics. Analysing these data promises to improve quality assurance measures considerably, because it reaches topics that are difficult to measure with classical sensors. The systematic and reproducible analysis of user-generated data, however, requires adapting existing tools and developing new Social Media specific algorithms. This thesis therefore builds an entirely new Social Media monitoring system with which an analyst can examine thousands of user comments with minimal time effort. Applying the system has revealed several advantages that make it possible to identify the customer-driven definition of "quality". / Contents: 1 Introduction (1.1 Chapter Overview); 2 Problem Definition and Data Environment (2.1 Commonly Applied Quality Sensors; 2.2 The Growing Importance of Social Media; 2.3 Social Media based Quality Experience; 2.4 Change to the Holistic Concept of Quality; 2.5 Definition of User Generated Content and Social Media; 2.6 Social Media Software Architectures); 3 Data Collection (3.1 Related Work; 3.2 Requirement Analysis; 3.3 A Blackboard Crawler Architecture; 3.4 Semi-supervised Wrapper Generation; 3.5 Structure Modification Detection; 3.6 Conclusion); 4 Hierarchical Fuzzy Clustering (4.1 Related Work; 4.2 Generalization of Agglomerative Crisp Clustering Algorithms; 4.3 Topic Groups Generation; 4.4 Evaluation; 4.5 Conclusion); 5 A Social Media Monitoring System for Quality Analyses (5.1 Related Work; 5.2 Pre-Processing Workflow; 5.3 Quality Indices; 5.4 AIM Architecture; 5.5 Evaluation; 5.6 Conclusion); 6 Conclusion and Perspectives (6.1 Contributions and Conclusions; 6.2 Perspectives); Bibliography.
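The "fuzzified" hierarchical idea, letting one comment belong to several topical clusters, can be sketched generically: cut an agglomerative dendrogram into topic groups, then soften the crisp assignment with graded memberships. This is a simple illustration of the concept, not the generalization of agglomerative crisp clustering that chapter 4 of the thesis defines; the membership decay rate is an arbitrary assumption.

```python
# Crisp agglomerative clustering of comment vectors, softened into fuzzy
# topic memberships so that comments may span multiple topics.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                    # 30 comment vectors (e.g. tf-idf)

Z = linkage(pdist(X, metric="cosine"), method="average")
labels = fcluster(Z, t=4, criterion="maxclust") # crisp cut into topic groups

# Topic centroids from the crisp cut:
ks = np.unique(labels)
cents = np.array([X[labels == k].mean(axis=0) for k in ks])

def cosdist(a, b):
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Membership decays with cosine distance to each topic centroid, so comments
# near several topics keep several non-negligible memberships.
U = np.array([[np.exp(-3 * cosdist(x, c)) for c in cents] for x in X])
U /= U.sum(axis=1, keepdims=True)
multi_topic = (U > 0.25).sum(axis=1) > 1
print(f"{multi_topic.sum()} of {len(X)} comments span more than one topic")
```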
59

Οργάνωση και διαχείριση βάσεων εικόνων βασισμένη σε τεχνικές εκμάθησης δεδομένων πολυσχιδούς δομής / Organization and management of image databases based on manifold learning techniques

Μακεδόνας, Ανδρέας 22 December 2009
The research subject of this thesis is colour image processing using graph-theoretic methods, image retrieval, and the organization and management of image databases in the reduced feature space using graph and pattern recognition methods, with applications to multimedia. The problems were approached in their general form and addressed along the following points: 1. Development of techniques for extracting image visual attributes based on low-level features (colour and texture information), to be used in image similarity and retrieval applications. 2. Calculation of metrics and distances in the feature space. 3. Study of the image manifolds created in the selected feature space. 4. Application of dimensionality reduction techniques and production of two-dimensional representations (biplots). 5. Application of the proposed methodologies using perceptual image distances. Graph theory and pattern recognition methodologies were incorporated to provide novel solutions both to colour image retrieval from databases and to the organization and management of such image databases. The thesis brings image processing closer to methods drawn from graph theory, statistics and pattern recognition. Throughout the thesis, particular emphasis was placed on finding the best trade-off between the effectiveness and the efficiency of the proposed algorithmic procedures. The extensive experimental results carried out at all stages of the study demonstrate the enhanced performance of the proposed methodologies.
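The chain the abstract describes (low-level colour features per image, a pairwise distance matrix, a two-dimensional map of the image manifold) can be sketched briefly. The feature choice (a joint HSV histogram) and the embedding method (classical multidimensional scaling) stand in for the graph-theoretic machinery of the thesis, and the random "images" are placeholders.

```python
# Colour features -> distance matrix -> 2-D embedding of an image database.
import numpy as np

def colour_histogram(img_hsv, bins=8):
    # img_hsv: H x W x 3 array scaled to [0, 1]; normalized joint histogram.
    h, _ = np.histogramdd(img_hsv.reshape(-1, 3), bins=(bins, bins, bins),
                          range=((0, 1),) * 3)
    return (h / h.sum()).ravel()

def classical_mds(D, k=2):
    # Double-centre the squared distances, then take the top-k eigenvectors.
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(4)
imgs = rng.random((20, 32, 32, 3))              # 20 toy "images" in HSV
F = np.array([colour_histogram(im) for im in imgs])
D = np.linalg.norm(F[:, None] - F[None], axis=-1)
coords = classical_mds(D)                       # 2-D map of the database
print(coords.shape)                             # (20, 2)
```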
60

Channel Probing for an Indoor Wireless Communications Channel

Hunter, Brandon 13 March 2003
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, as extended by Spencer et al., includes all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter, along with the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete-arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
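For reference, the Saleh-Valenzuela model named above can be sketched in a few lines: clusters arrive as a Poisson process with rate Lambda, rays within a cluster arrive with rate lambda, and mean power decays exponentially with cluster and ray delay (time constants Gamma and gamma). The parameter values below are placeholders, not the thesis's measured 2.4 GHz or 6 GHz estimates.

```python
# Minimal Saleh-Valenzuela multipath arrival generator (amplitudes and
# delays only; angles of arrival, as in Spencer's extension, are omitted).
import numpy as np

def saleh_valenzuela(Lambda=1/60, lam=1/5, Gamma=30.0, gamma=10.0,
                     n_clusters=5, rays_per_cluster=20, seed=0):
    rng = np.random.default_rng(seed)
    delays, gains = [], []
    T = 0.0                                     # first cluster at t = 0 ns
    for _ in range(n_clusters):
        tau = 0.0                               # first ray at the cluster start
        for _ in range(rays_per_cluster):
            mean_power = np.exp(-T / Gamma) * np.exp(-tau / gamma)
            # Rayleigh amplitude whose mean-square equals the S-V power decay.
            amp = rng.rayleigh(np.sqrt(mean_power / 2))
            delays.append(T + tau)
            gains.append(amp)
            tau += rng.exponential(1 / lam)     # next ray inter-arrival (ns)
        T += rng.exponential(1 / Lambda)        # next cluster inter-arrival (ns)
    return np.array(delays), np.array(gains)

d, g = saleh_valenzuela()
mean_delay = np.average(d, weights=g ** 2)
rms_spread = np.sqrt(np.average((d - mean_delay) ** 2, weights=g ** 2))
print(f"{len(d)} arrivals, RMS delay spread ~ {rms_spread:.1f} ns")
```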
