371

Acquisition of Fuzzy Measures in Multicriteria Decision Making Using Similarity-based Reasoning

Wagholikar, Amol S, N/A January 2007 (has links)
Continuous development has been occurring in the area of decision support systems. Modern systems focus on applying decision models that can provide intelligent support to the decision maker, i.e., on modelling the human reasoning process in situations that require a decision. This task may be achieved with an appropriate decision model. Multicriteria decision making (MCDM) is a common decision-making approach, and this research investigates and seeks to resolve various issues associated with applying it. MCDM is a formal and systematic approach that evaluates a given set of alternatives against a given set of criteria; the global evaluation of alternatives is determined through a process of aggregation. It is well established that aggregation should consider the importance of criteria when determining the overall worth of an alternative, and that the importance both of individual criteria and of sub-sets of criteria affects the global evaluation. Most decision problems involve dependent criteria, so the interaction between criteria needs to be modelled. Traditional aggregation approaches, such as the weighted average, do not model this interaction; non-additive measures such as fuzzy measures do. However, determining non-additive measures in a practical application is problematic. Various approaches have been proposed to resolve the difficulty of acquiring fuzzy measures, mainly based on using past precedents. This research extends that notion and proposes an approach based on similarity-based reasoning: solutions to past problems can be used to solve new decision problems. This is the central idea behind the proposed methodology, which applies the theory of reasoning by analogy to solving MCDM problems.
This methodology uses a repository of cases of past decision problems. This case base is used to determine the fuzzy measures for the new decision problem. This work also analyses various similarity measures. The illustration of the proposed methodology in a case-based decision support system shows that interactive models are suitable tools for determining fuzzy measures in a given decision problem. This research makes an important contribution by proposing a similarity-based approach for acquisition of fuzzy measures.
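The abstract does not give the aggregation formula, but the standard way to aggregate with a fuzzy (non-additive) measure is the discrete Choquet integral. A minimal sketch, with an illustrative two-criterion measure (the criterion names and weights below are invented for the example, not taken from the thesis):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. fuzzy measure mu.

    scores: dict criterion -> score in [0, 1]
    mu: dict frozenset(criteria) -> weight, with mu({}) = 0 and mu(all) = 1.
    Unlike a weighted average, mu can model interaction between criteria.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending by score
    names = [name for name, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, s) in enumerate(items):
        # weight of the coalition of criteria scoring at least s
        total += (s - prev) * mu[frozenset(names[i:])]
        prev = s
    return total

# Illustrative measure: the pair interacts super-additively (0.5 + 0.3 < 1.0).
mu = {frozenset(): 0.0,
      frozenset({"price"}): 0.5,
      frozenset({"quality"}): 0.3,
      frozenset({"price", "quality"}): 1.0}
overall = choquet_integral({"price": 0.6, "quality": 0.4}, mu)  # → 0.5
```

For an additive mu the integral reduces to the ordinary weighted average, which is the sense in which fuzzy measures generalize the traditional aggregation the abstract contrasts with.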
372

模糊統計在時間數列分析與相似度之應用 / Application of fuzzy statistics in time series analysis and similarity recognition

張建瑋 Unknown Date (has links)
In time series analysis, methods for identifying model structure are often affected by the non-stationarity of the series itself and by uncertain disturbances, so fitting a single model rarely gives satisfactory results. Moreover, traditional statistical methods rely heavily on the numbers themselves, but when the data of a time series are quite fuzzy we are often interested only in its trend. If, from a pattern-recognition viewpoint, we can find data highly similar to the series and use them as leading or reference indicators, we should be able to explain the trend and handle structural change better than a single time series model (whether linear or nonlinear), respond promptly to the latest conditions, and improve forecast accuracy. In this thesis we apply fuzzy theory to build a fuzzy similarity algorithm for time series, used to recognize similarity between series. In carrying out the algorithm, we propose three methods for constructing membership functions, chosen according to data characteristics such as changing variance and the presence of outliers or shocks: the equally divided range method, the k-means divided range method, and the rank transformation method, so as to obtain better explanation and forecasting results. Simulation results show that the equally divided range method performs best in fuzzy similarity recognition between time series. In the empirical analysis, we use the algorithm to recognize the fuzzy similarity between GDP and private consumption, and between GDP and gross investment, with quite good results. / An important problem in pattern recognition of time series is similarity recognition. This thesis presents methods of similarity calculation for two time series: the equally divided range method, the k-means method, and the rank transformation method. The success of our similarity recognition relies to a large extent on fuzzy statistical concepts. Simulation results demonstrate that, overall, the equally divided range method performs best in similarity recognition, while the other methods are more efficient at calculating similarity for certain special time series. Finally, two empirical examples, calculating the similarity of GDP vs. consumption and GDP vs. investment, are illustrated.
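As a rough illustration of the "equally divided range" idea described above (the details here are assumptions for illustration; the thesis's actual membership functions are more elaborate), one can discretize each series into states over its equally divided range and score similarity by state agreement:

```python
def fuzzy_states(series, k=5):
    """Assign each observation a state by equally dividing the series range
    into k intervals (a crude stand-in for triangular membership functions)."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / k or 1.0  # guard against a constant series
    return [min(int((x - lo) / width), k - 1) for x in series]

def fuzzy_similarity(a, b, k=5):
    """Fraction of time points at which the two series fall in the same state,
    so similarity reflects shape/trend rather than raw values."""
    sa, sb = fuzzy_states(a, k), fuzzy_states(b, k)
    return sum(x == y for x, y in zip(sa, sb)) / min(len(sa), len(sb))
```

Because each series is discretized over its own range, two series with the same trend but different scales still score as similar, which matches the abstract's motivation of comparing trends rather than numbers.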
373

Reaching for the optimal : The role of optimal alternatives in pre-decision making stages

Kerimi, Neda January 2007 (has links)
It was hypothesized that in a decision-making situation, individuals not only form an optimal alternative in mind but also choose, as the most promising option, the alternative that is closest to that optimal alternative. Accordingly, based on each participant's optimal alternative, five alternatives, all equal in terms of constant multi-attribute utility, were presented to participants. Two of the alternatives were constructed to be most similar to the participant's optimal alternative, two were associated with two non-compensatory decision rules, and one was not linked to any decision rule. Results showed that participants not only formed an optimal alternative in the given decision-making situation, they also chose the alternative that was most similar to their optimal, and this alternative received the highest preference ratings. These findings provide evidence for an optimal alternative and demonstrate the influence that such an alternative has on the outcome of a decision-making situation.
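The hypothesized choice rule can be sketched as picking the presented alternative nearest the participant's optimal alternative in attribute space (Euclidean distance is an assumption here; the abstract does not specify the metric):

```python
import math

def closest_to_optimal(optimal, alternatives):
    """Return the alternative whose attribute vector lies nearest the
    participant's optimal alternative (Euclidean distance, Python 3.8+)."""
    return min(alternatives, key=lambda alt: math.dist(alt, optimal))

# Toy attribute vectors; all alternatives would have equal multi-attribute
# utility in the study's design, differing only in their attribute profiles.
choice = closest_to_optimal((1.0, 1.0), [(0.0, 0.0), (0.9, 1.1), (2.0, 2.0)])
# → (0.9, 1.1)
```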
374

Classification System for Impedance Spectra

Sapp, Carl Gordon 01 May 2011 (has links)
This thesis documents the research, methods, and results required for the M.S. degree in Electrical Engineering at the University of Tennessee. It explores two primary steps for proper classification of impedance spectra: reduction of data dimensionality and comparison of the effectiveness of similarity/dissimilarity measures in classification. To understand the data characteristics and classification thresholds, a circuit-model analysis is used for simulation and for determining when a spectrum is unclassifiable. The research uses previously collected complex-valued impedance measurements from 1844 similar devices. The results show a classification system capable of correctly classifying 99% of samples when the data are well separated, and approximately 85% over the full range of data available to this research.
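The two steps above can be caricatured as a nearest-centroid classifier over complex-valued spectra with a reject threshold for unclassifiable samples (the dissimilarity measure, class names, and threshold below are invented placeholders, not the thesis's actual design):

```python
def dissimilarity(spec_a, spec_b):
    """Mean magnitude of the pointwise difference between two complex spectra."""
    return sum(abs(a - b) for a, b in zip(spec_a, spec_b)) / len(spec_a)

def classify(spectrum, class_centroids, reject_threshold):
    """Nearest-centroid classification with an 'unclassifiable' reject option:
    return the label of the closest centroid, or None if none is close enough."""
    label, d = min(((lbl, dissimilarity(spectrum, c))
                    for lbl, c in class_centroids.items()),
                   key=lambda pair: pair[1])
    return label if d <= reject_threshold else None
```

The reject threshold plays the role of the "unclassifiable determination" the abstract mentions: a sample far from every class centroid is flagged rather than forced into a class.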
375

Similarity in personal relationships : associations with relationship regulation between and within individuals

Wrzus, Cornelia January 2008 (has links)
People engage in a multitude of different relationships. Relatives, spouses, and friends are modestly to moderately similar in various characteristics, e.g., personality characteristics, interests, and appearance. The role of psychological (e.g., skills, global appraisal) and social (e.g., gender, familial status) similarities in personal relationships, and their association with relationship quality (emotional closeness and reciprocity of support), was examined in four independent studies. Young adults (N = 456; M = 27 years) and middle-aged couples from four different family types (N = 171 couples, M = 38 years) answered a computer-aided questionnaire regarding their ego-centered networks. A subsample of 175 middle-aged adults (77 couples and 21 individuals) participated in a one-year follow-up assessment. Two experimental studies (N = 470; N = 802), each including two assessments five weeks apart, were conducted to examine causal relationships among similarity, closeness, and reciprocity expectations. Results underline the role of psychological and social similarities as covariates of emotional closeness and reciprocity of support at the between-relationship level, but indicate a relatively weak effect within established relationships. In specific relationships, such as parent-child relationships and friendships, psychological similarity partly alleviates the effects of missing genetic relatedness, and individual differences moderate these between-relationship effects. In all, the results combine evolutionary and social psychological perspectives on similarity in personal relationships and extend previous findings by means of a network approach and an experimental manipulation of existing relationships. The findings further show that psychological and social similarity have different implications for the study of personal relationships depending on the phase in the developmental process of relationships.
/ Relatives, partners, and friends resemble each other in a multitude of characteristics, e.g., personality traits, attitudes, or appearance. The role of similarity in the psychological and demographic characteristics of relationship partners, and its association with relationship quality, was examined in four independent studies. Young adults (N = 456; M = 27 years) and couples from four different family types (N = 171 couples, M = 38 years) rated the social relationships in their ego-centered networks in a computer-aided questionnaire with respect to perceived similarity, emotional closeness, and reciprocity of support. A subset of the couples (77 couples and 21 individuals) took part in the one-year longitudinal follow-up. In two experiments (N = 470; N = 802), perceived similarity was manipulated to test its causal effect on emotional closeness and on the expectation of reciprocal behavior in relationships. The studies showed that within a social network, similar relationship partners were also rated as emotionally closer, but that there was little mutual influence within established relationships. In specific relationships, such as parent-child relationships and friendships, psychological similarity could partly compensate for missing genetic relatedness, and characteristics of the person moderated these associations at the relationship level. The results link the evolutionary and social psychological perspectives of similarity research and extend previous findings through the use of a social network approach and the experimental manipulation of existing relationships. They further show that psychological and demographic similarity have different implications for relationship research depending on the developmental phase of the relationship.
376

Management of Real-Time Data Consistency and Transient Overloads in Embedded Systems

Gustafsson, Thomas January 2007 (has links)
This thesis addresses data management in embedded systems' software. The complexity of developing and maintaining software has increased over the years as the growing availability of resources, e.g., more powerful CPUs and larger memories, has allowed more functionality to be accommodated. This thesis proposes that part of this increasing complexity can be addressed with a real-time database, since data management is one constituent of embedded software. It investigates which functionality a real-time database should have in order to be suitable for embedded software that controls an external environment, using engine control software as a case study. The findings are that a real-time database should support keeping data items up-to-date, providing snapshots of values, i.e., values derived from the same system state, and overload handling. Algorithms are developed for each of these functionalities and implemented in a real-time database for embedded systems. Performance evaluations using the database implementation show that real-time performance improves when the added functionality is utilized. Moreover, two algorithms for examining whether the system may become overloaded are outlined, one for off-line use and one for on-line use; evaluations show that both are accurate and fast enough for embedded systems.
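The "keeping data items up-to-date" functionality can be sketched with an absolute validity interval and on-demand recomputation (the names and structure below are illustrative assumptions, not the thesis's actual algorithms):

```python
import time

class DataItem:
    """A data item with an absolute validity interval (avi): the stored value
    is considered fresh for `avi` seconds after it was last computed."""

    def __init__(self, value, avi):
        self.value = value
        self.avi = avi
        self.timestamp = time.monotonic()

    def is_fresh(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.timestamp <= self.avi

    def read(self, recompute):
        """On-demand updating: recompute only if the stored value is stale,
        so CPU time is spent only on data that a transaction actually reads."""
        if not self.is_fresh():
            self.value = recompute()
            self.timestamp = time.monotonic()
        return self.value
```

Recomputing lazily on read, rather than eagerly on every sensor change, is one standard way real-time databases trade staleness bounds against CPU load.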
377

A Software Benchmarking Methodology For Effort Estimation

Nabi, Mina 01 September 2012 (has links) (PDF)
Software project managers usually use benchmarking repositories to estimate the effort, cost, and duration of software development, and these estimates are in turn used to plan, monitor, and control project activities. The precision of a benchmarking repository is therefore a critical factor in the software effort estimation process, which subsequently plays a critical role in the success of the development project. To construct such a precise benchmarking data repository, it is important to define benchmarking data attributes and data characteristics and to collect project data accordingly. Studies also show that the data characteristics of benchmark data sets affect how well studies based on those datasets generalize. The quality of a data repository depends not only on the quality of the collected data but also on how the data are collected. In this thesis, a benchmarking methodology is proposed for organizations to collect benchmarking data for effort estimation purposes. The methodology consists of three main components: benchmarking measures, benchmarking data collection processes, and a benchmarking data collection tool; results of previous studies from the literature were used as well. To verify and validate the methodology, project data were collected in two mid-sized software organizations and one small organization using the automated benchmarking data collection tool. Effort estimation models were then constructed and evaluated for these project data, and the impact of different project characteristics on the estimation models was inspected.
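One common way a benchmarking repository feeds effort estimation is estimation by analogy: the new project's effort is predicted from the most similar past projects in the repository. A minimal sketch (the attributes, distance, and choice of k are illustrative assumptions, not the thesis's models):

```python
def estimate_effort(new_project, repository, k=2):
    """Analogy-based estimate: mean effort of the k most similar past projects.

    new_project: tuple of normalized project attributes (e.g., size, team).
    repository: list of (attribute_vector, effort) pairs from past projects.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(repository, key=lambda p: distance(p[0], new_project))[:k]
    return sum(effort for _, effort in nearest) / len(nearest)
```

The sketch also shows why the abstract stresses repository precision: a mis-recorded attribute vector silently pulls the wrong analogies into the estimate.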
378

Probabilistic Simhash Matching

Sood, Sadhan 2011 August (has links)
Finding near-duplicate documents is an interesting problem, but existing methods are not suitable for large-scale datasets and memory-constrained systems. In this work, we developed approaches that tackle the problem of finding near-duplicates while improving query performance and using less memory. We then evaluated our method on a dataset of 70M web documents and showed that it performs well. The results indicate that, with a recall of 0.95 for finding all near-duplicates of an in-memory dataset, our method reduces space by a factor of 5 while improving query time by a factor of 4. With the same recall and the same space reduction, query time improves by a factor of 4.5 when finding only the first near-duplicate of an in-memory dataset. When the dataset is stored on disk, performance improves by 7 times for finding all near-duplicates and by 14 times for finding the first near-duplicate.
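The simhash scheme underlying this work fingerprints a document so that near-duplicates receive fingerprints differing in only a few bits, reducing near-duplicate search to small-Hamming-distance search. A minimal version (the tokenization and choice of hash are simplifications):

```python
import hashlib

def simhash(tokens, bits=64):
    """64-bit simhash fingerprint: each token's hash votes per bit position,
    so documents sharing most tokens get fingerprints that differ in few bits."""
    votes = [0] * bits
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

A query then looks for stored fingerprints within some Hamming-distance threshold of the query fingerprint; the engineering challenge the abstract addresses is doing that lookup fast and in little memory.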
379

Selection of antigens for antibody-based proteomics

Berglund, Lisa January 2008 (has links)
The human genome is predicted to contain ~20,500 protein-coding genes. The encoded proteins are the key players in the body, but the functions and localizations of most proteins are still unknown. Antibody-based proteomics has great potential for exploration of the protein complement of the human genome, but there are antibodies only to a very limited set of proteins. The Human Proteome Resource (HPR) project was launched in August 2003, with the aim to generate high-quality specific antibodies towards the human proteome, and to use these antibodies for large-scale protein profiling in human tissues and cells. The goal of the work presented in this thesis was to evaluate if antigens can be selected, in a high-throughput manner, to enable generation of specific antibodies towards one protein from every human gene. A computationally intensive analysis of potential epitopes in the human proteome was performed and showed that it should be possible to find unique epitopes for most human proteins. The result from this analysis was implemented in a new web-based visualization tool for antigen selection. Predicted protein features important for antigen selection, such as transmembrane regions and signal peptides, are also displayed in the tool. The antigens used in HPR are named protein epitope signature tags (PrESTs). A genome-wide analysis combining different protein features revealed that it should be possible to select unique, 50 amino acids long PrESTs for ~80% of the human protein-coding genes. The PrESTs are transferred from the computer to the laboratory by design of PrEST-specific PCR primers. A study of the success rate in PCR cloning of the selected fragments demonstrated the importance of controlled GC-content in the primers for specific amplification. The PrEST protein is produced in bacteria and used for immunization and subsequent affinity purification of the resulting sera to generate mono-specific antibodies. 
The antibodies are tested for specificity and approved antibodies are used for tissue profiling in normal and cancer tissues. A large-scale analysis of the success rates for different PrESTs in the experimental pipeline of the HPR project showed that the total success rate from PrEST selection to an approved antibody is 31%, and that this rate is dependent on PrEST length. A second PrEST on a target protein is somewhat less likely to succeed in the HPR pipeline if the first PrEST is unsuccessful, but the analysis shows that it is valuable to select several PrESTs for each protein, to enable generation of at least two antibodies, which can be used to validate each other.
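The core PrEST selection constraint, a stretch of the target protein that occurs nowhere else in the proteome, can be sketched as a substring-uniqueness scan (a simplification: the HPR analysis scores uniqueness at the epitope level and also excludes transmembrane regions and signal peptides):

```python
def unique_windows(target, proteome, length=50):
    """Yield (position, window) pairs for windows of `target` that appear
    in no other sequence of the proteome; such windows are PrEST candidates."""
    others = [seq for seq in proteome if seq != target]
    for i in range(len(target) - length + 1):
        window = target[i:i + length]
        if not any(window in seq for seq in others):
            yield i, window
```

For a real proteome one would index all windows (e.g., in a hash set or suffix structure) instead of scanning every sequence per window, but the constraint being checked is the same.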
380

Using semantic similarity measures across Gene Ontology to predict protein-protein interactions

Helgadóttir, Hanna Sigrún January 2005 (has links)
Living cells are controlled by proteins and genes that interact through complex molecular pathways to achieve specific functions. Determining protein-protein interactions is therefore fundamental to understanding the cell's lifecycle and functions, and the function of a protein is itself largely determined by its interactions with other proteins. The amount of available protein-protein interaction data has multiplied with the emergence of large-scale technologies for detecting them, but the drawback of such technologies is the relatively high amount of noise in the data. Determining protein-protein interactions experimentally is time consuming, so the aim of this project is to create a computational method that predicts interactions with high sensitivity and specificity. Semantic similarity measures were applied across the Gene Ontology terms assigned to proteins in S. cerevisiae to predict protein-protein interactions, and three such measures were tested to see which performs best. Based on the results, a method that predicts the function of proteins in connection with their connectivity was devised. The results show that semantic similarity is a useful measure for predicting protein-protein interactions.
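One widely used semantic similarity measure over the Gene Ontology is Resnik's: the information content of the most informative common ancestor of two terms in the ontology DAG. A sketch over a toy DAG (the abstract does not name the three measures it tested, so using Resnik here is an assumption):

```python
def resnik_similarity(term_a, term_b, parents, ic):
    """Resnik similarity: the information content (ic) of the most informative
    common ancestor of two ontology terms in a DAG given by `parents`."""
    def ancestors(term):
        seen, stack = {term}, [term]
        while stack:
            for p in parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    common = ancestors(term_a) & ancestors(term_b)
    return max((ic[t] for t in common), default=0.0)
```

Two proteins annotated with terms whose common ancestors are all generic (low information content) score near zero, while terms sharing a specific ancestor score high; an interaction predictor thresholds this score.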
