Investigating the relationship between the business performance management framework and the Malcolm Baldrige National Quality Award framework.

Hossain, Muhammad Muazzem 08 1900
The business performance management (BPM) framework helps an organization continuously adjust and successfully execute its strategies. BPM helps increase flexibility by providing managers with an early alert about changes and, as a result, allows faster response to such changes. The Malcolm Baldrige National Quality Award (MBNQA) framework provides a basis for self-assessment and a systems perspective for managing an organization's key processes for achieving business results. The MBNQA framework is the more comprehensive framework and encapsulates the underlying constructs in the BPM framework. The objectives of this dissertation are fourfold: (1) to validate the underlying relationships presented in the 2008 MBNQA framework, (2) to explore the MBNQA framework at the dimension level, and to develop and test constructs measured at that level in a causal model, (3) to validate and create a common general framework for the business performance model by integrating the practitioner literature with basic theory, including existing MBNQA theory, and (4) to integrate the BPM framework and the MBNQA framework into a new framework (the BPM-MBNQA framework) that can guide organizations in their journey toward achieving and sustaining competitive and strategic advantages. The study pursues these objectives through a combination of methodologies, including literature reviews, expert opinions, interviews, presentation feedback, content analysis, and latent semantic analysis. An initial BPM framework was developed based on reviews of the literature and expert opinions. Because there is a paucity of academic research on business performance management, this study reviewed the practitioner literature on BPM and, from the numerous organization-specific BPM models, developed a generic, conceptual BPM framework. To obtain valuable feedback, this initial BPM framework was presented to Baldrige Award recipients (BARs) and selected academicians from across the United States who participated in the Fall Summit 2007, held at Caterpillar Financial headquarters in Nashville, TN on October 1 and 2, 2007. Feedback from that group was incorporated to refine and improve the proposed BPM framework. This study developed a variant of traditional latent semantic analysis (LSA), called causal latent semantic analysis (cLSA), that enables causal models to be tested using textual data. This method was used to validate the 2008 MBNQA framework based on abstracts of articles on the Baldrige Award and program published in both practitioner and academic journals from 1987 to 2009. The cLSA was also used to validate the BPM framework using the full body text of all articles published in the practitioner journal Business Performance Management Magazine since its inception in 2003. The results provide the first cLSA study of these frameworks. This is also the first study to examine all the causal relationships within the MBNQA and BPM frameworks.
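The abstract does not spell out the cLSA procedure, but the standard LSA step it builds on (projecting documents into a low-rank semantic space via truncated SVD) can be sketched as follows. This is a minimal illustration with an invented toy corpus, not the study's Baldrige/BPM article data:

```python
# Minimal LSA sketch: embed article abstracts in a low-rank semantic space,
# then compare documents by cosine similarity in that space. A causal variant
# such as cLSA would impose directional relations between construct measures
# on top of similarities like these. Corpus is a hypothetical toy example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "leadership drives strategic planning and workforce focus",
    "measurement analysis and knowledge management support results",
    "customer focus and process management improve business results",
]

# Term-document weighting followed by rank-k SVD (the core of LSA).
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(abstracts)
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(X)   # documents in the latent space

print(cosine_similarity(doc_vectors))
```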

THE PRONOUN LHE: ITS FUNCTIONING IN THE PHRASE STRUCTURE

DENILSON PEREIRA DE MATOS 21 January 2004
This dissertation examines the classifications attributed to the pronoun lhe, on the basis of valence grammar. According to the grammatical tradition, naming the pronoun is a sufficient step as a teaching proposal. This work refutes that procedure, arguing that, as far as learning is concerned, what matters for the student is above all to understand how the pronoun lhe functions in the phrase structure. Likewise, it is necessary to recognize that this pronoun can perform distinct syntactic functions, which are justified by the behavior of lhe and not by a merely classificatory determination. The selected corpus favored examples gathered from textbooks, pedagogical grammars, and the CETEMPúblico and CETEMFolha-NILC/São Carlos corpora. The analysis reveals that the pronoun lhe is a genuine complement and that it is not enough to classify the element in question; one must understand the basic role it plays in any context.

Sparsamkeit und Geiz, Grosszügigkeit und Verschwendung: ethische Konzepte im Spiegel der Sprache / Thrift and Greed, Generosity and Extravagance: Ethical Concepts Mirrored in Language

Malmqvist, Anita January 2000
The object of this study is to analyse the lexemes and phraseological units that constitute the semantic fields employed in naming four abstract domains, greed, thrift, generosity, and extravagance, that make up the ethical concept <Attitude to Ownership> in German. On the assumption that ideas are accessible to us through the lexicalised items of a language, recent theories in the field of semantic analysis and conceptualisation were applied to the source material. In each domain key words were identified and their definitions in modern and historical dictionaries were analysed. Various dimensions of meaning, which proved to be inherent in the lexical items, emerged from this analysis. The oppositions a/o (action directed to others vs. to oneself), right/wrong (virtues vs. vices), and too much/too little vs. the ideal mean were established as central. To achieve a more precise description of meaning, tentative explications of cognitive levels were proposed. By means of these, the underlying ideas, as they were reflected in the lexical units, could be described. The analysis showed greater variation and expressivity in words, idioms, and proverbs referring to the two vices compared to the virtues. Furthermore, a diachronic study produced evidence of semantic and conceptual changes. On the basis of such observations, conclusions could be drawn about changes in the ethical system. The data derived from a contrastive corpus analysis of the German and Swedish key words showed numerous similarities as well as some conspicuous differences in the conceptualisation and valuation of attitudes pertaining to the four abstract domains. Moreover, the key words denoting the two virtues clearly dominated in frequency, indicating that these are more central conceptual categories in today's society than the vices. An ongoing shift in meaning could be established for the key words naming the latter. Applying modern theories of metaphor and metonymy, the experiential basis of meaning and thought was explored, showing that the structures forming the ethical concepts studied in this work are grounded in experiences of a physical and socio-cultural nature. The metaphorical concept ILLNESS emerged as a common source domain for the two vices, while the PATH concept was shown to form the basis of metaphors expressing the o-virtue but not the a-virtue. Among the numerous metonymic concepts, HAND proved to be characteristic of all four domains.

Visualization of Knowledge Spaces to Enable Concurrent, Embedded and Transformative Input to Knowledge Building Processes

Teplovs, Christopher 01 September 2010
This thesis focuses on the creation of a systems architecture to help inform the development of next-generation knowledge-building environments. The architectural model consists of three components: an infrastructure layer, a discourse layer, and a visualization layer. The Knowledge Space Visualizer (KSV), which defines the top visualization layer, is a prototype system for showing reconstructed representations of discourse-based artifacts and facilitating assessment in light of patterns of interactivity among participants and their ideas. The KSV uses Latent Semantic Analysis to extend techniques from Social Network Analysis, making it possible to infer relationships among note contents. Thus idea networks can be studied in conjunction with social networks in online discourse. Further, benchmark corpora can be used to determine knowledge advances and the systems of interactivity leading to them. Results can then provide feedback to students and teachers to support them in attaining continually higher levels of achievement. In addition to visual representations, the KSV provides quantitative network metrics such as degree and density. Data drawn from 9- and 10-year-old students working on a six-week unit on optics were used to illustrate some of the functionality of the KSV. Three studies show ways in which the new visualizations can be used: (a) to highlight relationships among notes, (b) to track the development of discourse over time, and (c) as an assessment tool. Implications for the design of knowledge-building environments, assessment tools, and design-based research are discussed.
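The KSV itself is not reproduced in the abstract; the sketch below only illustrates the general combination it describes, linking notes by latent-semantic similarity and then reporting degree and density over the resulting idea network. The notes and the similarity threshold are invented for illustration:

```python
# Sketch in the spirit of the KSV: connect notes whose LSA vectors are
# similar, then compute simple network metrics (degree, density).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

notes = {  # toy stand-ins for student notes from an optics unit
    "n1": "light travels in straight lines",
    "n2": "shadows form when light is blocked",
    "n3": "mirrors reflect light at equal angles",
    "n4": "plants need water and soil",
}
ids = list(notes)
X = TfidfVectorizer().fit_transform(notes.values())
vecs = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
sim = cosine_similarity(vecs)

# Build an "idea network": an edge wherever similarity clears a threshold.
G = nx.Graph()
G.add_nodes_from(ids)
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if sim[i, j] > 0.5:            # arbitrary illustrative threshold
            G.add_edge(ids[i], ids[j], weight=float(sim[i, j]))

print(dict(G.degree()))    # per-note connectivity
print(nx.density(G))       # overall density of the idea network
```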

Completeness of Fact Extractors and a New Approach to Extraction with Emphasis on the Refers-to Relation

Lin, Yuan 07 August 2008
This thesis deals with fact extraction, which analyzes source code (and sometimes related artifacts) to produce extracted facts about the code. These facts may, for example, record where in the code variables are declared and where they are used, as well as related information. Such extracted facts are typically used in software reverse engineering to reconstruct the design of the program. The thesis has two main parts, each of which deals with a formal approach to fact extraction. Part 1 addresses the question: How can we demonstrate that a fact extractor actually does its job? That is, does the extractor produce the facts that it is supposed to produce? The thesis builds on the concept of semantic completeness of a fact extractor, as defined by Tom Dean et al., and further defines source, syntax, and compiler completeness. One contribution of this thesis is to show that in certain important cases (when the extractor is deterministic and its front end is idempotent), there is an efficient algorithm to determine whether the extractor is compiler complete. This result is surprising, considering that in general it is undecidable whether two programs are semantically equivalent, and source code and its corresponding extracted facts are each essentially programs that must be proved equivalent, or at least sufficiently similar. The larger part of the thesis, Part 2, presents Algebraic Refers-to Analysis (ARA), a new approach to fact extraction with emphasis on the Refers-to relation. ARA provides a framework for specifying fact extraction, based on a three-step pipeline: (1) basic (lexical and syntactic) extraction, (2) a normalization step, and (3) a binding step. For practical programming languages, these three steps are repeated, in stages and phases, until the Refers-to relation is computed. During the writing of this thesis, ARA pipelines for C, Java, C++, Fortran, Pascal, and Ada were designed, and a prototype fact extractor for the C language was created. Validating ARA means demonstrating that ARA pipelines satisfy programming language standards such as the ISO C++ standard; in other words, we show that ARA phases (stages and formulas) are correctly transcribed from the rules in the language standard. Compared with existing approaches such as attribute grammars, ARA has the following advantages. First, ARA formulas are concise, elegant, and, more importantly, insightful; as a result, they led to some interesting discoveries about the programming languages studied. Second, ARA is validated on the basis of set theory and relational algebra, which is more reliable than exhaustive testing. Finally, ARA formulas are supported by existing software tools such as database management systems and relational calculators. Overall, the contributions of this thesis include (1) the concept of a hierarchy of completeness and the automatic testing of completeness, (2) the use of the relational data model in fact extraction, (3) the invention of Algebraic Refers-to Analysis (ARA), and (4) the discovery of some interesting facts about programming languages.
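ARA's actual formulas are not given in the abstract; the toy sketch below only illustrates the relational flavor of the approach, computing a Refers-to relation by composing extracted declaration and use relations through a scope-nesting relation. The scopes, names, and binding rule (nearest enclosing scope) are simplified assumptions, not ARA's specification:

```python
# Toy relational sketch: extraction produces base relations (declarations
# and uses, each tagged with a scope); a binding step composes them with
# the scope-nesting relation to compute Refers-to.
declares = {               # (scope, name) -> declaration id
    ("global", "x"): "d1",
    ("f",      "x"): "d2",
    ("f",      "y"): "d3",
}
uses = [                   # (use id, scope, name)
    ("u1", "f",      "x"),
    ("u2", "f",      "y"),
    ("u3", "global", "x"),
]
enclosing = {"f": "global", "global": None}   # scope-nesting relation

def refers_to(use_scope, name):
    """Bind a use to the declaration in the nearest enclosing scope."""
    scope = use_scope
    while scope is not None:
        if (scope, name) in declares:
            return declares[(scope, name)]
        scope = enclosing[scope]
    return None

for use_id, scope, name in uses:
    print(use_id, "->", refers_to(scope, name))
# Expected: u1 -> d2, u2 -> d3, u3 -> d1
```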

Generalized Hebbian Algorithm for Dimensionality Reduction in Natural Language Processing

Gorrell, Genevieve January 2006
The current surge of interest in search and comparison tasks in natural language processing has brought with it a focus on vector space approaches and vector space dimensionality reduction techniques. Presenting data as points in hyperspace provides opportunities to use a variety of well-developed tools pertinent to this representation. Dimensionality reduction allows data to be compressed and generalised. Eigen decomposition and related algorithms are one category of approaches to dimensionality reduction, providing a principled way to reduce data dimensionality that has time and again shown itself capable of enabling access to powerful generalisations in the data. Issues with the approach, however, include computational complexity and limitations on the size of dataset that can reasonably be processed in this way, and large datasets are a persistent feature of natural language processing tasks. This thesis focuses on two main questions. Firstly, in what ways can eigen decomposition and related techniques be extended to larger datasets? Secondly, this having been achieved, of what value is the resulting approach to information retrieval and to statistical language modelling at the n-gram level? The applicability of eigen decomposition is shown to be extendable through the use of an extant algorithm, the Generalized Hebbian Algorithm (GHA), and through a novel extension of this algorithm to paired data, the Asymmetric Generalized Hebbian Algorithm (AGHA). Several original extensions to these algorithms are also presented, improving their applicability in various domains. The applicability of GHA to Latent Semantic Analysis-style tasks is investigated. Finally, AGHA is used to investigate the value of singular value decomposition, an eigen decomposition variant, for n-gram language modelling. A sizeable perplexity reduction is demonstrated.
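The GHA update rule itself is standard (Sanger's rule), and a minimal sketch of it on synthetic data looks roughly like this; the data, learning rate, and epoch count are arbitrary illustrative choices, not the thesis's experimental setup:

```python
# Generalized Hebbian Algorithm (Sanger's rule):
#   W <- W + eta * (y x^T - LT[y y^T] W),  with y = W x,
# whose rows converge to the leading eigenvectors of the input covariance,
# without ever forming the covariance matrix explicitly.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic zero-mean data with clearly separated variance directions.
X = rng.normal(size=(10000, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])

k, eta = 2, 1e-3
W = rng.normal(scale=0.1, size=(k, 5))
for _ in range(3):                     # a few passes over the data
    for x in X:
        y = W @ x
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare against the top eigenvectors from a full decomposition:
# absolute dot products near 1 indicate the directions align.
eigvecs = np.linalg.eigh(np.cov(X.T))[1]
top = eigvecs[:, ::-1][:, :k].T
print(np.abs(np.sum(W * top, axis=1)))
```

The appeal for large corpora, as the abstract notes, is that each update touches only one data vector at a time, so the full term-document matrix never needs to be decomposed at once.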

Understanding and Improving Object-Oriented Software Through Static Software Analysis

Irwin, Warwick Allan January 2007
Software engineers need to understand the structure of the programs they construct. This task is made difficult by the intangible nature of software, and its complexity, size and changeability. Static analysis tools can help by extracting information from source code and conveying it to software engineers. However, the information provided by typical tools is limited, and some potentially rich veins of information - particularly metrics and visualisations - are under-utilised because developers cannot easily acquire or make use of the data. This thesis documents new tools and techniques for static analysis of software. It addresses the problem of generating parsers directly from standard grammars, thus avoiding the common practice of customising grammars to comply with the limitations of a given parsing algorithm, typically LALR(1). This is achieved by a new parser generator that applies a range of bottom-up parsing algorithms to produce a hybrid parsing automaton. Consequently, we can generate more powerful deterministic parsers - up to and including LR(k) - without incurring the combinatorial explosion that makes canonical LR(k) parsers impractical. The range of practical parsers is further extended to include GLR, which was originally developed for natural language parsing but is shown here to also have advantages for static analysis of programming languages. This emphasis on conformance to standard grammars improves the rigour of static analysis tools and allows clearer definition and communication of derived information, such as metrics. Beneath the syntactic structure of software (exposed by parsing) lies the deeper semantic structure of declarations, scopes, classes, methods, inheritance, invocations, and so on. In this work, we present a new tool that performs semantic analysis on parse trees to produce a comprehensive semantic model suitable for processing by other static analysis tools. An XML pipeline approach is used to expose the syntactic and semantic models of the software and to derive metrics and visualisations. The approach is demonstrated producing several types of metrics and visualisations for real software, and the value of static analysis for informing software engineering decisions is shown.
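The thesis's parser generator and XML pipeline are not reproduced here; as a minimal illustration of the general idea (walking a syntax tree to derive a structural metric), the sketch below uses Python's own ast module to count methods per class. The sample source is invented:

```python
# Tiny static-analysis sketch: parse source into a syntax tree, then walk
# the tree to derive a structural metric (methods per class).
import ast

source = """
class Stack:
    def push(self, item): ...
    def pop(self): ...

class Queue:
    def enqueue(self, item): ...
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
        print(f"{node.name}: {len(methods)} methods {methods}")
```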

Attachment of English Prepositional Phrases and Suggestions of English Prepositions

蔡家琦, Tsai, Chia Chi Unknown Date
This thesis focuses on the attachment of prepositional phrases (PPs) and the suggestion of prepositions in English. Native speakers use prepositions intuitively to make a PP qualify its context more precisely, but determining the correct PP attachment is not easy for computers, which do not understand the semantics, and choosing the correct preposition is not easy for learners of English as a second language. I transform the problems of PP attachment and prepositional suggestion into one abstract decision model and apply the same computational procedures to solve both. The common model features four headwords: the verb, the first noun, the preposition, and the second noun in the prepositional phrase. My methods consider the semantic features of the headwords, selected hierarchically through WordNet, to train classification models over a large number of cases, and apply the learned models to the attachment and suggestion problems. The exploration of PP attachment is distinctive in that only the more challenging PPs, those almost equally likely to attach to the verb or to the first noun, were used in the study. Using real-life corpus data, the proposed models consider only four headwords yet achieve satisfactory performance. In experiments on PP attachment, my methods outperformed a maximum entropy classifier that also considered four headwords, and performed comparably to the Stanford parsers, even though the parsers had access to the complete sentences to judge the attachments. In experiments on prepositional suggestion, where there is no directly comparable system, my methods found the correct preposition 53.14% of the time, which is not as good as the best performing system today. This study reconfirms that semantic information is instrumental for both PP attachment and prepositional suggestion: high-level semantic information offered good classification performance, and hierarchical selection over WordNet synsets improved the observed results. I believe these results are valuable for future studies of PP attachment and prepositional suggestion, which are key components of machine translation and text proofreading.
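The thesis's classifiers and data are not available from the abstract alone; the sketch below only illustrates the four-headword idea, mapping each headword to a coarse WordNet hypernym class and training a small classifier on an invented toy set. The feature depth, training examples, and classifier choice are assumptions for illustration, not the thesis's method:

```python
# Four-headword sketch: represent (verb, noun1, prep, noun2) by coarse
# WordNet hypernym classes and train a classifier to choose the attachment
# site (verb vs. first noun).
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def semantic_class(word, pos):
    """Coarse semantic class: a node a few steps down the hypernym path."""
    synsets = wn.synsets(word, pos=pos)
    if not synsets:
        return word
    path = synsets[0].hypernym_paths()[0]
    return path[min(3, len(path) - 1)].name()   # depth 3 is arbitrary

def features(verb, noun1, prep, noun2):
    return {
        "v": semantic_class(verb, wn.VERB),
        "n1": semantic_class(noun1, wn.NOUN),
        "p": prep,
        "n2": semantic_class(noun2, wn.NOUN),
    }

train = [  # invented (verb, noun1, prep, noun2) -> attachment site
    (("eat", "pizza", "with", "fork"), "verb"),
    (("eat", "pizza", "with", "anchovies"), "noun"),
    (("see", "man", "with", "telescope"), "verb"),
    (("buy", "book", "with", "cover"), "noun"),
]
vec = DictVectorizer()
X = vec.fit_transform([features(*h) for h, _ in train])
clf = LogisticRegression().fit(X, [label for _, label in train])

test = features("drink", "coffee", "with", "milk")
print(clf.predict(vec.transform([test]))[0])
```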
