  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Knowledge structures of social rules and social-cognitive adaptability: an examination of their relation to social moral judgment [社会的ルールの知識構造と社会認知的適応性 : 社会的道徳判断との関連による検討]

吉澤, 寛之, Yoshizawa, Hiroyuki, 吉田, 俊和, Yoshida, Toshikazu 27 December 2004 (has links)
This record uses content digitized by the National Institute of Informatics (国立情報学研究所).
22

Deriving pilots’ knowledge structures for weather information: an evaluation of elicitation techniques

Raddatz, Kimberly R. January 1900 (has links)
Doctor of Philosophy / Department of Psychology / Richard J. Harris / Systems that support or require human interaction are generally easier to learn, use, and remember when their organization is consistent with the user’s knowledge and experiences (Norman, 1983; Roske-Hofstrand & Paap, 1986). Thus, in order for interface designers to truly design for the user, they must first have a way of deriving a representation of what the user knows about the domain of interest. The current study evaluated three techniques for eliciting knowledge structures for how General Aviation pilots think about weather information. Weather was chosen because of its varying implications for pilots of different levels of experience. Two elicitation techniques (Relationship Judgment and Card Sort) asked pilots to explicitly consider the relationship between 15 weather-related information concepts. The third technique, Prime Recognition Task, used response times and priming to implicitly reflect the strength of relationship between concepts in semantic memory. Techniques were evaluated in terms of pilot performance, conceptual structure validity, and required resources for employment. Validity was assessed in terms of the extent to which each technique identified differences in organization of weather information among pilots of different experience levels. Multidimensional scaling was used to transform proximity data collected by each technique into conceptual structures representing the relationship between concepts. Results indicated that Card Sort was the technique that most consistently tapped into knowledge structure affected by experience. Only conceptual structures based on Card Sort data were able to be used to both discriminate between pilots of different experience levels and accurately classify experienced pilots as “experienced”. Additionally, Card Sort was the most efficient and effective technique to employ in terms of preparation time, time on task, flexibility, and face validity. The Card Sort provided opportunities for deliberation, revision, and visual feedback that allowed the pilots to engage in a deeper level of processing at which experience may play a stronger role. Relationship Judgment and Prime Recognition Task characteristics (e.g., time pressure, independent judgments) may have motivated pilots to rely on a more shallow or text-based level of processing (i.e., general semantic meaning) that is less affected by experience. Implications for menu structure design and assessment are discussed.
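As an illustration of the multidimensional scaling step described above, the sketch below turns a small dissimilarity matrix (such as one built from card-sort or relationship-judgment data) into a two-dimensional conceptual structure. The weather concepts and the dissimilarity values are hypothetical placeholders, not data from the dissertation.

```python
# Minimal sketch: deriving a 2-D conceptual structure from proximity data
# with multidimensional scaling (MDS). The concepts and dissimilarity
# values below are hypothetical, not taken from the study.
import numpy as np
from sklearn.manifold import MDS

concepts = ["ceiling", "visibility", "icing", "turbulence", "thunderstorm"]

# Symmetric dissimilarity matrix (0 = identical, 1 = unrelated), e.g. built
# from how often pilots sorted two concepts into different piles.
dissimilarity = np.array([
    [0.0, 0.2, 0.7, 0.8, 0.6],
    [0.2, 0.0, 0.6, 0.7, 0.5],
    [0.7, 0.6, 0.0, 0.4, 0.5],
    [0.8, 0.7, 0.4, 0.0, 0.3],
    [0.6, 0.5, 0.5, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # one (x, y) point per concept

for name, (x, y) in zip(concepts, coords):
    print(f"{name:>13}: ({x:+.2f}, {y:+.2f})")
```

Concepts that pilots treat as closely related end up near one another in the resulting space, which is how structures derived from different experience levels can be compared.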
23

A Knowledge Perspective of Strategic Alliances and Management of Biopharmaceutical Innovation: Evolving Research Paradigms

Allarakhia, Minna January 2007 (has links)
Information from the Human Genome Project is being integrated into the drug discovery and development process to permit novel drug targets to be identified, clinical trial testing to be made more efficient, and efficacious therapeutics to be approved and made widely available. Knowledge of the genome will allow for the description and quantification of disease and susceptibility to disease as informational errors or deficits. The creation and application of knowledge occur through cooperative or competitive interactions, often reflecting the perceived value of the knowledge. The public or private value of the knowledge, both for itself and for potential applications, can be determined through an understanding of the classification and characterization of this knowledge, as well as the position of the knowledge within the drug discovery and development pipeline. The transformation of knowledge from a purely public good to a quasi-private good has highlighted the need for balance between incentives for the market provision of scientific and technological knowledge by an innovator and incentives for the market provision of incremental knowledge by a follow-on developer. It has been suggested that a patent system developed for a discrete model of innovation may no longer be optimal for an information-based, cumulative model of innovation. Consequently, it is necessary to reanalyze models of intellectual property protection and strategies of knowledge sharing in biopharmaceutical discovery research. Under certain conditions, the biotech commons is an efficient institution that can preserve downstream opportunities for multiple researchers fairly and efficiently. A framework for classifying and characterizing discovery knowledge is developed in this research and the role of research consortia in preserving the biotech commons is analyzed. This study also addresses the value of pooling versus unilaterally holding knowledge, the benefits associated with appropriating from the commons, the role of knowledge characteristics in bargaining between licensor and licensee, and the overall management of the biotech commons.
25

Redesign of Library Workflows: Experimental Models for Electronic Resource Description

Calhoun, Karen January 2000 (has links)
This paper explores the potential for and progress of a gradual transition from a highly centralized model for cataloging to an iterative, collaborative, and broadly distributed model for electronic resource description. The author's purpose is to alert library managers to some experiments underway and to help them conceptualize new methods for defining, planning, and leading the e-resource description process under moderate to severe time and staffing constraints. To build a coherent library system for discovery and retrieval of networked resources, librarians and technologists are experimenting with team-based efforts and new workflows for metadata creation. In an emerging new service model for e-resource description, metadata can come from selectors, public service librarians, information technology staff, authors, vendors, publishers, and catalogers. Arguing that e-resource description demands a level of cross-functional collaboration and creative problem-solving that is often constrained by libraries' functional organizational structures, the author calls for reuniting functional groups into virtual teams that can integrate the e-resource description process, speed up operations, and provide better service. The paper includes an examination of the traditional division of labor for producing catalogs and bibliographies, a discussion of experiments that deploy a widely distributed e-resource description process (e.g., the use of CORC at Cornell and Brown), and an exploration of the results of a brief study of selected ARL libraries' e-resource discovery systems.
26

Searching the long tail: Hidden structure in social tagging

Tonkin, Emma January 2006 (has links)
In this paper we explore a method of decomposition of compound tags found in social tagging systems and outline several results, including improvement of search indexes, extraction of semantic information, and benefits to usability. Analysis of tagging habits demonstrates that social tagging systems such as del.icio.us and flickr include both formal metadata, such as geotags, and informally created metadata, such as annotations and descriptions. The majority of tags represent informal metadata; that is, they are not structured according to a formal model, nor do they correspond to a formal ontology. Statistical exploration of the main tag corpus demonstrates that such searches use only a subset of the available tags; for example, many tags are composed as ad hoc compounds of terms. In order to improve the accuracy of searching across the data contained within these tags, a method must be employed to decompose compounds in such a way that there is a high degree of confidence in the result. An approach to decomposition of English-language compounds, designed for use within a small initial sample tagset, is described. Possible decompositions are identified from a generous wordlist, subject to selective lexicon snipping. In order to identify the most likely decomposition, a Bayesian classifier is applied across term elements. To compensate for the limited sample set, a word classifier is employed and its results classified using a similar method, resulting in a successful classification rate of 88% and a false negative rate of only 1%.
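A rough sketch of the decomposition idea follows: candidate split points are checked against a wordlist, and each candidate split is scored with unigram log-probabilities, a simplified stand-in for the Bayesian scoring described in the abstract. The wordlist and frequency counts are illustrative only, not the paper's actual lexicon or classifier.

```python
# Sketch: decomposing ad hoc compound tags (e.g. "designpatterns") by checking
# candidate splits against a wordlist and scoring them with unigram
# log-probabilities. Wordlist and counts are illustrative only.
import math

counts = {"design": 120, "patterns": 80, "pattern": 95, "desig": 1, "ns": 2,
          "open": 150, "source": 140, "web": 200, "log": 90, "weblog": 40}
total = sum(counts.values())

def log_prob(word):
    # Add-one smoothing so unseen words are penalized but not impossible.
    return math.log((counts.get(word, 0) + 1) / (total + len(counts)))

def decompose(tag, max_parts=3):
    """Return the best split of `tag` into known words, or [tag] if none wins."""
    best = ([tag], log_prob(tag))
    def search(rest, parts, score):
        nonlocal best
        if not rest:
            if score > best[1]:
                best = (parts, score)
            return
        if len(parts) >= max_parts:
            return
        for i in range(1, len(rest) + 1):
            head = rest[:i]
            if head in counts:                # only split at wordlist entries
                search(rest[i:], parts + [head], score + log_prob(head))
    search(tag, [], 0.0)
    return best[0]

print(decompose("designpatterns"))  # -> ['design', 'patterns']
print(decompose("weblog"))          # -> ['weblog'] or ['web', 'log'], by score
```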
27

A Comparison of Web Resource Access Experiments: Planning for the New Millennium

Greenberg, Jane January 2000 (has links)
Over the last few years the bibliographic control community has initiated a series of experiments that aim to improve access to the growing number of valuable information resources that are increasingly being placed on the World Wide Web (hereafter referred to as Web resources). Much has been written about these experiments, mainly describing their implementation and features, and there has been some evaluative reporting, but there has been little comparison among these initiatives. The research reported on in this paper addresses this limitation by comparing five leading experiments in this area. The objective was to identify characteristics of success and considerations for improvement in experiments providing access to Web resources via bibliographic control methods. The experiments examined include: OCLC's CORC project; UKOLN's BIBLINK, ROADS, and DESIRE projects; and the NORDIC project. The research used a multi-case study methodology and a framework comprising five evaluation criteria: the experiment's organizational structure, reception, duration, application of computing technology, and use of human resources. This paper defines the Web resource access experimentation environment, reviews the study's research methodology, and highlights key findings. The paper concludes by initiating a strategic plan and by inviting conference participants to contribute their ideas and expertise to an effort that will improve experimental initiatives that ultimately aim to improve access to Web resources in the new millennium.
28

Extending MARC for Bibliographic Control in the Web Environment: Challenges and Alternatives

McCallum, Sally January 2000 (has links)
This paper deconstructs the "MARC format" and similar newer tools like DC, XML, and RDF, separating structural issues from content-driven issues. Against that it examines the pressures from new types of digital resources, the responses to these pressures in format and content terms, and the transformations that may take place. The conflicting desires coming from users and librarians, the plethora of solutions to problems that constantly appear (some of which just might work), and the traditional access expectations are considered.

Footnotes: There are a large number of terms being used in the broader information community that often mean approximately the same thing, but relate concepts to the different backgrounds of the players. For example, librarians are sometimes confused that metadata is something new and a replacement for either cataloging or MARC. Metadata is cataloging and not MARC. In this article terms based on library specialist terminology are used, with occasional use of the alternative terms indicated below, depending on context. No difference in meaning is intended by the use of alternative terminology. The descriptions of the terms are indicative, not strict.

cataloging data or cataloging content = metadata: used broadly, in this context, for all data (descriptive, administrative, and structural) that relates to the resources being described.
content rules: rules for formulation of the data, including controlled lists and codes; the rules for formulating data element content.
data elements: the individual identifiable pieces of cataloging data (e.g., name, title, subtitle), including elements that are often called attributes or qualifiers (since generally this paper does not need to isolate data elements into subtypes).
relationships: the semantics that relate data elements, e.g., name is author of title, title has subtitle.
structure = syntax: the physical arrangement of the parts of an entity.
record: the bundle of information that describes a resource.
format = DTD: a defined specification of structure and markup.
markup = tag set = content designation: a system of symbols used to identify in some way the data that follows.

Standards cited: ANSI/NISO Z39.2, Record Interchange Format, and ISO 2709, Format for Data Interchange (the two standards are essentially identical in specification; ANSI/NISO has a few provisions where the ISO standard is not specific, but there is no conflict between the two). Functional Requirements for Bibliographic Records, IFLA Study Group on the Functional Requirements for the Bibliographic Record, Munich: Saur, 1998. ISO 8879, Standard Generalized Markup Language (SGML).
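To make the structure-versus-content distinction concrete, the sketch below expresses one invented resource description first as MARC-like tagged fields and then as a simple Dublin Core XML record. The record, the choice of tags 100/245/856, and the DC elements are illustrative only and follow no particular application profile.

```python
# Sketch: the same descriptive content carried by two different structures --
# a MARC-like tagged field list and a simple Dublin Core XML record.
# The record itself is invented; it only illustrates structure vs. content.
import xml.etree.ElementTree as ET

content = {
    "creator": "Doe, Jane",
    "title": "An Imaginary Web Resource",
    "identifier": "http://example.org/resource",
}

# MARC-style view: numeric tags designate the data elements.
marc_like = [
    ("100", content["creator"]),
    ("245", content["title"]),
    ("856", content["identifier"]),
]
for tag, value in marc_like:
    print(f"{tag}  {value}")

# Dublin Core view: the same content, different markup and structure.
DC = "http://purl.org/dc/elements/1.1/"
record = ET.Element("record")
for element in ("creator", "title", "identifier"):
    ET.SubElement(record, f"{{{DC}}}{element}").text = content[element]
print(ET.tostring(record, encoding="unicode"))
```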
29

Improving Metacomprehension And Learning Through Graduated Concept Model Development

Kring, Eleni 01 January 2004 (has links)
Mental model development, deeper levels of information processing, and elaboration are critical to learning. Moreover, individuals' metacomprehension accuracy is integral to making improvements to their knowledge base. In other words, without an accurate perception of their knowledge on a topic, learners may not know that knowledge gaps or misperceptions exist and, thus, would be less likely to correct them. Therefore, this study offered a dual-process approach that aimed at enhancing metacomprehension. One path aimed at advancing knowledge structure development and, thus, mental model development. The other focused on promoting a deeper level of information processing through processes like elaboration. It was predicted that this iterative approach would culminate in improved metacomprehension and increased learning. Accordingly, using the Graduated Concept Model Development (GCMD) approach, the role of learner-generated concept model development in facilitating metacomprehension and knowledge acquisition was examined. Concept maps have had many roles in the learning process as mental model assessment tools and advanced organizers. However, this study examined the process of concept model building as an effective training tool. While concept maps functioning as advanced organizers are certainly beneficial, it would seem that the benefits of having a learner examine and amend the current state of their knowledge through concept model development would prove more effective for learning. In other words, learners looking at an advanced organizer of the training material may feel assured that they have a thorough understanding of it. Only when they are forced to create a representation of the material would the gaps and misperceptions in their knowledge base likely be revealed. In short, advanced organizers seem to rely on recognition, whereas concept model development likely requires recalling and understanding 'how' and 'why' the interrelationships between concepts exist. Therefore, the Graduated Concept Model Development (GCMD) technique offered in this study was based on the theory that knowledge acquisition improves when learners integrate new information into existing knowledge, assign elaborated meanings to concepts, correct misperceptions, close knowledge gaps, and strengthen accurate connections between concepts by posing targeted questions against their existing knowledge structures. This study placed an emphasis on meaningful learning and suggested a process by which newly introduced concepts would be manipulated for the purpose of improving metacomprehension by strengthening accurate knowledge structures and mental model development, and through deeper and elaborated information processing. Indeed, central to improving knowledge deficiencies and misunderstandings is metacomprehension, and the construction of concept maps was hypothesized to improve metacomprehension accuracy and, thus, learning. This study was a one-factor between-groups design with concept map type as the independent variable, manipulated at four levels: no concept map, concept map as advanced organizer, learner-built concept map with feedback, and learner-built concept map without feedback. The dependent variables included performance (percent correct) on a declarative and integrative knowledge assessment, mental model development, and metacomprehension accuracy. Participants were 68 (34 female, 34 male, ages 18-35, mean age = 21.43) undergraduate students from a major southeastern university.
Upon arrival, participants were randomly assigned to one of the four experimental conditions; analysis revealed no significant differences between the groups. Participants then progressed through the three stages of the experiment. In Stage I, participants completed forms regarding informed consent, general biographical information, and task self-efficacy. In Stage II, participants completed the self-paced tutorial based on the Distributed Dynamic Decision Making (DDD) model, a simulated military command and control environment aimed at creating events to encourage team coordination and performance (for a detailed description, see Kleinman & Serfaty, 1989). The manner by which participants worked through the tutorial was determined by their assigned concept map condition. Upon finishing each module of the tutorial, participants completed a metacomprehension prediction question. In Stage III, participants completed the computer-based knowledge assessment test, covering both declarative and integrative knowledge, followed by the metacomprehension postdiction question. Participants then completed the card sort task as the assessment of mental model development. Finally, participants completed a general study survey and were debriefed as to the purpose of the study. The entire experiment lasted approximately 2 to 3 hours. Results indicated that the GCMD condition showed stronger metacomprehension accuracy, via prediction measures, than the other three conditions (control, advanced organizer, and feedback), and, specifically, significantly higher correlations in declarative knowledge. Self-efficacy measures also indicated that the higher metacomprehension accuracy correlation observed in the GCMD condition was likely the result of the intervention, and not due to differences in self-efficacy in that group of participants. Likewise, the feedback and GCMD conditions led to significantly high correlations for metacomprehension accuracy based on levels of understanding on the declarative knowledge tutorial module (Module 1). The feedback condition also showed similar responses for the integrative knowledge module (Module 2). The advanced organizer, feedback, and GCMD conditions were also found to have significantly high correlations between self-reported postdictions of performance on the knowledge assessment and the actual knowledge assessment results. However, results also indicated no significant differences between the four conditions in mental model assessment and knowledge assessment. Nevertheless, results support the relevance of accurate mental model development in knowledge assessment outcomes. Retrospectively, two opposing factors may have complicated efforts to detect additional differences between groups. From one side, the experimental measures may not have been rigorous enough to filter out the effect of the intervention itself. Conversely, software usability issues and the resulting limitations in experimental design may have worked against the two concept mapping conditions and, inadvertently, suppressed effects of the intervention. Future research on the GCMD approach will likely review cognitive workload, concept mapping software design, and the sensitivity of the measures involved.
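The metacomprehension-accuracy measure described above is, in essence, a correlation between what learners predict about their performance and how they actually perform. A minimal sketch of that computation follows; the prediction and test scores are invented for illustration and are not data from the study.

```python
# Sketch: metacomprehension accuracy as the correlation between self-predicted
# and actual performance. The scores below are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

# One value per participant: predicted comprehension (0-100) and actual
# percent correct on the knowledge assessment.
predicted = np.array([80, 65, 90, 55, 70, 85, 60, 75])
actual    = np.array([72, 60, 88, 50, 68, 80, 65, 70])

r, p = pearsonr(predicted, actual)
print(f"metacomprehension accuracy r = {r:.2f} (p = {p:.3f})")
```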
30

Investigating The Reliability And Validity Of Knowledge Structure Evaluations: The Influence Of Rater Error And Rater Limitation

Harper-Sciarini, Michelle 01 January 2010 (has links)
The likelihood of conducting safe operations increases when operators have effectively integrated their knowledge of the operation into meaningful relationships, referred to as knowledge structures (KSs). Unlike knowing isolated facts about an operation, well integrated KSs reflect a deeper understanding. It is, however, only the isolated facts that are often evaluated in training environments. To know whether an operator has formed well integrated KSs, KS evaluation methods must be employed. Many of these methods, however, require subjective, human-rated evaluations. These ratings are often prone to the negative influence of a rater's limitations, such as rater biases and cognitive limitations; therefore, the extent to which KS evaluations are beneficial depends on the degree to which the rater's limitations can be mitigated. The main objective of this study was to identify factors that will mitigate rater limitations and test their influence on the reliability and validity of KS evaluations. These factors were identified through the delineation of a framework that represents how a rater's limitations influence the cognitive processes that occur during the evaluation process. From this framework, one factor (i.e., operation knowledge) and three mitigation techniques (i.e., frame-of-reference training, reducing the complexity of the KSs, and providing referent material) were identified. Ninety-two participants rated the accuracy of eight KSs over a period of two days. Results indicated that reliability was higher after training. Furthermore, several interactions indicated that the benefits of domain knowledge, referent material, and reduced complexity existed within subsets of the participants. For example, reduced complexity increased reliability only among evaluators with less knowledge of the operation. Also, referent material increased reliability only for those who scored less complex KSs. Both the practical and theoretical implications of these results are provided.
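One simple way to index the kind of inter-rater reliability discussed above is the average pairwise correlation among raters' scores over the same set of knowledge structures. The sketch below uses invented ratings and a deliberately simple statistic; it is not the study's actual reliability analysis.

```python
# Sketch: a simple inter-rater reliability index -- the mean pairwise Pearson
# correlation among raters scoring the same eight knowledge structures.
# The ratings below are invented for illustration.
import numpy as np
from itertools import combinations

# rows = raters, columns = the eight knowledge structures being evaluated
ratings = np.array([
    [4, 3, 5, 2, 4, 3, 5, 2],
    [4, 2, 5, 2, 3, 3, 4, 1],
    [5, 3, 4, 3, 4, 2, 5, 2],
])

pairwise = [np.corrcoef(ratings[i], ratings[j])[0, 1]
            for i, j in combinations(range(len(ratings)), 2)]
print(f"mean inter-rater correlation: {np.mean(pairwise):.2f}")
```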
