491

ONTOLOGY-DRIVEN SEMI-SUPERVISED MODEL FOR CONCEPTUAL ANALYSIS OF DESIGN SPECIFICATIONS

Shankar, Arunprasath 29 August 2014
No description available.
492

Visualizing Epistemic Structures of Interrogative Domain Models

Hughes, Tracey D. 24 November 2008
No description available.
493

LTCS-Report

Technische Universität Dresden 17 March 2022
This series consists of technical reports produced by the members of the Chair for Automata Theory at TU Dresden. The purpose of these reports is to provide detailed information (e.g., formal proofs, worked-out examples, or experimental results) for articles published in conference proceedings with page limits. The topics of these reports lie in different areas of the overall research agenda of the chair, which includes Logic in Computer Science, symbolic AI, Knowledge Representation, Description Logics, Automated Deduction, and Automata Theory and its applications in the other fields.
494

Integración de argumentación rebatible y ontologías en el contexto de la web semántica : formalización y aplicaciones / Integration of defeasible argumentation and ontologies in the context of the Semantic Web: formalization and applications

Gómez, Sergio Alejandro 25 June 2009
The current World Wide Web is composed mainly of documents written for visual presentation to human users. To exploit the full potential of the web, however, computer programs, or agents, must be able to understand the information it contains. In this sense, the Semantic Web is a future vision of the web in which information has exact meaning, allowing computers to understand and reason over the information found there. The Semantic Web proposes to solve the problem of assigning semantics to web resources by means of metadata whose meaning is given through ontology definitions, which are formalizations of the knowledge of an application domain. The World Wide Web Consortium standard proposes that ontologies be defined in the OWL language, which is based on Description Logics. Although ontology definitions expressed in Description Logics can be processed by standard reasoners, such reasoners are unable to deal with inconsistent ontologies. Argumentation systems are a formalization of defeasible reasoning that puts special emphasis on the notion of argument; building arguments allows an agent to draw conclusions in the presence of incomplete and potentially contradictory information. In particular, Defeasible Logic Programming (DeLP) is a formalism based on defeasible argumentation and Logic Programming. In this dissertation, the importance of ontology definitions for realizing the Semantic Web initiative, together with the presence of incomplete and potentially contradictory ontologies, motivated the development of a framework for reasoning with so-called δ-ontologies. Previous research by other authors established that a subset of Description Logics can be effectively translated into a subset of Logic Programming. Our proposal gives semantics to ontologies expressed in Description Logics by means of Defeasible Logic Programs in order to cope with inconsistent ontology definitions on the Semantic Web. That is, given an OWL ontology expressed in the OWL DL language, an equivalent ontology expressed in Description Logics can be built; when that ontology satisfies certain restrictions, it can be expressed as a DeLP program P. Then, given a query about the membership of an instance a in a certain concept C, posed with respect to the OWL ontology, a dialectical analysis is performed on P to determine all the reasons for and against the plausibility of the claim C(a). Data integration, in turn, is the problem of combining data residing in different sources and providing the user with a unified view of those data. Designing data integration systems is particularly important in the context of Semantic Web applications, where ontologies are developed independently of one another and may therefore be mutually inconsistent. Given an ontology, we are interested in knowing under what conditions an individual is an instance of a certain concept.
Because, when several ontologies are involved, the same concept may carry different names for the same meaning, or even the same name for different meanings, bridge (or articulation) rules were used to relate the concepts of two different ontologies; a concept thus corresponds to a view over concepts of another ontology. We also show under what conditions the proposed reasoning with δ-ontologies can be adapted to the two kinds of ontology integration considered in the specialized literature, global-as-view and local-as-view. In addition, we analyze the formal properties that follow from this novel approach to handling inconsistent ontologies on the Semantic Web. The main results are that, since the interpretation of δ-ontologies as Defeasible Logic Programs is carried out through a transformation function that preserves the semantics of the ontologies involved, the answers obtained for queries are sound; we also show that the operator presented is consistent and meaningful. Reasoning in the presence of inconsistent ontologies makes it possible to tackle effectively certain application problems in electronic commerce, where the business-rule model can be specified in terms of ontologies. The ability to reason over inconsistent ontologies then enables conceptually clearer alternative approaches: certain business decisions, made in light of a possibly inconsistent set of business rules expressed as one or more ontologies, can be automated, and the system can explain why a given conclusion was reached. Consequently, we present an application of reasoning over inconsistent ontologies by means of defeasible argumentation to the modeling of forms on the World Wide Web. The notion of forms as a way of organizing and presenting data has been in use since the beginning of the World Wide Web. Web forms have evolved alongside new markup languages, in which validation scripts can be provided as part of the form code to verify that the intended meaning of the form is correct. For the form designer, however, part of this intended meaning frequently involves features that are not constraints in themselves, but rather emergent attributes of the form, which yield plausible conclusions in the context of incomplete and potentially contradictory information. Since the value of such attributes can change in the presence of new knowledge, we call them defeasible attributes. We therefore proposed extending web forms to incorporate defeasible attributes as part of the knowledge the form designer can encode, by means of so-called δ-forms; this knowledge can be specified as a DeLP program and, subsequently, as an ontology expressed in Description Logics.
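The pipeline the abstract describes (DL ontology → DeLP program → dialectical analysis of C(a)) can be illustrated with a deliberately tiny sketch. The Python toy below is not Gómez's formalism: the rule syntax, the grounding to a single individual, and the specificity-based defeat criterion are all simplifying assumptions made here for illustration.

```python
# Toy sketch only: rule syntax, single-individual grounding, and the
# specificity criterion are invented simplifications, not actual DeLP.

facts = {"penguin"}                      # observed: the individual is a penguin
strict = {"bird": {"penguin"}}           # from a DL axiom such as Penguin ⊑ Bird
defeasible = [                           # (head, body, sign): tentative rules
    ("flies", {"bird"}, True),           # birds usually fly
    ("flies", {"penguin"}, False),       # penguins usually do not
]

def closure(base):
    """Strict consequences of a set of literals."""
    derived, changed = set(base), True
    while changed:
        changed = False
        for head, body in strict.items():
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

known = closure(facts)                   # {"penguin", "bird"}

def query(concept):
    """Dialectical analysis: gather arguments for and against `concept`,
    keep those not defeated by a more specific counter-argument."""
    args = [(sign, body) for head, body, sign in defeasible
            if head == concept and body <= known]
    undefeated = [(sign, body) for sign, body in args
                  if not any(s2 != sign and closure(body) < closure(b2)
                             for s2, b2 in args)]
    signs = {sign for sign, _ in undefeated}
    return {frozenset({True}): "warranted",
            frozenset({False}): "negation warranted"}.get(frozenset(signs),
                                                          "undecided")

print(query("flies"))  # -> "negation warranted": penguin evidence is more specific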
495

Towards Novelty-Resilient AI: Learning in the Open World

Trevor A Bonjour (18423153) 22 April 2024
Current artificial intelligence (AI) systems are proficient at tasks in a closed-world setting where the rules are often rigid. However, in real-world applications, the environment is usually open and dynamic. In this work, we investigate the effects of such dynamic environments on AI systems and develop ways to mitigate those effects. Central to our exploration is the concept of novelties. Novelties encompass structural changes, unanticipated events, and environmental shifts that can confound traditional AI systems. We categorize novelties based on their representation, anticipation, and impact on agents, laying the groundwork for systematic detection and adaptation strategies. We explore novelties in the context of stochastic games. Decision-making in stochastic games exercises many aspects of the same reasoning capabilities needed by AI agents acting in the real world. A multi-agent stochastic game allows for infinitely many ways to introduce novelty. We propose an extension of the deep reinforcement learning (DRL) paradigm to develop agents that can detect and adapt to novelties in these environments. To address the sample efficiency challenge in DRL, we introduce a hybrid approach that combines fixed-policy methods with traditional DRL techniques, offering enhanced performance in complex decision-making tasks. We present a novel method for detecting anticipated novelties in multi-agent games, leveraging information theory to discern patterns indicative of collusion among players. Finally, we introduce DABLER, a pioneering deep reinforcement learning architecture that dynamically adapts to changing environmental conditions through broad learning approaches and environment recognition. Our findings underscore the importance of developing AI systems equipped to navigate the uncertainties of the open world, offering promising pathways for advancing AI research and application in real-world settings.
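The abstract does not spell out DABLER's detection mechanism; as a generic, hedged illustration of one ingredient — flagging an environmental shift as a novelty — the sketch below compares recent observation statistics against those gathered in the closed world. The threshold and the Gaussian data are arbitrary choices for the example, not the dissertation's method.

```python
# Generic illustration, not DABLER itself: flag a novelty when a recent window
# of observations drifts from the statistics gathered in the closed world.
import numpy as np

rng = np.random.default_rng(0)

def novelty_score(train_obs, window):
    """Largest per-feature z-score of the window mean under training statistics."""
    mu = train_obs.mean(axis=0)
    sem = train_obs.std(axis=0) / np.sqrt(len(window)) + 1e-8
    return float(np.max(np.abs(window.mean(axis=0) - mu) / sem))

train = rng.normal(0.0, 1.0, size=(10_000, 4))   # closed-world experience
nominal = rng.normal(0.0, 1.0, size=(64, 4))     # same regime at test time
shifted = rng.normal(0.8, 1.0, size=(64, 4))     # environment has changed

THRESHOLD = 4.0                                  # arbitrary choice for the sketch
for name, window in [("nominal", nominal), ("shifted", shifted)]:
    s = novelty_score(train, window)
    print(f"{name}: score={s:.1f} -> novelty detected: {s > THRESHOLD}")
```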
496

Deep Learning Based Models for Cognitive Autonomy and Cybersecurity Intelligence in Autonomous Systems

Ganapathy Mani (8840606) 21 June 2022
Cognitive autonomy of an autonomous system depends on its cyber module's ability to comprehend the actions and intent of the applications and services running on that system. The autonomous system should be able to accomplish this with limited or no human intervention. These mission-critical autonomous systems are often deployed in unpredictable and dynamic environments and are vulnerable to evasive cyberattacks. In particular, some of these cyberattacks are Advanced Persistent Threats, where an attacker conducts reconnaissance for a long period of time to ascertain system features, learn system defenses, and adapt to successfully execute the attack while evading detection. Thus, an autonomous system's cognitive autonomy and cybersecurity intelligence depend on its capability to learn, classify applications (good and bad), predict the attacker's next steps, and remain operational to carry out mission-critical tasks even under cyberattack. In this dissertation, we propose novel learning and prediction models for enhancing cognitive autonomy and cybersecurity in autonomous systems. We develop (1) a model using deep learning, along with a model-selection framework, that can classify benign and malicious operating contexts of a system based on performance counters; (2) a deep learning based natural language processing model that uses instruction sequences extracted from memory to learn and profile the behavior of evasive malware; (3) a scalable deep learning based object detection model with data pre-processing assisted by fuzzy-based clustering; (4) fundamental guiding principles for cognitive autonomy using Artificial Intelligence (AI); (5) a model for privacy-preserving autonomous data analytics; and finally (6) a model for backup and replication based on combinatorial balanced incomplete block design, in order to provide continuous availability in mission-critical systems. This research provides effective and computationally efficient deep learning based solutions for detecting evasive cyberattacks and increasing the autonomy of a system from the application level to the hardware level.
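As a minimal stand-in for item (1) — classifying benign versus malicious operating contexts from performance counters — the sketch below trains a tiny logistic model on synthetic counter vectors. The dissertation's models are deep networks with a model-selection framework; everything here (data, dimensions, learning rate) is invented for illustration.

```python
# Illustrative stand-in only: a logistic classifier over synthetic
# "performance counter" vectors (the dissertation uses deep models).
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 8                            # samples per class, counters per sample
benign = rng.normal(0.0, 1.0, (n, d))     # e.g. cache misses, branch mispredictions
malicious = rng.normal(0.5, 1.3, (n, d))  # evasive workload shifts the counters
X = np.vstack([benign, malicious])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = np.zeros(d), 0.0
for _ in range(500):                      # plain gradient descent on the log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print(f"training accuracy: {(pred == y).mean():.2f}")
```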
497

物體輪廓診斷性對形式內促發與跨形式促發之影響 / The effect of object contour diagnosticity on within-modal and cross-modal priming

王林宇, Linyu Lennel Wang Unknown Date
When people encounter an object they have seen before, they identify it faster (or more accurately); this phenomenon is called the priming effect (P-P priming for short). Likewise, reading the name of an object (i.e., a word) and then viewing a picture of that object a few minutes later also produces priming (W-P priming for short). Many studies indicate that W-P priming is a form of implicit memory: the effect arises even when the individual makes no deliberate attempt to recall having seen the object, and P-P priming is consistently larger than W-P priming. Some studies, however, have found W-P priming equal in magnitude to P-P priming, a counter-intuitive result that implicit memory theory cannot plausibly explain. According to Paivio's dual coding theory (Paivio, 1986, 1991), recognizing a concrete noun (e.g., the name of an object) simultaneously accesses or activates two kinds of knowledge representation: a verbal representation involving the left hemisphere, and an image representation involving both the left and right hemispheres. Much neurolinguistic research indicates that the neural mechanisms for processing concrete nouns include not only the left but also the right hemisphere. Reading a concrete noun may thus access or activate an object's internal representation; if the object's contour is distinctive, reading its name may access or activate the complete or essential representation of that object, making W-P priming equal to P-P priming. This study therefore manipulated the diagnosticity of object contours to explain the finding that W-P priming can equal P-P priming. Experiments 1 and 2 used picture-naming and picture perceptual-identification tasks, respectively, to examine the effect of object contour diagnosticity on priming. For globally non-diagnostic (GN) objects, P-P priming exceeded W-P priming, consistent with many previous results; for globally diagnostic (GD) objects, however, W-P priming equaled P-P priming, showing that contour diagnosticity affects priming and that reading the name of a GD object can access or activate its global or essential knowledge representation. Experiment 3 used divided-visual-field presentation to examine the hemispheric lateralization of W-P priming for GD objects: significant W-P priming appeared only in the right hemisphere, indicating that W-P priming is processed mainly by the right hemisphere and, by inference from Paivio's dual coding theory, that its nature mainly involves image-based knowledge representations. Explicit memory was also manipulated to test whether it contaminated W-P priming and thereby produced the equality of W-P and P-P priming. For both object types, recognition memory was significantly better in the P-P condition than in the W-P condition, showing a single dissociation between priming and explicit memory for GD objects; in other words, W-P priming for GD objects is not influenced or contaminated by explicit memory. In addition, Experiment 4 showed that deliberate mental-imagery strategies are not involved in W-P priming, indicating that reading a GD object's name accesses its conceptual representation automatically and very rapidly. / Implicit memory is usually assessed by repetition priming effects: better performance, in accuracy or response time, for stimuli that have been previously encountered, compared with performance on new stimuli. Picture-naming priming has been examined in studies that compared priming in participants who named pictures in the study phase and named those same pictures in the test phase (P-P condition) versus participants who read words that were the names of pictures in the study phase and named pictures corresponding to those words in the test phase (W-P condition). Many studies demonstrated that W-P priming is less than P-P priming in the picture-naming task and other similar object recognition tasks. However, in sharp contrast to the above studies, some studies reported equivalent magnitudes of P-P and W-P naming priming. Theories of implicit memory cannot account for this counter-intuitive phenomenon. According to Paivio's dual-coding theory, the processing of abstract nouns (e.g., justice) relies on verbal code representations of the left cerebral hemisphere only, whereas concrete nouns (e.g., airplane) additionally access a second image-based processing system in the right cerebral hemisphere (Paivio, 1986, 1991). Paivio's theory is supported by much research in neurolinguistics. If the contour of an object is very distinctive or diagnostic, reading the name of such an object could access the whole or essential representation of the object. Following this idea, I manipulated the global diagnosticity of object contours to examine whether P-P priming is always larger than W-P priming. I found that P-P priming was equivalent to W-P priming for "globally diagnostic" (GD) objects, but P-P priming was still larger than W-P priming for "globally non-diagnostic" (GN) objects. This pattern appeared in both picture-naming (Experiment 1) and picture perceptual-identification (Experiment 2) tasks. Experiment 3 showed that significant W-P priming appeared only when GD objects in the test phase were presented to the right cerebral hemisphere (in the left visual field). Based on Paivio's dual coding theory (Paivio, 1986, 1991) and research in neurolinguistics, the nature of W-P priming for GD objects was inferred to be image-based processing.
Better explicit (conscious) memory performance (recognition memory) in the P-P condition than in the W-P condition showed that the equivalent priming across P-P and W-P conditions for GD objects was dissociated from the influence of conscious recognition memory. Experiment 4 showed that the intentional strategy of generating mental imagery was not necessarily involved in W-P priming. These results suggest that reading the names of globally diagnostic objects can access, automatically and unconsciously, the representation or essential features of globally diagnostic objects, and that the right cerebral hemisphere might be responsible for this processing.
498

Modélisation des signes dans les ontologies biomédicales pour l'aide au diagnostic / Modeling signs in biomedical ontologies to support diagnosis

Donfack Guefack, Pierre Sidoine V. 20 December 2013
Introduction: Making a reliable medical diagnosis requires identifying the patient's disease from the observation of signs and symptoms. Ontologies provide an adequate and efficient formalism for representing biomedical knowledge. However, classical ontologies cannot represent the knowledge involved in medical diagnostic reasoning: probabilistic knowledge and imprecise or vague knowledge. Material and methods: We propose general knowledge-representation methods for building ontologies suited to medical diagnosis. They allow the representation of: (a) imprecise or vague knowledge, by discretizing concepts (defining several distinct categories with threshold values, or representing the various possible modalities); (b) probabilistic knowledge (the sensitivity and specificity of signs for diseases, and the prevalence of diseases in a given population), by reifying relations of arity greater than 2; (c) absent signs, by relations; and (d) knowledge about the diagnostic process itself, by SWRL rules. An abductive and probabilistic inference engine was designed and implemented. The methods were evaluated on real patient records.
Results: The methods were applied to three domains (plasma cell diseases, dental emergencies, and traumatic knee injuries), for which ontological models were built. The evaluation measured an average rate of 89.34% correct diagnoses. Discussion-Conclusion: Unlike the models proposed by Fenz, which supports only probabilistic reasoning, and by García-crespo, which expresses probabilities outside the ontological model, these methods yield a single model usable for both abductive and probabilistic reasoning. Using such a system will first require its integration into the hospital information system, so that the information in the electronic patient record can be exploited automatically; this integration could be facilitated by the use of the ontology on which the system is based.
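To make the probabilistic side concrete: given the sensitivities, specificities, and prevalences that such an ontology stores, the posterior probability of a disease under observed present or absent signs follows from a Bayes update. The sketch below is a schematic reading of that computation, with invented numbers and a conditional-independence assumption that the dissertation's engine may not share.

```python
# Schematic only: invented figures, naive conditional-independence assumption.
diseases = {
    # disease: (prevalence, {sign: (sensitivity, specificity)})
    "D1": (0.01, {"fever": (0.90, 0.70), "rash": (0.60, 0.95)}),
    "D2": (0.05, {"fever": (0.30, 0.70), "rash": (0.10, 0.95)}),
}

def posterior(disease, findings):
    """P(disease | findings) via Bayes, treating each disease as a binary
    hypothesis and the signs as conditionally independent."""
    prevalence, signs = diseases[disease]
    p_d, p_not = prevalence, 1.0 - prevalence
    for sign, present in findings.items():
        se, sp = signs[sign]
        p_d *= se if present else (1.0 - se)        # P(finding | disease)
        p_not *= (1.0 - sp) if present else sp      # P(finding | no disease)
    return p_d / (p_d + p_not)

findings = {"fever": True, "rash": False}           # rash is an *absent* sign
for d in diseases:
    print(f"P({d} | findings) = {posterior(d, findings):.3f}")
```

Note how the absent sign contributes evidence too (via 1 − sensitivity and specificity), which is exactly why the ontology must represent absent signs explicitly.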
499

n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis

Yellamraju Tarun (6630578) 11 June 2019
Analyzing the structure of a dataset is a challenging problem in high-dimensions as the volume of the space increases at an exponential rate and typically, data becomes sparse in this high-dimensional space. This poses a significant challenge to machine learning methods which rely on exploiting structures underlying data to make meaningful inferences. This dissertation proposes the n-TARP method as a building block for high-dimensional data analysis, in both supervised and unsupervised scenarios.

The basic element, n-TARP, consists of a random projection framework to transform high-dimensional data to one-dimensional data in a manner that yields point separations in the projected space. The point separation can be tuned to reflect classes in supervised scenarios and clusters in unsupervised scenarios. The n-TARP method finds linear separations in high-dimensional data. This basic unit can be used repeatedly to find a variety of structures. It can be arranged in a hierarchical structure like a tree, which increases the model complexity, flexibility and discriminating power. Feature space extensions combined with n-TARP can also be used to investigate non-linear separations in high-dimensional data.

The application of n-TARP to both supervised and unsupervised problems is investigated in this dissertation. In the supervised scenario, a sequence of n-TARP based classifiers with increasing complexity is considered. The point separations are measured by classification metrics like accuracy, Gini impurity or entropy. The performance of these classifiers on image classification tasks is studied. This study provides an interesting insight into the working of classification methods. The sequence of n-TARP classifiers yields benchmark curves that put in context the accuracy and complexity of other classification methods for a given dataset. The benchmark curves are parameterized by classification error and computational cost to define a benchmarking plane. This framework splits this plane into regions of "positive-gain" and "negative-gain" which provide context for the performance and effectiveness of other classification methods. The asymptotes of benchmark curves are shown to be optimal (i.e. at Bayes Error) in some cases (Theorem 2.5.2).

In the unsupervised scenario, the n-TARP method highlights the existence of many different clustering structures in a dataset. However, not all structures present are statistically meaningful. This issue is amplified when the dataset is small, as random events may yield sample sets that exhibit separations that are not present in the distribution of the data. Thus, statistical validation is an important step in data analysis, especially in high-dimensions. However, in order to statistically validate results, often an exponentially increasing number of data samples are required as the dimensions increase. The proposed n-TARP method circumvents this challenge by evaluating statistical significance in the one-dimensional space of data projections. The n-TARP framework also results in several different statistically valid instances of point separation into clusters, as opposed to a unique "best" separation, which leads to a distribution of clusters induced by the random projection process.

The distributions of clusters resulting from n-TARP are studied. This dissertation focuses on small sample high-dimensional problems. A large number of distinct clusters are found, which are statistically validated. The distribution of clusters is studied as the dimensionality of the problem evolves through the extension of the feature space using monomial terms of increasing degree in the original features, which corresponds to investigating non-linear point separations in the projection space.

A statistical framework is introduced to detect patterns of dependence between the clusters formed with the features (predictors) and a chosen outcome (response) in the data that is not used by the clustering method. This framework is designed to detect the existence of a relationship between the predictors and response. This framework can also serve as an alternative cluster validation tool.

The concepts and methods developed in this dissertation are applied to a real world data analysis problem in Engineering Education. Specifically, engineering students' Habits of Mind are analyzed. The data at hand is qualitative, in the form of text, equations and figures. To use the n-TARP based analysis method, the source data must be transformed into quantitative data (vectors). This is done by modeling it as a random process based on the theoretical framework defined by a rubric. Since the number of students is small, this problem falls into the small sample high-dimensions scenario. The n-TARP clustering method is used to find groups within this data in a statistically valid manner. The resulting clusters are analyzed in the context of education to determine what is represented by the identified clusters. The dependence of student performance indicators like the course grade on the clusters formed with n-TARP are studied in the pattern dependence framework, and the observed effect is statistically validated. The data obtained suggests the presence of a large variety of different patterns of Habits of Mind among students, many of which are associated with significant grade differences. In particular, the course grade is found to be dependent on at least two Habits of Mind: "computation and estimation" and "values and attitudes."
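A minimal sketch of the n-TARP building block as described — repeatedly project to one dimension at random and keep the projection whose values split most cleanly — can be written in a few lines. The separation score, the median threshold, and the synthetic data below are simplifications chosen for this illustration; the statistical-validation step the dissertation emphasizes is omitted.

```python
# Sketch of the n-TARP building block: random 1-D projections, keep the one
# with the cleanest two-way point separation. Simplified for illustration.
import numpy as np

rng = np.random.default_rng(42)

def separation_score(values):
    """Fraction of variance explained by the best two-way split of 1-D data."""
    v = np.sort(values)
    tss = ((v - v.mean()) ** 2).sum()
    best_wcss = tss
    for i in range(1, len(v)):
        left, right = v[:i], v[i:]
        wcss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        best_wcss = min(best_wcss, wcss)
    return 1.0 - best_wcss / tss

# synthetic high-dimensional data with a hidden two-cluster structure
X = np.vstack([rng.normal(-1.0, 1.0, (100, 50)),
               rng.normal(+1.0, 1.0, (100, 50))])

best_score, best_w = -np.inf, None
for _ in range(200):                       # n random projection trials
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    score = separation_score(X @ w)
    if score > best_score:
        best_score, best_w = score, w

proj = X @ best_w
labels = proj > np.median(proj)            # median split stands in for a learned threshold
print(f"best separation score: {best_score:.2f}, "
      f"cluster sizes: {labels.sum()}/{(~labels).sum()}")
```

Repeating this unit on the resulting subsets would give the tree-structured hierarchy mentioned above; replacing the score with accuracy or Gini impurity gives the supervised variant.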
500

As contribuições da comunicação e do conhecimento da Ciência da Informação para a análise de requisitos no desenvolvimento de software / The contributions of communication and Information Science knowledge to requirements analysis in software development

Pinto Filho, Antonio Tupinambá Timbira de Oliveira 16 August 2005
This research seeks to understand, during the system-definition phase, the problems of valuing, prospecting, and formalizing information, focusing on communication, in its various stages, between the main actors of a software development project during requirements analysis: analysts and users. A requirements analysis model is presented that takes into account communication issues and knowledge from Information Science when formalizing system specifications. The model helps identify the most important aspects lying within the user's information domain so that they can be externalized and formalized, yielding a requirements definition as close as possible to reality.
