  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Hållbarhetsredovisning : En kvalitativ studie om begriplighet, användbarhet och relevans av en hållbarhetsrapport ur ett medarbetarperspektiv / Sustainability accounting : A qualitative study about the understanding, usefulness and relevance of sustainability reporting based on an employee perspective

Höök, Jennifer, Issak, Merna January 2019 (has links)
Title: Sustainability accounting - A qualitative study about the understanding, usefulness and relevance of sustainability reporting based on an employee perspective. Problematization: Previous research has examined sustainability reporting from an external point of view. There is, however, a gap in research on sustainability reporting from an employee perspective. Purpose: The purpose of this study is to develop a better comprehension of how employees perceive the information presented in a sustainability report and whether they feel that the report is aimed at them or at external stakeholders. Frame of reference: This chapter begins with previous research on the three qualitative characteristics used when preparing a financial report. The GRI framework is also presented. Lastly, stakeholder theory and the stakeholder dialogue are introduced. Method: This study has a qualitative approach in which 10 semi-structured interviews were held to collect data. Results: The employees have a better understanding of the information related to social and environmental issues. Furthermore, they only consider the information regarding the environment to be useful in their daily work. The employees believe that the environmental and social parts of the report are relevant, while the economic part could not be assessed by them. Conclusions: Diagrams, thorough explanations and knowledge of the subject increased the understanding of the report. The economic part can be better understood if the content is more extensive. The information becomes useful when the employees can apply it directly in their daily work and when they feel that it is aimed at them. They need to receive the information through meetings for it to be useful to them.
They had little previous knowledge of the economic part and were therefore not able to assess the relevance of that content, while the environmental and social areas were relevant because their content has recently been highlighted in newspapers and debates.
302

Introducing complex dependency structures into supervised components-based models / Structures de dépendance complexes pour modèles à composantes supervisées

Chauvet, Jocelyn 19 April 2019 (has links)
Une forte redondance des variables explicatives cause de gros problèmes d'identifiabilité et d'instabilité des coefficients dans les modèles de régression. Même lorsque l'estimation est possible, l'interprétation des résultats est donc extrêmement délicate. Il est alors indispensable de combiner à leur vraisemblance un critère supplémentaire qui régularise l'estimateur. Dans le sillage de la régression PLS, la stratégie de régularisation que nous considérons dans cette thèse est fondée sur l'extraction de composantes supervisées. Contraintes à l'orthogonalité entre elles, ces composantes doivent non seulement capturer l'information structurelle des variables explicatives, mais aussi prédire autant que possible les variables réponses, qui peuvent être de types divers (continues ou discrètes, quantitatives, ordinales ou nominales). La régression sur composantes supervisées a été développée pour les GLMs multivariés, mais n'a jusqu'alors concerné que des modèles à observations indépendantes. Or dans de nombreuses situations, les observations sont groupées. Nous proposons une extension de la méthode aux GLMMs multivariés, pour lesquels les corrélations intra-groupes sont modélisées au moyen d'effets aléatoires. À chaque étape de l'algorithme de Schall permettant l'estimation du GLMM, nous procédons à la régularisation du modèle par l'extraction de composantes maximisant un compromis entre qualité d'ajustement et pertinence structurelle. Comparé à la régularisation par pénalisation de type ridge ou LASSO, nous montrons sur données simulées que notre méthode non seulement permet de révéler les dimensions explicatives les plus importantes pour l'ensemble des réponses, mais fournit souvent une meilleure prédiction. La méthode est aussi évaluée sur données réelles. Nous développons enfin des méthodes de régularisation dans le contexte spécifique des données de panel (impliquant des mesures répétées sur différents individus aux mêmes dates).
Deux effets aléatoires sont introduits : le premier modélise la dépendance des mesures relatives à un même individu, tandis que le second modélise un effet propre au temps (possédant donc une certaine inertie) partagé par tous les individus. Pour des réponses Gaussiennes, nous proposons d'abord un algorithme EM pour maximiser la vraisemblance du modèle pénalisée par la norme L2 des coefficients de régression. Puis nous proposons une alternative consistant à donner une prime aux directions les plus "fortes" de l'ensemble des prédicteurs. Une extension de ces approches est également proposée pour des données non-Gaussiennes, et des tests comparatifs sont effectués sur données Poissonniennes. / High redundancy of explanatory variables results in identification troubles and a severe lack of stability of regression model estimates. Even when estimation is possible, a consequence is the near-impossibility of interpreting the results. It is then necessary to combine the model's likelihood with an extra criterion regularising the estimates. In the wake of PLS regression, the regularising strategy considered in this thesis is based on extracting supervised components. Such orthogonal components must not only capture the structural information of the explanatory variables, but also predict as well as possible the response variables, which can be of various types (continuous or discrete, quantitative, ordinal or nominal). Regression on supervised components was developed for multivariate GLMs, but so far concerned models with independent observations. However, in many situations, the observations are grouped. We propose an extension of the method to multivariate GLMMs, in which within-group correlations are modelled with random effects. At each step of Schall's algorithm for GLMM estimation, we regularise the model by extracting components that maximise a trade-off between goodness-of-fit and structural relevance.
Compared to penalty-based regularisation methods such as ridge or LASSO, we show on simulated data that our method not only reveals the important explanatory dimensions for all responses, but often gives a better prediction too. The method is also assessed on real data. We finally develop regularisation methods in the specific context of panel data (involving repeated measures on several individuals at the same time-points). Two random effects are introduced: the first one models the dependence of measures related to the same individual, while the second one models a time-specific effect (thus having a certain inertia) shared by all the individuals. For Gaussian responses, we first propose an EM algorithm to maximise the likelihood penalised by the L2-norm of the regression coefficients. Then, we propose an alternative which rather gives a bonus to the "strongest" directions in the explanatory subspace. An extension of these approaches is also proposed for non-Gaussian data, and comparative tests are carried out on Poisson data.
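The contrast the abstract draws between penalty-based regularisation and supervised components can be sketched in a few lines of numpy for a single Gaussian response. The simulated data, the penalty weight `lam`, and the one-component construction below are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)   # a strongly redundant predictor
y = X[:, 0] + rng.normal(scale=0.1, size=n)

# Ridge-type regularisation: penalise the L2-norm of the coefficients,
# which stabilises the estimate despite the collinearity.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Supervised-component idea (PLS-flavoured): extract one direction w in the
# predictor space chosen to covary maximally with the response, then use the
# component scores X @ w instead of the raw, redundant predictors.
w = X.T @ y
w = w / np.linalg.norm(w)
component = X @ w
```

Regressing on a small number of such components, rather than on the redundant predictors directly, is what gives the supervised-component approach its stability.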
303

Improving the relevance of search results via search-term disambiguation and ontological filtering

Zhu, Dengya January 2007 (has links)
With the exponential growth of the Web and the inherent polysemy and synonymy of natural languages, search engines face many challenges, such as information overload, mismatch of search results, missing relevant documents, poorly organized search results, and mismatch with the human mental model of clustering engines. To address these issues, much effort, including employing different information retrieval (IR) models, information categorization/clustering, personalization, the semantic Web, ontology-based IR, and so on, has been devoted to improving the relevance of search results. The major focus of this study is to dynamically re-organize Web search results under a socially constructed hierarchical knowledge structure, to help information seekers access and manipulate the retrieved search results, and consequently to improve the relevance of search results. / To achieve this research goal, a special search-browser is developed and its retrieval effectiveness evaluated. The hierarchical structure of the Open Directory Project (ODP) is employed as the socially constructed knowledge structure, represented by a Java Tree component. The Yahoo! Search Web Services API is utilized to obtain search results directly from Yahoo! search engine databases. The Lucene text search engine calculates similarities between each returned search result and the semantic characteristics of each category in the ODP, and the search results are then assigned to the corresponding ODP categories by a Majority Voting algorithm. When an interesting category is selected by a user, only the search results categorized under that category are presented to the user, and the quality of the search results is consequently improved. / Experiments demonstrate that the proposed approach can improve the precision of Yahoo! search results at the 11 standard recall levels from an average of 41.7 per cent to 65.2 per cent, an improvement of 23.5 percentage points.
This conclusion is verified by comparing the improvements in P@5 and P@10 between the Yahoo! search results and the categorized search results of the special search-browser. The improvements in P@5 and P@10 are 38.3 percentage points (85 per cent - 46.7 per cent) and 28 percentage points (70 per cent - 42 per cent) respectively. The experiment is carefully designed and controlled. To minimize the subjectiveness of relevance judgments, five judges (experts) are asked to make their relevance judgments independently, and the final relevance judgment is a combination of the five judges' judgments. The judges are presented with only the search-terms, the information needs, and the 50 search results of the Yahoo! Search Web Services API; they are asked to make relevance judgments based on this information alone, with no categorization information provided. / The first contribution of this research is the use of an extracted category-document to represent the semantic characteristics of each ODP category. A category-document is composed of the topic of the category, the description of the category, and the titles and brief descriptions of the Web pages submitted under the category. Experimental results demonstrate that the category-documents can represent the semantic characteristics of the ODP in most cases. Furthermore, for machine learning algorithms, the extracted category-documents can be utilized as training data, which would otherwise demand much human labor to create in order for the learning algorithm to be properly trained. The second contribution of this research is the suggestion of the new concepts of relevance judgment convergent degree and relevance judgment divergent degree, which measure how well different judges agree with each other when asked to judge the relevance of a list of search results. When the relevance judgment convergent degree of a search-term is high, an IR algorithm should obtain a higher precision as well.
On the other hand, if the relevance judgment convergent degree is low, or the relevance judgment divergent degree is high, it is questionable whether the data should be used to evaluate the IR algorithm. This intuition is confirmed by the experiment in this research. The last contribution of this research is that, to the best of my knowledge, the developed search-browser is the first IR system (IRS) to utilize the ODP hierarchical structure to categorize and filter search results.
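The headline figures above (P@5, P@10, and the pooled five-judge relevance labels) rest on short computations; a minimal sketch, with function names of my own choosing:

```python
def precision_at_k(ranked_relevance, k):
    """Fraction of the top-k ranked results judged relevant (1) vs not (0)."""
    return sum(ranked_relevance[:k]) / k

def majority_vote(judgements):
    """Combine several judges' binary relevance votes into one final label."""
    return sum(judgements) > len(judgements) / 2
```

For example, a ranking whose first five results are judged `[1, 1, 0, 1, 0]` has P@5 = 0.6, and a result judged relevant by three of five experts receives a final label of relevant.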
304

財務資訊與無形資產密集企業價值攸關性之探討 / On the value-relevance of financial information in intangible-intensive industries

林郁昕, Lin, Yu-Hsin Unknown Date (has links)
本研究探討在智慧資本觀念倍受重視之際，傳統財務資訊與企業之價值攸關性是否因此受到影響，並進一步探究不同因素是否會影響財務資訊的價值攸關性。 本研究以Collins, Maydew, and Weiss(1997)及Lev and Zarowin(1999)為基礎，分析每股盈餘、每股淨值對股價與股票報酬率之價值攸關性變動情形，首先以橫斷面分析民國79年至88年間上市公司之財務資訊解釋能力，再以時間序列分析探討影響財務資訊解釋能力之因素，並進一步分析影響無形資產密集產業與傳統產業財務資訊價值攸關性不同之因素。 研究結果發現，每股盈餘及每股淨值之價值攸關性並未減少，且有上升之趨勢，而每股淨值之增額價值攸關性亦上升。依產業性質、盈餘品質、盈餘正負區分樣本之實證結果顯示，無形資產密集產業、有常續性項目之樣本以及常續性盈餘為正之樣本的價值攸關性較高。時間可以解釋價值攸關性之變動，但與其他因素合併考量時，則不具有解釋能力。無形資產密集產業與傳統產業財務資訊價值攸關性主要受到時間因素影響，研究發展費用(創新之代理變數)、員工生產力(人力資源之代理變數)及存貨週轉率(結構資本之代理變數)等因素無法解釋產業價值攸關性之變化。 / This thesis investigates whether traditional financial information, such as earnings, book value of equity, and cash flow information, has lost its value relevance as the concept of intellectual capital rises. Furthermore, the thesis examines what factors explain the value relevance of financial information. Based on the studies of Collins, Maydew, and Weiss (1997) and Lev and Zarowin (1999), this thesis first analyzes the value relevance of earnings, book value of equity, and operating cash flow over time, and then explores possible explanations for the observed temporal shift in explanatory power. In addition, this thesis analyzes the factors that affect the differences in the value relevance of financial information between intangible-intensive industries and traditional industries. The empirical results indicate that the value relevance of earnings and book value of equity has not diminished; instead, it appears to have increased slightly over time. The incremental explanatory power of book value of equity has also increased over the sample period. Partitioning the sample by industry, earnings quality, and the sign of earnings shows that value relevance is higher for intangible-intensive industries, for samples with recurring items, and for samples with positive recurring earnings. With time as the sole explanatory variable, time does explain the changes in the value relevance of financial information.
However, the time factor loses its explanatory power when other factors are incorporated into the model. This study finds that research and development intensity (a proxy for innovation), employee productivity (a proxy for human resources), and inventory turnover (a proxy for structural capital) do not help explain the shift in the value relevance of financial information.
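Value relevance in the Collins, Maydew, and Weiss tradition is measured as the explanatory power (R²) of a price-levels regression on earnings per share and book value per share; a minimal numpy sketch under that reading (the toy data below are invented for illustration):

```python
import numpy as np

def value_relevance_r2(price, eps, bvps):
    """R^2 of the price-levels regression  P = a + b1*EPS + b2*BVPS."""
    X = np.column_stack([np.ones_like(eps), eps, bvps])
    beta, *_ = np.linalg.lstsq(X, price, rcond=None)
    resid = price - X @ beta
    return 1.0 - (resid @ resid) / (((price - price.mean()) ** 2).sum())
```

Estimating this R² year by year, and then regressing the yearly R² series on time and on candidate explanatory factors, is the kind of two-stage design the abstract describes.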
305

Evaluation of Effective XML Information Retrieval

Pehcevski, Jovan, jovanp@cs.rmit.edu.au January 2007 (has links)
XML is being adopted as a common storage format in scientific data repositories, digital libraries, and on the World Wide Web. Accordingly, there is a need for content-oriented XML retrieval systems that can efficiently and effectively store, search and retrieve information from XML document collections. Unlike traditional information retrieval systems where whole documents are usually indexed and retrieved as information units, XML retrieval systems typically index and retrieve document components of varying granularity. To evaluate the effectiveness of such systems, test collections where relevance assessments are provided according to an XML-specific definition of relevance are necessary. Such test collections have been built during four rounds of the INitiative for the Evaluation of XML Retrieval (INEX). There are many different approaches to XML retrieval; most approaches either extend full-text information retrieval systems to handle XML retrieval, or use database technologies that incorporate existing XML standards to handle both XML presentation and retrieval. We present a hybrid approach to XML retrieval that combines text information retrieval features with XML-specific features found in a native XML database. Results from our experiments on the INEX 2003 and 2004 test collections demonstrate the usefulness of applying our hybrid approach to different XML retrieval tasks. A realistic definition of relevance is necessary for meaningful comparison of alternative XML retrieval approaches. The three relevance definitions used by INEX since 2002 comprise two relevance dimensions, each based on topical relevance. We perform an extensive analysis of the two INEX 2004 and 2005 relevance definitions, and show that assessors and users find them difficult to understand. We propose a new definition of relevance for XML retrieval, and demonstrate that a relevance scale based on this definition is useful for XML retrieval experiments. 
Finding the appropriate approach to evaluate XML retrieval effectiveness is the subject of ongoing debate within the XML information retrieval research community. We present an overview of the evaluation methodologies implemented in the current INEX metrics, which reveals that the metrics follow different assumptions and measure different XML retrieval behaviours. We propose a new evaluation metric for XML retrieval and conduct an extensive analysis of the retrieval performance of simulated runs to show what is measured. We compare the evaluation behaviour obtained with the new metric to the behaviours obtained with two of the official INEX 2005 metrics, and demonstrate that the new metric can be used to reliably evaluate XML retrieval effectiveness. To analyse the effectiveness of XML retrieval in different application scenarios, we use evaluation measures in our new metric to investigate the behaviour of XML retrieval approaches under the following two scenarios: the ad-hoc retrieval scenario, exploring the activities carried out as part of the INEX 2005 Ad-hoc track; and the multimedia retrieval scenario, exploring the activities carried out as part of the INEX 2005 Multimedia track. For both application scenarios we show that, although different values for retrieval parameters are needed to achieve the optimal performance, the desired textual or multimedia information can be effectively located using a combination of XML retrieval approaches.
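In the spirit of the cumulated-gain style measures discussed at INEX, a toy normalised-gain computation over a ranked list of elements with graded relevance scores might look as follows; this is an illustration of the general idea, not the metric proposed in the thesis:

```python
def cumulated_gain(run_scores, pool_scores, k):
    """Toy normalised cumulated gain at rank k: the graded relevance
    accumulated by the run's top-k results, divided by the gain of an
    ideal ranking built from the whole assessment pool."""
    gain = sum(run_scores[:k])
    ideal = sum(sorted(pool_scores, reverse=True)[:k])
    return gain / ideal if ideal else 0.0
```

A run whose top three elements score `[2, 1, 0]` against a pool containing scores `[0, 1, 2, 1]` achieves 3 of the ideal 4, i.e. 0.75; different choices of the graded scale and the ideal ranking are exactly where the INEX metrics diverge.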
306

Modélisation de la pertinence en recherche d'information : modèle conceptuel, formalisation et application / Modelling relevance in information retrieval: conceptual model, formalisation and application

Denos, Nathalie 28 October 1997 (has links) (PDF)
Information retrieval systems exist to give the user access to documents that help solve the information problem motivating the search. The system can thus be seen as an instrument for predicting the relevance of the documents in the corpus to the user. The cues traditionally used by the system to estimate this relevance are thematic in nature, and are supplied by the user as a set of keywords: the query. The system therefore implements a matching function between documents and query that models the thematic dimension of relevance. However, the range of uses and users of such systems keeps widening, as does the nature of the documents in the corpora, which are no longer only textual. We draw two consequences from this evolution. First, the assumption that the thematic factor of relevance is predominant (and therefore the only one to be modelled in systems) no longer holds. The many other factors of relevance intervene in such a way that they compromise system performance in the context of real use. These other factors depend strongly on the individual and on his or her information-seeking situation, which calls into question the conception of system relevance as a matching function that takes into account only the user-independent factors of relevance. Second, the interactive use of the system helps define the user's search situation, and thereby contributes to the performance of the information retrieval system. A number of characteristics of the interaction are directly linked to the modelling of system relevance and to concerns specific to information retrieval.
This thesis builds on work on the factors of relevance for an individual to define a design model of system relevance that takes into account the factors arising from the interactive use of the system and from the need to adapt the matching function to the particular search situation in which the user finds himself or herself. We thus define three new functions of the information retrieval system, in terms of system use: enabling the detection of the relevance of the retrieved documents, enabling the understanding of the reasons for their system relevance, and enabling the reformulation of the information problem within an iterative search process. The notion of a relevance schema replaces that of the query as the interface between system relevance and the user. This relevance schema integrates two types of parameters allowing the system to adapt to the search situation: on the one hand, semantic parameters, which cover not only the thematic dimension of relevance but also other relevance criteria linked to the indexed characteristics of the documents; on the other hand, pragmatic parameters, which take into account the relevance factors linked to the conditions under which the user carries out the tasks that fall to him or her in the interaction. We apply this design model of system relevance to an image retrieval application whose corpus is indexed so as to cover several dimensions of relevance besides the thematic one. Our prototype shows how the system adapts to the situations that arise during a search session.
307

Har aktuell utveckling inom anknytningsteorin relevans för socialt arbete? / Does recent development within attachment theory have relevance for social work?

Alamaa, Helena, Bluhme, Magdalena January 2010 (has links)
No description available.
308

DLuftO – ett stöd för insatsdivisionen? / DLuftO – Supporting a combined operation?

Nyström, Henrik January 2010 (has links)
Försvarsmakten har skapat en doktrinhierarki där det redovisas hur Försvarsmakten konceptuellt skall genomföra insatser. Doktrin för luftoperationer är den del av doktrinen som specifikt riktar sig mot flygstridskrafter. I dagens insatsförsvar är tänkbara insatser väsensskilda från invasionsförsvarets insatser, och för detta har en ny typ av förband skapats, insatsförbandet. Flygstridskrafterna bidrar med bland annat Stridsflygenhet (SE), en insatsdivision utrustad med JAS 39. I denna uppsats avser jag undersöka huruvida doktrinen erbjuder stöd, konceptuellt eller praktiskt, till ledningen för en enhet av det ovan nämnda slaget. Jag har genom litteraturstudie tagit fram på vilket sätt DLuftO anger att flygstridskrafter skall nyttjas. Jag har intervjuat ledningen på SE 01 och SE 02 för att undersöka doktrinens roll i verksamheten. Därefter har jag genomfört en komparativ analys mellan verkligheten och doktrinen. Genom analysen har jag kunnat dra slutsatsen att DLuftO till begränsad del erbjuder stöd för divisionsledning i det moderna insatsförsvaret. / The Swedish Armed Forces has created a doctrine hierarchy which, on a conceptual level, regulates how the Armed Forces carry out operations. “Doctrine for air operations” (DLuftO) is Air Force specific and forms a part of the overall doctrine of the Swedish Armed Forces. In the Swedish Air Force the focus has shifted from national defence to international operations. This has created new tasks and demands, which has led to the creation of a new type of unit that must be prepared to meet the demands of a combined mission in international operations. One of these units is the SE unit (SE = Fighter Unit), a JAS 39 squadron with air-to-air, air-to-ground and reconnaissance capabilities. In this essay, I intend to investigate whether the doctrine offers support, conceptually or practically, to the commanding officers of a fighter unit of this kind.
I have investigated in what way DLuftO dictates how air power is intended to be used. I have interviewed commanding officers of SE 01 and SE 02 to investigate how the doctrine is used. This formed the basis for a comparative analysis between reality and the doctrine. I have been able to conclude that DLuftO offers a limited amount of support for the commanding officers of a fighter squadron in a combined military operation.
310

Designing Sociable Technologies

Barraquand, Remi 02 February 2012 (has links) (PDF)
This thesis investigates the design of sociable technologies and is divided into three main parts, described below. In the first part, we introduce sociable technologies. We review the definition of technology and propose categories of technologies according to the motivation underlying their design: improvement of control, improvement of communication, or improvement of cooperation. Sociable technologies are then presented as an extension of techniques to improve cooperation. The design of sociable technologies is then discussed, leading to the observation that the acquisition of social common sense is a key challenge in designing sociable technologies. Finally, polite technologies are presented as an approach to acquiring social common sense. In the second part, we focus on the premises for the design of sociable technologies. A key aspect of social common sense is the ability to act appropriately in social situations. Associating appropriate behaviour with social situations is presented as a key method for implementing polite technologies. Reinforcement learning is proposed as a method for learning such associations, and variations of this algorithm are experimentally evaluated. Learning the association between situation and behaviour relies on the strong assumption that mutual understanding of social situations can be achieved between technologies and people during interaction. We argue that in order to design sociable technologies, we must change the model of communication used by our technologies. We propose to replace the well-known code model of communication with the ostensive-inferential model proposed by Sperber and Wilson. Hypotheses raised by this approach are evaluated in an experiment conducted in a smart environment, where subjects, in groups of two or three, are asked to collaborate with the environment in order to teach it how to behave in an automated meeting. A novel experimental methodology is presented: The Sorceress of Oz.
The results collected from this experiment validate our hypothesis and provide insightful information for the design. We conclude by presenting what we believe are the premises for the design of sociable technologies. The final part of the thesis concerns an infrastructure for the design of sociable technologies. This infrastructure supports three fundamental components. First, it supports an inferential model of context; this model is presented, and a software architecture is proposed and evaluated in an experiment conducted in a smart environment. Second, it supports reasoning by analogy and introduces the concept of eigensituations; the advantages of this representation are discussed and evaluated in an experiment. Finally, it supports ostensive-inferential communication and introduces the concept of an ostensive interface.
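The situation-to-behaviour association that the thesis learns by reinforcement can be illustrated with a bandit-style update over discrete situations and behaviours; the reward signal and all names below are illustrative assumptions, not the thesis's implementation:

```python
import random

def train_association(episodes, n_situations, n_behaviours, reward, alpha=0.5):
    """Learn a table Q[situation][behaviour] from scalar feedback,
    using a simple bandit-style running update."""
    Q = [[0.0] * n_behaviours for _ in range(n_situations)]
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.randrange(n_situations)      # a social situation occurs
        b = rng.randrange(n_behaviours)      # explore a behaviour uniformly
        r = reward(s, b)                     # e.g. user approval or disapproval
        Q[s][b] += alpha * (r - Q[s][b])     # move the estimate toward the feedback
    return Q

def best_behaviour(Q, situation):
    """Behaviour currently judged most appropriate for the situation."""
    return max(range(len(Q[situation])), key=lambda b: Q[situation][b])
```

With a reward signal standing in for the users' approval, the table converges to the behaviour each situation calls for; the hard part, as the thesis argues, is agreeing with the user on what the current situation is.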
