  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Ontological approach for LOD-sensitive BIM-data management

Karlapudi, Janakiram, Valluru, Prathap, Menzel, Karsten 13 December 2021 (has links)
The construction industry is a collaborative environment involving multiple disciplines and activities throughout the building lifecycle stages. This collaboration requires the iterative and coordinated exchange of information to significantly improve building design, construction, and management. Successful representation of these information refinements enables identification of the required level of detail (LOD) for data-sharing parameters between the multiple disciplines. Over the last decade, LOD has emerged as a promising approach for efficiently representing semantically rich BIM data at different levels. Despite this progress, efficient implementation in building lifecycle functionalities is still lacking because of their fundamental heterogeneity, versatility, and adaptability. The proposed approach enables the representation of LOD-sensitive BIM data through the formal definition of ontologies. The paper validates this approach based on the concept of competency questions and their respective SPARQL queries. Through demonstration and validation, the paper provides conceptual proof for the practical application of the developed approach. The proposed solution is also easily adaptable and applicable to current BIM processes, since the representation of BIM data in different ontologies (BOT, ifcOWL, etc.) is within reach.
Contents: 1 Introduction and Background; 2 State-of-the-art analysis (2.1 LOD systems; 2.2 Information Management); 3 Ontology-based LOD representation (3.1 LOD framework; 3.2 Exemplary demonstration); 4 BIM data management (4.1 LOD-sensitive BIM data; 4.2 LOD framework to processes); 5 Framework validation; 6 Conclusion and Future work; 7 Acknowledgement; 8 References
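The competency-question idea can be sketched in plain Python. The triples and vocabulary terms (`ex:hasProperty`, `ex:availableAtLOD`) below are invented for illustration, not taken from the paper's ontology, and LOD levels are matched exactly rather than cumulatively:

```python
# Minimal in-memory triple set: a BIM element whose properties are
# annotated with the LOD at which they become shareable.
TRIPLES = {
    ("ex:Wall_01", "rdf:type", "bot:Element"),
    ("ex:Wall_01", "ex:hasProperty", "ex:Width"),
    ("ex:Wall_01", "ex:hasProperty", "ex:FireRating"),
    ("ex:Width", "ex:availableAtLOD", "ex:LOD200"),
    ("ex:FireRating", "ex:availableAtLOD", "ex:LOD300"),
}

# The equivalent competency question as a SPARQL query (illustrative):
#   SELECT ?prop WHERE {
#     ex:Wall_01 ex:hasProperty ?prop .
#     ?prop ex:availableAtLOD ex:LOD300 .
#   }

def properties_at_lod(element, lod):
    """Competency question: which properties of `element` are shareable at `lod`?"""
    props = {o for s, p, o in TRIPLES if s == element and p == "ex:hasProperty"}
    return sorted(p for p in props
                  if (p, "ex:availableAtLOD", lod) in TRIPLES)
```

A real implementation would load the BOT or ifcOWL graph into an RDF store and run the SPARQL directly; the point here is only the shape of an LOD-sensitive competency question.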
312

Contributions To Ontology-Driven Requirements Engineering

Siegemund, Katja 29 April 2014 (has links)
Today, it is well known that missing, incomplete or inconsistent requirements lead to faulty software designs, implementations and tests, resulting in software of improper quality or safety risks. Thus, improved Requirements Engineering contributes to safer and better-quality software, reduces the risk of overrunning time and budgets and, most of all, decreases or even eliminates the risk of project failure. One significant problem requirements engineers have to cope with is inconsistency in the Software Requirements Specification. Such inconsistencies result from the acquisition, specification, and evolution of goals and requirements from multiple stakeholders and sources. In order to regain consistency, requirements information is removed from the specification, which often leads to incompleteness. Due to this causal relationship between consistency, completeness and correctness, we can formally improve the correctness of requirements knowledge by increasing its completeness and consistency. Furthermore, the poor quality of individual requirements is a primary reason why so many projects continue to fail and needs to be considered in order to improve the Software Requirements Specification. These flaws in the Software Requirements Specification are hard to identify by current methods and thus usually remain unrecognised. While the validation of requirements ensures that they are correct, complete, consistent and meet the customer and user intents, the requirements engineer is hardly supported by automated validation methods. In this thesis, a novel approach to automated validation and measurement of requirements knowledge is presented, which automatically identifies incomplete or inconsistent requirements and quality flaws. Furthermore, the requirements engineer is guided by knowledge-specific suggestions on how to resolve them.
For this purpose, a requirements metamodel, the Requirements Ontology, has been developed that provides the basis for the validation and measurement support. This requirements ontology is suited for Goal-oriented Requirements Engineering and allows for the conceptualisation of requirements knowledge, facilitated by ontologies. It provides a large set of predefined requirements metadata, requirements artefacts and various relations among them. Thus, the Requirements Ontology enables the documentation of structured, reusable, unambiguous, traceable, complete and consistent requirements as demanded by the IEEE specification for Software Requirements Specifications. We demonstrate our approach with a prototypical implementation called OntoReq. OntoReq allows for the specification of requirements knowledge while keeping the ontology invisible to the requirements engineer, and enables the validation of the knowledge captured within. The validation approach presented in this thesis is capable of being applied to any domain ontology. Therefore, we formulate various guidelines and use a continuous example to demonstrate the transfer to the domain of medical drugs. The Requirements Ontology as well as OntoReq have been evaluated by different methods. The Requirements Ontology has been shown to be capable of capturing the requirements knowledge of a real Software Requirements Specification, and OntoReq has proven feasible for use by a requirements engineering tool to highlight inconsistencies, incompleteness and quality flaws during real-time requirements modelling.
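As a rough illustration of the kind of completeness rule such a validation approach can check mechanically (this is a sketch, not OntoReq itself; the rule "every requirement must link to a goal, a stakeholder, and a priority" and all names are assumptions):

```python
# Each requirement records its links to goals and stakeholders plus a priority,
# mirroring relations a requirements ontology would make explicit.
requirements = {
    "R1": {"goals": ["G1"], "stakeholders": ["S1"], "priority": "high"},
    "R2": {"goals": [], "stakeholders": ["S2"], "priority": None},
}

def completeness_flaws(reqs):
    """Return (requirement id, flaw description) pairs for every violated rule."""
    flaws = []
    for rid, r in reqs.items():
        if not r["goals"]:
            flaws.append((rid, "no linked goal"))
        if not r["stakeholders"]:
            flaws.append((rid, "no stakeholder"))
        if r["priority"] is None:
            flaws.append((rid, "missing priority"))
    return flaws
```

In the ontology-based setting, such rules would be expressed over the Requirements Ontology (e.g. as class restrictions or queries) rather than hard-coded, so new rules can be added without changing the tool.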
313

Modelling User Tasks and Intentions for Service Discovery in Ubiquitous Computing

Ingmarsson, Magnus January 2007 (has links)
Ubiquitous computing (Ubicomp) continues to proliferate. Computational devices, multiple and ever growing in number, are now at users' disposal throughout the physical environment, while simultaneously being effectively invisible. Consequently, a significant challenge is service discovery. Services may, for instance, be physical, such as printing a document, or virtual, such as communicating information. Existing solutions, such as Bluetooth and UPnP, address part of the issue, specifically low-level physical interconnectivity. Still absent are solutions for high-level challenges, such as connecting users with appropriate services. In order to provide appropriate service offerings, service discovery in Ubicomp must take the users' context, tasks, goals, intentions, and available resources into consideration. The high-level service-discovery issue can be divided into two parts: inadequate service models, and insufficient common-sense models of human activities. This thesis contributes to service discovery in Ubicomp by arguing that a new layer is required to meet these high-level challenges. Furthermore, the thesis presents a prototype implementation of this new service-discovery architecture and model. The architecture consists of a hardware layer, an ontology layer, and a common-sense layer; this work addresses the latter two. Accordingly, the implementation is divided into two parts: Oden and Magubi. Oden addresses the issue of inadequate service models through a combination of service ontologies in concert with logical reasoning engines, while Magubi addresses the issue of insufficient common-sense models of human activities by using common-sense models in combination with rule engines. The synthesis of these two stages enables the system to reason about services, devices, and user expectations, as well as to make suitable connections to satisfy the users' overall goal.
Designing common-sense models and service ontologies for a Ubicomp environment is a non-trivial task. Despite this, we believe that, if done correctly, it might be possible to reuse at least part of the knowledge in different situations. With the ability to reason about services and human activities, it is possible to decide if, how, and where to present services to users. The solution is intended to off-load users in diverse Ubicomp environments as well as provide more relevant service discovery. Report code: LiU-Tek-Lic-2007:14.
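The two-layer idea can be sketched as follows. A service-model lookup (Oden-like) is combined with a common-sense rule mapping tasks to capabilities (Magubi-like); all service names, capabilities, and rules below are invented for illustration and greatly simplify the ontology- and rule-engine reasoning described above:

```python
# Service layer: what each device can do and where it is (Oden-like role).
SERVICES = [
    {"name": "hall-printer", "capability": "print", "location": "hall"},
    {"name": "lab-display", "capability": "display", "location": "lab"},
]

# Common-sense layer: which capabilities satisfy a user task (Magubi-like role).
RULES = {
    "share-document": ["print", "display"],
}

def discover(task, user_location):
    """Return services that can serve the task AND are co-located with the user."""
    needed = RULES.get(task, [])
    return [s["name"] for s in SERVICES
            if s["capability"] in needed and s["location"] == user_location]
```

The real architecture reasons over ontologies and rule engines rather than flat dictionaries, but the layering is the same: task and context on top, service capabilities underneath.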
314

Modeling Email Phishing Attacks

Almoqbil, Abdullah 12 1900 (has links)
Cheating, beguiling, and misleading information exist all around us; understanding deception and its consequences is crucial in our information environment. This study investigates deception in phishing emails that successfully bypassed the Microsoft 365 filtering system. We devised a model that explains why some people are deceived and how targeted individuals and organizations can prevent or counter attacks. The theoretical framework used in this study is Anderson's functional ontology construction (FOC). The methodology involves a quantitative and qualitative descriptive design, where the data source is a set of phishing emails archived from a Tier 1 university. We computed term frequency-inverse document frequency (tf-idf) and the distribution of words over documents (topic modeling) and found that the subjects of phishing emails targeting educational organizations relate to finances, jobs, and technologies. Our analysis also shows that the phishing emails in the dataset fall into six categories: reward, urgency, curiosity, fear, job, and entertainment. Results indicate that staff and students were primarily targeted, and a list of the verbs most used for deception was compiled. We uncovered the stimuli used by scammers and the types of reinforcement used to misinform targets and ensure successful trapping via phishing emails. We identified how scammers pick their targets and how they tailor and systematically orchestrate individual attacks on targets. The limitations of this study pertain to the sample size and the collection method. Future work will focus on implementing the derived model in software that can perform deception identification, target alerting, and protection against advanced email phishing.
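The tf-idf step can be sketched in a few lines of standard-library Python. The email subjects below are invented stand-ins for the study's corpus, and the weighting shown is the plain tf-idf formula, not necessarily the exact variant used in the study:

```python
import math
from collections import Counter

# Invented phishing-email subjects, echoing the finance/job/urgency themes.
subjects = [
    "urgent payroll update required",
    "job offer part time assistant",
    "urgent account verification required",
]

def tfidf(docs):
    """Return one {term: tf-idf weight} dict per document."""
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()}
            for doc in tokenized]

scores = tfidf(subjects)
```

Terms that recur across many subjects ("urgent", "required") are down-weighted relative to subject-specific terms ("payroll"), which is what surfaces the distinctive vocabulary of each lure category.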
315

Model driven context awareness

Verdaguer, Sergi Laencina January 2007 (has links)
The very nature of mobile phones makes them ideal vehicles for studying both individuals and organizations: people habitually carry a mobile phone with them and use it as a medium for much of their communication. The information available from today's phones includes the user's location, people nearby, and communication (call and SMS logs), as well as application usage and phone status (idle, charging, and so on). The main goal of this project is to combine some of the new voice-over-IP (VoIP) technologies with context-awareness services for mobile users and to create a demonstrator for the typical routine of a student in Kista. We used context awareness together with the SIP Express Router to make the system more intelligent for the user. This thesis examines the definition of CPL scripts and how they could exploit context information to provide SIP services useful to a student. A simple test was conducted to measure the overhead incurred by the SIP proxy's use of context awareness when processing CPL scripts.
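The kind of decision a CPL script lets the proxy make can be sketched as a plain function: route an incoming SIP call based on the callee's current context. The context fields, SIP addresses, and rules below are assumptions for illustration, not the thesis's actual scripts (real CPL is an XML call-processing language interpreted by the proxy):

```python
def route_call(caller, context):
    """Return a CPL-style action for an incoming SIP call, given callee context."""
    if context["activity"] == "lecture":
        # Don't disturb during lectures: divert everyone to voicemail.
        return ("redirect", "sip:voicemail@kth.example")
    if context["location"] == "lab" and caller in context["colleagues"]:
        # In the lab, accept calls only from known colleagues.
        return ("proxy", context["sip_address"])
    return ("reject", "busy")

ctx = {"activity": "idle", "location": "lab",
       "colleagues": {"sip:alice@kth.example"},
       "sip_address": "sip:student@kth.example"}
```

The measured overhead in the thesis corresponds to the proxy fetching this context (from the context-awareness service) before evaluating the script, rather than evaluating the script alone.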
316

Le suivi de l'apprenant dans le cadre du serious gaming / Learner monitoring in serious games

Thomas Benjamin, Pradeepa 10 April 2015 (has links)
"Serious gaming" is a recent approach that uses techniques from video games to conduct "serious" activities such as communication, awareness-raising, and learning. Serious games have now become an essential element of online training, and this thesis is situated in that context.
Indeed, whatever the preferred meaning, many research questions arise. In particular, how can we assess the knowledge acquired by the player/learner through the game? We focus on case-study-type games, used especially in management and medicine, and propose a method based on Evidence Centered Design to plan the monitoring of the learner for diagnostic purposes, for both the teacher and the learner. Actions in case studies are very close to professional actions and follow specific rules. We have chosen to represent them using Petri nets. To bring semantics to the Petri net analysis, we coupled it with an ontology of the domain and of game actions. The ontology provides a significant complement to the Petri net, which has a purely procedural dimension. We combine Petri nets and ontologies to produce performance indicators for this particular category of serious games. The study of errors led us to propose a particular taxonomy for serious games, inspired notably by work done in the field of security.
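The Petri-net side of this combination can be sketched minimally: transitions model game actions, the marking tracks the learner's progress, and a small ontology dict supplies semantic labels. The net, places, and action names below are invented, not the thesis's actual model:

```python
# Illustrative ontology: semantic labels for game-action transitions.
ONTOLOGY = {"t_examine": "ExaminePatient", "t_prescribe": "PrescribeDrug"}

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)      # place -> token count
        self.transitions = transitions    # name -> (input places, output places)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, t):
        """Fire a transition; refusing a disabled one flags an out-of-order action."""
        if not self.enabled(t):
            raise ValueError(f"{ONTOLOGY.get(t, t)} not allowed here")
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A two-step case study: the patient must be examined before prescribing.
net = PetriNet({"start": 1},
               {"t_examine": (["start"], ["examined"]),
                "t_prescribe": (["examined"], ["done"])})
```

A learner who attempts `t_prescribe` first triggers the error path, and the ontology label turns the raw transition name into a meaningful diagnostic for the teacher, which is exactly the complementarity the abstract describes.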
317

Memory Island : Visualizing Hierarchical Knowledge as Insightful Islands / Iles de mémoires : une nouvelle approche pour la visualisation intuitive des connaissances hiérarchiques

Yang, Bin 08 June 2015 (has links)
This thesis is devoted to the study of an original cartographic visualization approach named Memory Island.
We discuss how hierarchical knowledge can be meaningfully mapped and visualized as an insightful island. Our technique is inspired by the "loci" method (plural of the Latin "locus", meaning place or location) of the ancient "Art of Memory". A well-designed map in the mind can make sense of knowledge, which supports information-seeking tasks and helps to extend one's knowledge. To this end, the Memory Island technique consists of associating each entity of knowledge with a designated area on a virtual island. With the geographic visual metaphors we define, Memory Island can reveal phenomena in the knowledge that are often difficult to identify and understand. We discuss how we designed our visualization technique to automatically generate a truthful, functional, beautiful, insightful, and enlightening island, and we present its technical details. To make Memory Island more convenient for its users, we present our "overview+detail" interface, which supports visual exploration and knowledge analysis. We also demonstrate how to create knowledge maps using the Memory Island technique, with examples from different datasets in the Digital Humanities (the OBVIL and InPhO projects), e-books (the LOCUPLETO project), and other domains. We then propose our validation and evaluation protocols, with two preliminary user experiments. The results indicate that Memory Island provides advantages for non-experienced users tackling realistic browsing, helps them improve their performance in knowledge navigation and memorization tasks, and that most of them choose to use it for navigation and knowledge discovery. We conclude by summarizing our research and listing perspectives and future work that can build on the Memory Island technique.
318

Retrieval and Labeling of Documents Using Ontologies: Aided by a Collaborative Filtering

Alshammari, Asma 06 June 2023 (has links)
No description available.
319

[en] A SERVICE FOR MATCHMAKING OF LOCATION-DEPENDENT INTERESTS / [pt] UM SERVIÇO DE MATCHMAKING DE INTERESSES DEPENDENTES DE LOCALIZAÇÃO

RODRIGO PRESTES MACHADO 03 January 2006 (has links)
[en] This work presents a matchmaking service (MMS) whose goal is to enable meetings among co-located people who share similar interests. To make such meetings, and eventually collaboration, possible, the MMS analyses the profiles of co-located users carrying mobile devices and indicates which users have the highest degree of similarity among their profiles. Profiles are described as ontologies in OWL (Web Ontology Language) format, where subjects of interest may be related to symbolic regions.
The information about location is obtained through interaction of the MMS with the Location Inference Service (LIS), which is part of the MoCA (Mobile Collaboration Architecture). The MMS has a client/server architecture. The MMS server provides matchmaking in two modes: synchronous and asynchronous. The synchronous mode lets users query the MMS to find people with similar interests at their current location. The asynchronous mode lets users subscribe to the MMS to receive automatic notifications whenever another user with similar interests appears in their vicinity. The MMS client provides access to the service and allows editing of location-specific interest profiles.
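The core matching step can be sketched with location-specific interest sets and a simple Jaccard similarity; the user names, interests, and threshold below are invented, and a plain set measure stands in for the MMS's ontology-based profile comparison:

```python
# Location-specific interest profiles: user -> {symbolic region -> interests}.
profiles = {
    "ana":   {"library": {"owl", "rdf", "mobile"}},
    "bruno": {"library": {"rdf", "mobile", "java"}},
    "clara": {"cafe":    {"music"}},
}

def matches(user, location, threshold=0.3):
    """Rank co-located users by Jaccard similarity of their interests here."""
    mine = profiles[user].get(location, set())
    out = []
    for other, locs in profiles.items():
        if other == user:
            continue
        theirs = locs.get(location, set())
        union = mine | theirs
        score = len(mine & theirs) / len(union) if union else 0.0
        if score >= threshold:
            out.append((other, round(score, 2)))
    return sorted(out, key=lambda pair: -pair[1])
```

The synchronous mode corresponds to calling `matches` on demand; the asynchronous mode would re-run it whenever the location service reports a new arrival and notify subscribers above the threshold.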
320

Framework For Cost Modeling A Supply Chain

Yousef, Nabeel 01 January 2006 (has links)
Researchers are interested in value chain analysis to identify different opportunities for cost savings. The literature has been narrow in scope and has addressed specific problems; however, none has addressed the need for a general framework that can be used as a standard template for supply chain cost management and optimization, although Dekker and Goor (2000) stated that the goal was to develop a model allowing direct comparison of specific activities between firms, such as warehousing activity costs. There was no indication in the literature of a cost model that can identify all costs and cost drivers through the supply chain. Some firms built models to analyze the effect of changes in activities, but only for limited activities such as logistics. The purpose of this research is to create a general framework that can express the cost data of supply chain partners in similar terms. The framework lays out the common activities identified within the firm and the relationships of these activities between the partners of the supply chain, and it identifies the effect of changes in activities on other partners within the supply chain. Cost information will help in making decisions about pricing, outsourcing, capital expenditures, and operational efficiency. The framework will be able to track cost through the chain, which will improve the flexibility of the supply chain in responding to rapidly changing technology. It will also help in developing product strategy paradigms that encompass the dynamics of the market, in particular with respect to the technology adoption lifecycle.
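The core idea of expressing partners' costs in common activity terms can be sketched with an activity-based rollup; the partners, activities, driver volumes, and rates below are invented figures, not the dissertation's model:

```python
# Each partner's costs in shared activity terms: activity -> (driver volume, rate).
chain = {
    "supplier":     {"warehousing": (100, 2.0), "transport": (40, 5.0)},
    "manufacturer": {"warehousing": (60, 2.5), "assembly": (200, 1.5)},
}

def partner_cost(activities):
    """Activity-based cost of one partner: sum of volume x rate."""
    return sum(vol * rate for vol, rate in activities.values())

def chain_cost(chain):
    """Total cost across all partners of the supply chain."""
    return sum(partner_cost(acts) for acts in chain.values())

def reprice(chain, activity, new_rate):
    """Total chain cost if one common activity's rate changes for every partner."""
    updated = {p: {a: (v, new_rate if a == activity else r)
                   for a, (v, r) in acts.items()}
               for p, acts in chain.items()}
    return chain_cost(updated)
```

Because "warehousing" appears under the same name for both partners, a change in its cost driver propagates through the whole chain in one recomputation, which is exactly the cross-partner comparability the framework aims for.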
