211

A Nomadicity-driven Negotiation Protocol, Tactics and Strategies for Interacting Software Agents.

Sameh, Abdel-Naby January 2010 (has links)
The growing integration of pocket computing devices into our daily activities has attracted the attention of researchers from different scientific backgrounds. The number of software applications that bring together advanced mobile services and Artificial Intelligence (AI) research is remarkable and worth investigating. Cooperation, coordination and negotiation are among AI's focal points, and many related research efforts are strengthening the link between sophisticated research outcomes and modern life requirements, such as serviceability on the move. In Distributed Artificial Intelligence (DAI), much of the research conducted in Multi-Agent Systems (MASs) addresses the mutually beneficial agreements that a group of interacting autonomous agents is expected to reach. In our research, we view agents as transportable software packets, each representing a set of needs that a user of a pocket computing device demands from a remote service acquisition platform. When a set of software agents attempts to reach an agreement, a certain level of cooperation must be established first, and then a negotiation process is carried out. Depending on each agent's negotiation skills and considerations, the returns of each accomplished agreement can be maximized or minimized. In this thesis, we introduce a new negotiation model (i.e., a protocol, a set of tactics and a strategy) for software agents to employ while acquiring a service on behalf of users of pocket computing devices. The purpose of our model is to maximize the benefits of the interacting agents while considering the limitations of the communication technologies involved and the nomadic nature of the users they represent. We show how our model can be generically implemented. We then introduce two case studies developed with our industrial partner and present their experimental results before and after applying our negotiation model.
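As a rough illustration of the kind of agent-to-agent negotiation the abstract refers to, the sketch below shows a minimal alternating-offers exchange with a simple time-dependent concession tactic; the agents, reserve values and deadline are hypothetical and not taken from the thesis model.

```python
# Illustrative sketch (not the thesis protocol): a buyer agent representing a
# nomadic user negotiates a service price with a provider agent using a simple
# time-dependent concession tactic over a fixed number of rounds.

def concession_offer(reserve, aspiration, t, deadline, beta=1.0):
    """Offer at round t: concede from the aspiration toward the reserve as the deadline nears."""
    progress = (t / deadline) ** (1.0 / beta)  # beta < 1 concedes late, beta > 1 concedes early
    return aspiration + (reserve - aspiration) * progress

def negotiate(buyer_reserve=10.0, seller_reserve=6.0, deadline=10):
    """Alternating offers: agreement is reached when one side's offer satisfies the other."""
    for t in range(1, deadline + 1):
        buyer_bid = concession_offer(buyer_reserve, 2.0, t, deadline, beta=0.8)
        seller_ask = concession_offer(seller_reserve, 14.0, t, deadline, beta=1.2)
        if buyer_bid >= seller_ask:          # offer ranges overlap: close the deal
            return round((buyer_bid + seller_ask) / 2, 2), t
    return None, deadline                    # no agreement before the deadline

if __name__ == "__main__":
    price, rounds = negotiate()
    print(f"agreement at {price} after {rounds} rounds" if price else "no deal")
```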
212

Linguistically Motivated Reordering Modeling for Phrase-Based Statistical Machine Translation

Bisazza, Arianna January 2013 (has links)
Word reordering is one of the most difficult aspects of Statistical Machine Translation (SMT), and an important factor in its quality and efficiency. While short- and medium-range reordering is reasonably handled by the phrase-based approach (PSMT), long-range reordering still represents a challenge for state-of-the-art PSMT systems. As a major cause of this problem, we point out the inadequacy of existing reordering constraints and models to cope with the reordering phenomena occurring between distant languages. On one hand, the reordering constraints used to control translation complexity appear to be too coarse-grained. On the other hand, the reordering models used to score different reordering decisions during translation are not discriminative enough to effectively guide the search over very large sets of hypotheses. In this thesis we propose several techniques to improve the definition of the reordering search space in PSMT by exploiting prior linguistic knowledge, so that long-range reordering may be adequately handled without sacrificing efficiency. In particular, we focus on Arabic-English and German-English: two language pairs characterized by uneven distributions of reordering phenomena, with long-range movements concentrated in a few patterns. All our techniques aim at improving the definition of the reordering search space by exploiting prior linguistic knowledge, but they do so by different means: namely, chunk-based reordering rules and word reordering lattices, modified distortion matrices, and early reordering pruning. Through extensive experiments, we show that our techniques can significantly advance the state of the art in PSMT for these challenging language pairs. When compared with a popular tree-based SMT approach, our best PSMT systems achieve comparable or higher reordering accuracies while being considerably faster.
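To make the notion of reordering constraints concrete, the following minimal sketch computes the standard phrase-based distortion cost and applies a hard distortion limit; the span positions and the limit value are illustrative assumptions, not the constraints or models proposed in the thesis.

```python
# Illustrative sketch (not the thesis models): the standard phrase-based distortion
# penalty and a hard distortion limit, the kind of coarse reordering constraint the
# abstract argues is insufficient for long-range movement.

def distortion_cost(prev_end, next_start):
    """Jump distance between the end of the last covered source phrase and the next one."""
    return abs(next_start - (prev_end + 1))

def within_limit(prev_end, next_start, limit=6):
    """Hard constraint: prune hypotheses whose jump exceeds the distortion limit."""
    return distortion_cost(prev_end, next_start) <= limit

if __name__ == "__main__":
    # Translating a German verb-final clause often requires a long jump back to the verb.
    print(distortion_cost(prev_end=2, next_start=9))          # long jump: cost 6
    print(within_limit(prev_end=2, next_start=9, limit=6))    # True, just allowed
    print(within_limit(prev_end=2, next_start=12, limit=6))   # False, pruned
```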
213

Provenance in Open Data Entity-Centric Aggregation

Abdelhamid Abdelnaby, Moaz January 2015 (has links)
An increasing number of web services these days require combining data from several data providers into an aggregated database. Usually this aggregation is based on the linked data approach. On the other hand, the entity-centric model is a promising data model that outperforms the linked data approach because it solves the lack of explicit semantics and the semantic heterogeneity problems. However, current open data, which is available on the web as raw datasets, cannot be used in the entity-centric model before being processed by an import process that extracts the data elements and inserts them correctly into the aggregated entity-centric database. It is essential to certify the quality of these imported data elements, especially the background knowledge part which acts as input to semantic computations, because the quality of this part directly affects the quality of the web services built on top of it. Furthermore, the aggregation of entities and their attribute values from different sources raises three problems: the need to trace the source of each element, the need to trace the links between entities which can be considered equivalent, and the need to handle possible conflicts between different values when they are imported from various data sources. In this thesis, we introduce a new model to certify the quality of a background knowledge base which separates linguistic and language-independent elements. We also present a pipeline to import entities from open data repositories, to add the missing implicit semantics and to eliminate the semantic heterogeneity. Finally, we show how to trace the source of attribute values coming from different data providers; how to choose a strategy for handling possible conflicts between these values; and how to keep the links between identical entities which represent the same real-world entity.
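As an illustration of source tracing and conflict handling during aggregation, the sketch below attaches a provenance record to each imported attribute value and resolves conflicts with two simple strategies (majority vote and source trust); the entity, sources and values are hypothetical and not part of the thesis pipeline.

```python
# Illustrative sketch (not the thesis pipeline): attribute values annotated with their
# source, plus two simple conflict-resolution strategies for aggregated entities.
from collections import Counter

class Entity:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.values = {}          # attribute -> list of (value, source) pairs
        self.same_as = set()      # links to equivalent entities in other sources

    def add(self, attribute, value, source):
        self.values.setdefault(attribute, []).append((value, source))

    def resolve(self, attribute, strategy="vote", trust=None):
        candidates = self.values.get(attribute, [])
        if not candidates:
            return None
        if strategy == "vote":                       # majority of sources wins
            return Counter(v for v, _ in candidates).most_common(1)[0][0]
        if strategy == "trust" and trust:            # prefer the most trusted source
            return max(candidates, key=lambda vs: trust.get(vs[1], 0))[0]
        return candidates[0][0]                      # default: first imported value

if __name__ == "__main__":
    e = Entity("trento")
    e.add("population", 117417, "source_a")   # hypothetical values and source names
    e.add("population", 118902, "source_b")
    e.add("population", 118902, "source_c")
    print(e.resolve("population"))                                                # majority value
    print(e.resolve("population", "trust", {"source_a": 0.9, "source_b": 0.5}))  # most trusted value
```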
214

Application Interference in Multi-Core Architectures: Analysis and Effects

Kandalintsev, Alexandre January 2016 (has links)
Clouds are an irreplaceable part of many business applications. They provide tremendous flexibility and have given birth to many related technologies – Software as a Service (SaaS) and the like. One of the greatest strengths of clouds is load redistribution for scaling up and down on demand. This helps deal with varying loads, increase resource utilization and cut electricity bills while maintaining reasonable performance isolation. The last aspect is of particular interest to us. Most cloud systems are accounted and billed not by useful throughput, but by resource usage. For example, a cloud provider may charge according to cumulative CPU time and/or average memory footprint. But this does not guarantee that the application realized its full performance potential, because CPU and memory are shared resources. As a result, if there are many other applications, it could experience frequent execution stalls due to contention on the memory bus or cache pressure. The problem becomes more and more pronounced as modern hardware rapidly increases in density, leading to more co-located applications. The performance degradation caused by the co-location of applications is called application interference. In this work we study the reasons for interference in depth, as well as ways to mitigate it. The first part of the work is devoted to interference analysis and introduces a simple yet powerful empirical model of CPU performance that takes interference into account. The model is based on empirical observations and is built up by extrapolating from a two-task (trivial) case. In the following part we present a method for ranking virtual machines according to their average interference. The method is based on the analysis of performance counters. We first launch a set of very diverse benchmark programs (chosen to be representative of a wide range of programs) one by one, collecting a wide range of performance counters. This gives us their “ideal” (isolated) performances. Then we run them in pairs to see the level of interference they create for each other. Once this is done, we calculate the average interference for each benchmark. Finally, we calculate the correlation between the average interference and the performance counters. The counters with the highest correlation are to be used as interference estimators. The final part deals with measuring interference in a production environment with affordable overhead. The technique is based on short (on the order of milliseconds) freezes of virtual machines to see how they affect other VMs (hence the name of the method – Freeze’nSense). By comparing the performance of a VM when other VMs are active and when they are frozen, it is possible to determine how much speed it loses because of sharing hardware with other applications.
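The ranking step described above can be pictured with a small sketch: given per-benchmark average interference scores and counter readings from isolated runs, rank counters by correlation and keep the strongest as estimators; the numbers below are hypothetical and not measurements from the thesis.

```python
# Illustrative sketch (not the thesis method or data): given per-benchmark average
# interference scores and performance-counter readings from isolated runs, rank the
# counters by Pearson correlation to pick interference estimators.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical measurements for four benchmarks.
avg_interference = [0.05, 0.32, 0.18, 0.41]
counters = {
    "llc_misses_per_kinstr": [0.4, 5.1, 2.3, 6.8],
    "branch_mispred_rate":   [0.02, 0.01, 0.03, 0.02],
}

ranking = sorted(
    ((name, correlation(values, avg_interference)) for name, values in counters.items()),
    key=lambda item: abs(item[1]),
    reverse=True,
)
for name, r in ranking:
    print(f"{name}: r = {r:+.2f}")  # the top-ranked counter is a candidate interference estimator
```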
215

Security Risk Assessment Methods: An Evaluation Framework and Theoretical Model of the Criteria Behind Methods' Success

Labunets, Katsiaryna January 2016 (has links)
Over the past decades a significant number of methods to identify and mitigate security risks have been proposed, but there are few empirical evaluations that show whether these methods are actually effective. So how can practitioners decide which method is best for the security risk assessment of their projects? To this end, we propose an evaluation framework to compare security risk assessment methods that evaluates the quality of the results of method application with the help of external industrial experts and can identify aspects having an effect on the successful application of these methods. The results of applying the framework helped us build a model of the key aspects that impact the success of a security risk assessment. Among these aspects are i) the use of catalogues of threats and security controls, which can impact methods' actual effectiveness and perceived usefulness, and ii) the use of visual representations of risk models, which can positively impact methods' perceived ease of use, but negatively affect methods' perceived usefulness if the visual representation is not comprehensible due to scalability issues. To further investigate these findings, we conducted additional empirical investigations of i) how different features of the catalogues of threats and security controls contribute to an effective risk assessment process for novices and experts in either domain or security knowledge, and ii) how comprehensible different representation approaches for risk models (e.g., tabular and graphical) are.
216

The project-based method for a competence-based approach: teaching computer science in Italian secondary schools

Giaffredo, Silvio January 2018 (has links)
The competence-based approach to education has been found to be effective for teaching. Some countries have adopted it and subsequently reshaped their school systems accordingly. Introduced in Italian secondary schools in 2010 by the Ministry for Education, the competence-based approach has only been partially adopted in classes. Our research aims at discovering solutions to support teachers in Italian secondary schools in adopting this approach. A two-step approach has been investigated: 1) including teachers in the process of competence definition, and 2) supporting teachers' activity during student projects. The study focuses on computer science teachers, who often teach using student projects. A software system for the participatory definition of competences has been set up. A training course has been designed and implemented. Some student projects have been studied through teacher and student observation in the classroom and in the laboratory. The results indicate a weak commitment on the part of teachers towards the competence-based approach. At the same time, steering student projects towards the project-based learning method encourages teachers to adopt a competence-based approach, provided the projects are carefully designed and effectively managed.
217

From states to objects and events through stratification: A formal account and an application to data analytics

Botti Benevides, Alessander January 2015 (has links)
In this thesis, we are mainly interested in representing and stratifying objects, states, and events. From an epistemological or perceptual perspective, states may be seen as "time-stamped data" or "basic observations". Perception and cognition organize sensory outputs by grouping them into units that allow us to interact with the world in a quick and fruitful way. Similarly, states can be organized by synchronically grouping them into complex configurations of the world, or diachronically grouping them into events. We are especially interested in the logical forms of existential dependencies between states, objects and events. In the view of some philosophers, the world is stratified into levels by means of existential dependencies. This view is also present in theories of granularity, where (i) an abstraction mechanism allows for the simplification of fine-grained (lower-level) entities into coarser (higher-level) entities, stratifying entities in levels of abstraction; or (ii) data resolution is considered a criterion for stratifying data in levels of granularity, reflecting a more epistemological perspective on levels. We present here a framework for representing and stratifying objects, states, and events in levels. We show that our theory supports different notions of level, and suggest applications to data analytics.
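As a loose illustration of stratification by existential dependency, the sketch below assigns each entity a level equal to one plus the highest level of the entities it depends on; the example domain and dependency relation are hypothetical and do not reproduce the thesis formalism.

```python
# Illustrative sketch (not the thesis formalism): entities with explicit existential
# dependencies, stratified into levels by dependency depth - an event depends on its
# participants, which depend on nothing further here.

def level(entity, depends_on):
    """Level of an entity = 1 + the highest level among the entities it depends on."""
    deps = depends_on.get(entity, [])
    return 0 if not deps else 1 + max(level(d, depends_on) for d in deps)

if __name__ == "__main__":
    # Hypothetical domain: a 'kick' event depends on a ball, a player and a state
    # of the ball being in motion; that state in turn depends on the ball.
    depends_on = {
        "kick_event": ["ball", "player", "ball_in_motion"],
        "ball_in_motion": ["ball"],
    }
    for entity in ["ball", "player", "ball_in_motion", "kick_event"]:
        print(entity, "level", level(entity, depends_on))
```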
218

Event Based Media Indexing

Tankoyeu, Ivan January 2013 (has links)
Multimedia data, being multidimensional by its nature, requires appropriate approaches to its organization and sorting. The growing number of sensors for capturing the environmental conditions at the moment of media creation enriches data with context-awareness. This unveils enormous potential for an event-centred multimedia processing paradigm. The essence of this paradigm lies in using events as the primary means for multimedia integration, indexing and management. Events have the ability to semantically encode relationships between different informational modalities. These modalities can include, but are not limited to: time, space, involved agents and objects. As a consequence, media processing based on events facilitates information perception by humans. This, in turn, decreases the individual’s effort in annotation and organization processes. Moreover, events can be used for the reconstruction of missing data and for information enrichment. The spatio-temporal component of events is key to contextual analysis. A variety of techniques have recently been presented to leverage contextual information for event-based analysis in multimedia. The content-based approach has demonstrated its weakness in the field of event analysis, especially for the event detection task. However, content-based media analysis is important for object detection and recognition and can therefore play a role which is complementary to that of event-driven context recognition. The main contribution of the thesis lies in the investigation of a new event-based paradigm for multimedia integration, indexing and management. In this dissertation we propose i) a novel model for event-based multimedia representation, ii) a robust approach for mining events from multimedia and iii) the exploitation of detected events for data reconstruction and knowledge enrichment.
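As a simple illustration of spatio-temporal grouping of media into events, the sketch below clusters context-annotated items greedily by time gap and location distance; the thresholds and items are illustrative assumptions, not the model proposed in the dissertation.

```python
# Illustrative sketch (not the thesis model): media items carrying capture context are
# grouped into events when they fall within simple temporal and spatial thresholds.
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    path: str
    timestamp: float        # seconds since epoch
    lat: float
    lon: float

@dataclass
class Event:
    items: list = field(default_factory=list)

def group_into_events(items, max_gap_s=3600, max_dist_deg=0.05):
    """Greedy grouping: start a new event when time or space jumps too far."""
    events = []
    for item in sorted(items, key=lambda m: m.timestamp):
        if events:
            last = events[-1].items[-1]
            close_in_time = item.timestamp - last.timestamp <= max_gap_s
            close_in_space = (abs(item.lat - last.lat) <= max_dist_deg
                              and abs(item.lon - last.lon) <= max_dist_deg)
            if close_in_time and close_in_space:
                events[-1].items.append(item)
                continue
        events.append(Event(items=[item]))
    return events

if __name__ == "__main__":
    photos = [MediaItem("a.jpg", 0, 46.07, 11.12),
              MediaItem("b.jpg", 600, 46.07, 11.13),
              MediaItem("c.jpg", 90000, 45.44, 12.33)]   # next day, another city
    print(len(group_into_events(photos)))                 # 2 events
```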
219

Exploiting Business Process Knowledge for Process Improvement

Rodriguez, Carlos January 2013 (has links)
Processes are omnipresent in humans’ everyday activities: withdrawals from an ATM, loan requests from a bank, renewals of driver’s licenses, purchases of goods from online retail systems. In particular, the business domain has strongly embraced processes as an instrument to help in the organization of business operations, leading to so-called business processes. A business process is a set of logically related tasks performed to achieve a defined business outcome. Business processes have a big impact on the achievement of business goals and they are widely acknowledged as one of the most important assets of any organization, next to the organization’s customer base and, more recently, data. Thus, there is a high interest in keeping business processes performing at their best and improving those that do not perform well. Nowadays, business processes are supported by a wide range of enabling technologies, including Web services and business process engines, which enable the (partial) automation of processes. Information systems supporting the execution of processes typically store a wealth of process knowledge that includes process models, process progression information and business data. The availability of such process knowledge gives unprecedented opportunities to gain insight into business processes, which leads to the question of how to exploit this knowledge to facilitate the improvement of processes. In order to answer this question, we propose to exploit process knowledge from two different but complementary perspectives. In the first one, we take the process execution perspective and leverage the process execution data generated by information systems to analyze and understand the actual behavior of executed processes. In the second one, we take the process design perspective and propose to extract process model patterns from existing models for reuse in the design of processes. The final goal of this thesis is to facilitate process improvement by exploiting existing process knowledge not only for gaining insight into and understanding of processes but also for reusing the resulting knowledge in their improvement. We have successfully applied our approaches in the context of service-based business processes and assisted dataflow-based mashup development. In the former, we validated our approach through an end-user study of the usability and understandability of our approach and tools, while in the latter the evaluations were performed through experiments run on a dataset of models from the mashup tool Yahoo! Pipes.
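As a small illustration of gaining insight from process execution data, the sketch below counts direct-follows transitions in a toy execution log; the traces are hypothetical and the technique is only a generic starting point, not the analysis developed in the thesis.

```python
# Illustrative sketch (not the thesis approach): counting direct-follows transitions in
# a simple execution log, a basic way to get insight into the actual behaviour of
# executed processes before looking for improvements.
from collections import Counter

# Hypothetical traces: each list is one process execution, ordered by completion time.
traces = [
    ["receive_request", "check_credit", "approve", "notify"],
    ["receive_request", "check_credit", "reject", "notify"],
    ["receive_request", "check_credit", "approve", "notify"],
]

direct_follows = Counter(
    (a, b) for trace in traces for a, b in zip(trace, trace[1:])
)
for (a, b), count in direct_follows.most_common():
    print(f"{a} -> {b}: {count}")   # the most frequent transitions reveal the dominant path
```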
220

Supporting Concept Extraction and Identifier Quality Improvement through Programmers' Lexicon Analysis

Abebe, Surafel Lemma January 2013 (has links)
Identifiers play an important role in communicating the intentions associated with the program entities they represent. The information captured in identifiers supports programmers in (re-)building the "mental model" of the software and facilitates understanding. (Re-)building the "mental model" and understanding large software, however, is difficult and expensive. Besides, the effort involved in the process heavily depends on the quality of the programmers' lexicon used to construct the identifiers. This thesis addresses the problem of program understanding focusing on (i) concept extraction, and (ii) the quality of the lexicon used in identifiers. To address the first problem (concept extraction), two ontology extraction approaches exploiting the natural language information captured in identifiers and the structural information of the source code are proposed and evaluated. We have also proposed a method to automatically train a natural language analyzer for identifiers. The trained analyzer is used for concept extraction. The evaluation was conducted on a program understanding task, concept location. Results show that the extracted concepts increase the effectiveness of concept location queries. Besides extracting concepts from the source code, we have investigated information retrieval (IR) based techniques to separate domain concepts from implementation concepts. To address the second problem (quality of the lexicon used in identifiers), we have defined a publicly available catalog of lexicon bad smells (LBS) and developed a suite of tools to automatically detect them. LBS indicate potential lexicon construction problems that can be addressed through refactoring. The impact of LBS on concept location and the contribution they can give to fault prediction have been studied empirically. Results indicate that LBS refactoring has a significant positive impact on the IR-based concept location task and contributes to improving fault prediction when used in conjunction with structural metrics. In addition to detecting LBS in identifiers, we also try to fix them. We have proposed an approach which uses the concepts extracted from the source code to suggest names which can be used to complete or replace an identifier. The evaluation of the approach shows that it provides useful suggestions, which can effectively support programmers in writing consistent names.
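As a rough illustration of lexicon analysis on identifiers, the sketch below splits identifiers into terms and flags two hypothetical smells (single-letter terms and mixed naming conventions); these are illustrative checks, not items from the thesis' LBS catalog or its detection tools.

```python
# Illustrative sketch (not the thesis catalog or tools): splitting identifiers into
# lexicon terms and flagging two hypothetical "bad smells" - single-letter terms and
# identifiers that mix naming conventions.
import re

def split_identifier(identifier):
    """Split camelCase and snake_case identifiers into lowercase terms."""
    terms = []
    for part in re.split(r"_+", identifier):
        terms += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", part)
    return [t.lower() for t in terms if t]

def lexicon_smells(identifier):
    """Return the list of (hypothetical) smells detected in an identifier."""
    smells = []
    if any(len(t) == 1 for t in split_identifier(identifier)):
        smells.append("single-letter term")
    if "_" in identifier and re.search(r"[a-z][A-Z]", identifier):
        smells.append("mixed naming convention")
    return smells

if __name__ == "__main__":
    for name in ["parseHTTPResponse", "tmp_x", "get_userName"]:
        print(name, split_identifier(name), lexicon_smells(name))
```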
