121

Differential attacks using alternative operations and block cipher design

Civino, Roberto January 2018 (has links)
Block ciphers and their security are the main subjects of this work. The first part describes the impact of differential cryptanalysis, a powerful statistical attack against block ciphers, when differences on the message space are computed with operations other than the one used to perform the key addition. It is proven that when such an alternative difference operation is carefully designed, a cipher that is provably secure against classical differential cryptanalysis can nonetheless be attacked using the alternative difference. The second part presents a new design approach for the round functions of block ciphers. The proposed round functions can give the cipher a potentially better level of resistance against statistical attacks. It is also shown that the corresponding ciphers can be proven secure against a well-known algebraic attack based on the action of the permutation group generated by the round functions of the cipher.
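
The core idea, that the same map can behave very differently depending on which difference operation is used, can be illustrated with a small sketch. This is not the thesis's construction; it merely compares XOR differences with differences taken modulo 16 on a standard 4-bit S-box (PRESENT's, chosen only for convenience).

```python
# Difference distribution table (DDT) of a toy 4-bit S-box under two
# difference operations: the usual XOR and addition mod 16.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox, diff, inv_diff):
    """DDT for an arbitrary difference operation.

    diff(x, a) applies input difference a to x; inv_diff(y1, y0) recovers
    the output difference. For XOR both are just bitwise xor."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            b = inv_diff(sbox[diff(x, a)], sbox[x])
            table[a][b] += 1
    return table

xor_ddt = ddt(SBOX, lambda x, a: x ^ a, lambda y1, y0: y1 ^ y0)
mod_ddt = ddt(SBOX, lambda x, a: (x + a) % 16, lambda y1, y0: (y1 - y0) % 16)

# Maximum entry over nonzero input differences: higher means a stronger
# differential distinguisher under that operation.
print("max XOR DDT entry:   ", max(max(row) for row in xor_ddt[1:]))
print("max mod-16 DDT entry:", max(max(row) for row in mod_ddt[1:]))
```

When the two maxima differ sharply, an attacker equipped with the better-suited difference operation gains exactly the kind of advantage the first part of the thesis studies.
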
122

A Nomadicity-driven Negotiation Protocol, Tactics and Strategies for Interacting Software Agents.

Sameh, Abdel-Naby January 2010 (has links)
The rising integration of pocket computing devices into our daily lives has drawn the attention of researchers from different scientific backgrounds. The number of software applications that bring together advanced mobile services and Artificial Intelligence (AI) techniques is today quite remarkable and worth investigating. Cooperation, coordination and negotiation are focal points of AI, and much of the related research strengthens the link between sophisticated research outcomes and modern-life requirements such as serviceability on the move. Within Distributed Artificial Intelligence (DAI), much of the research conducted in Multi-Agent Systems (MASs) addresses the mutually beneficial agreements that a group of interacting autonomous agents is expected to reach. In our research, we view agents as transportable software packets, each representing a set of needs that a user of a pocket computing device demands from a remote service-acquisition platform. When a set of software agents attempts to reach an agreement, a certain level of cooperation must be established first; a negotiation process is then carried out. Depending on each agent's negotiation skills and considerations, the returns of each accomplished agreement can be maximized or minimized. In this thesis, we introduce a new negotiation model (i.e., a protocol, a set of tactics and a strategy) for software agents to employ while acquiring a service on behalf of users of pocket computing devices. The purpose of our model is to maximize the benefits of the interacting agents while considering the limitations of the communication technologies involved and the nomadic nature of the users they represent. We show how our model can be generically implemented, introduce two case studies developed with our industrial partner, and present these cases' experimental results before and after applying our negotiation model.
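
To make the ingredients of such a model concrete, here is a minimal sketch, our illustration rather than the thesis's protocol, of an alternating-offers negotiation in which each agent concedes over time. The concession curve is modeled loosely on the classic time-dependent tactics of Faratin et al. (beta < 1 concedes slowly, beta > 1 concedes quickly), a plausible choice for nomadic users on unreliable links, where long negotiations risk being cut off; all names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    reserve: float   # worst acceptable price
    target: float    # ideal price
    beta: float      # concession rate

    def offer(self, t: float) -> float:
        """Offer at normalized time t in [0, 1]: move from target toward
        reserve as the deadline approaches."""
        concession = t ** (1.0 / self.beta)
        return self.target + (self.reserve - self.target) * concession

def negotiate(buyer: Agent, seller: Agent, deadline: int = 20):
    for step in range(1, deadline + 1):
        t = step / deadline
        bid, ask = buyer.offer(t), seller.offer(t)
        if bid >= ask:                      # buyer now meets the seller's ask
            return (bid + ask) / 2, step    # split the remaining gap
    return None, deadline                   # no agreement before the deadline

buyer = Agent("buyer", reserve=100.0, target=60.0, beta=0.5)    # Boulware
seller = Agent("seller", reserve=70.0, target=120.0, beta=2.0)  # Conceder
price, rounds = negotiate(buyer, seller)
print(f"agreement at {price:.2f} after {rounds} rounds")
```

The strategy layer the abstract mentions would, on top of this, pick which tactic (and which beta) to use given the device's connectivity and the user's mobility.
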
123

Linguistically Motivated Reordering Modeling for Phrase-Based Statistical Machine Translation

Bisazza, Arianna January 2013 (has links)
Word reordering is one of the most difficult aspects of Statistical Machine Translation (SMT) and an important factor in its quality and efficiency. While short- and medium-range reordering is reasonably well handled by the phrase-based approach (PSMT), long-range reordering still represents a challenge for state-of-the-art PSMT systems. As a major cause of this problem, we point out the inadequacy of existing reordering constraints and models for coping with the reordering phenomena that occur between distant languages. On the one hand, the reordering constraints used to control translation complexity appear too coarse-grained. On the other hand, the reordering models used to score different reordering decisions during translation are not discriminative enough to effectively guide the search over very large sets of hypotheses. In this thesis we propose several techniques to improve the definition of the reordering search space in PSMT by exploiting prior linguistic knowledge, so that long-range reordering may be adequately handled without sacrificing efficiency. In particular, we focus on Arabic-English and German-English: two language pairs characterized by uneven distributions of reordering phenomena, with long-range movements concentrated in a few patterns. All our techniques aim at improving the definition of the reordering search space by exploiting prior linguistic knowledge, but they do so by different means: chunk-based reordering rules and word reordering lattices, modified distortion matrices, and early reordering pruning. Through extensive experiments, we show that our techniques can significantly advance the state of the art in PSMT for these challenging language pairs. When compared with a popular tree-based SMT approach, our best PSMT systems achieve comparable or higher reordering accuracy while being considerably faster.
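
The coarse-grained constraint the abstract criticizes is easy to state. A minimal sketch (our simplification, not the thesis's exact formulation): a phrase-based decoder typically allows a new source phrase to start at most a fixed number of words away from the end of the last translated phrase.

```python
# Hard distortion limit, the standard reordering constraint in PSMT decoders.
# A uniform limit like this is what makes long-range movement problematic:
# raising it explodes the search space, lowering it rules the correct word
# order out entirely.

def within_distortion_limit(last_end: int, next_start: int,
                            distortion_limit: int = 6) -> bool:
    """last_end: source position right after the last covered phrase;
    next_start: first source position of the candidate phrase."""
    return abs(next_start - last_end) <= distortion_limit

# A German verb-final clause may need the verb moved ~10 positions to reach
# its English position; a default limit of 6 simply forbids that hypothesis.
print(within_distortion_limit(last_end=1, next_start=11))  # False
```

The thesis's chunk-based rules, lattices and modified distortion matrices can be read as ways of replacing this uniform limit with linguistically informed, pattern-specific permissions.
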
124

Provenance in Open Data Entity-Centric Aggregation

Abdelhamid Abdelnaby, Moaz January 2015 (has links)
An increasing number of web services today require combining data from several data providers into an aggregated database, and this aggregation is usually based on the linked data approach. The entity-centric model, on the other hand, is a promising data model that outperforms the linked data approach because it solves the lack of explicit semantics and the semantic heterogeneity problems. However, current open data, available on the web as raw datasets, cannot be used in the entity-centric model until it has gone through an import process that extracts the data elements and inserts them correctly into the aggregated entity-centric database. It is essential to certify the quality of these imported data elements, especially the background-knowledge part that acts as input to semantic computations, because its quality directly affects the quality of the web services built on top of it. Furthermore, aggregating entities and their attribute values from different sources raises three problems: the need to trace the source of each element, the need to trace the links between entities that can be considered equivalent, and the need to handle possible conflicts between different values imported from various data sources. In this thesis, we introduce a new model to certify the quality of a background knowledge base that separates linguistic and language-independent elements. We also present a pipeline for importing entities from open data repositories that makes the implicit semantics explicit and eliminates semantic heterogeneity. Finally, we show how to trace the source of attribute values coming from different data providers; how to choose a strategy for handling possible conflicts between these values; and how to keep the links between identical entities that represent the same real-world entity.
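
A minimal sketch of the last two problems (our naming and data layout, not the thesis's API): each attribute value keeps its provenance, and a pluggable strategy decides which value the aggregated entity exposes when providers disagree.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class SourcedValue:
    value: str
    source: str      # data provider the value came from
    fetched_at: int  # e.g., a Unix timestamp

def latest_wins(values: list[SourcedValue]) -> SourcedValue:
    """Conflict strategy 1: trust the most recently fetched value."""
    return max(values, key=lambda v: v.fetched_at)

def prefer_source(ranking: list[str]) -> Callable[[list[SourcedValue]], SourcedValue]:
    """Conflict strategy 2: trust providers in a fixed priority order."""
    rank = {src: i for i, src in enumerate(ranking)}
    return lambda values: min(values, key=lambda v: rank.get(v.source, len(rank)))

# Two hypothetical providers disagree on a town's population.
values = [
    SourcedValue("16,312", source="provider-A", fetched_at=1420070400),
    SourcedValue("16,455", source="provider-B", fetched_at=1451606400),
]
print(latest_wins(values))                    # provider-B's value
print(prefer_source(["provider-A"])(values))  # provider-A's value
```

Because every exposed value still carries its `source` field, the provenance trace survives whichever strategy is chosen.
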
125

On Semi-isogenous Mixed Surfaces

Cancian, Nicola January 2017 (has links)
Let C be a compact Riemann surface and consider a finite group acting on C×C such that some elements exchange the factors, while the subgroup of elements that do not exchange the factors acts freely. We call the quotient a Semi-isogenous Mixed Surface. In this work we investigate these surfaces and explain how their geometry is encoded in the group. Based on this, we present an algorithm to classify the Semi-isogenous Mixed Surfaces with given geometric genus, irregularity and self-intersection of the canonical class. In particular, we give the classification of Semi-isogenous Mixed Surfaces with K^2 > 0 and holomorphic Euler-Poincaré characteristic equal to 1, where new examples of minimal surfaces of general type appear. Minimality of Semi-isogenous Mixed Surfaces is discussed using two different approaches. The first involves the study of the bicanonical system of such surfaces: we prove that the dimension of its first cohomology group can be related to the rank of a linear map that involves only curves. The second approach exploits the Hodge index theorem to bound the number of exceptional curves that can live on a Semi-isogenous Mixed Surface.
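
For readers who want the construction pinned down, here is a formal restatement in standard notation (the choice of symbols is ours, not necessarily the thesis's):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Let $C$ be a compact Riemann surface and let
$G \leq \operatorname{Aut}(C \times C)$ be a finite group whose action
mixes the two factors. Write
$G^0 := G \cap (\operatorname{Aut}(C) \times \operatorname{Aut}(C))$
for the index-two subgroup of elements preserving the factors.
If $G^0$ acts freely on $C \times C$, the quotient
\[
  X := (C \times C)/G
\]
is a \emph{semi-isogenous mixed surface}. The classification runs over
such $X$ with fixed geometric genus $p_g(X)$, irregularity $q(X)$ and
canonical self-intersection $K_X^2$; the main case treated is
$K_X^2 > 0$ with $\chi(\mathcal{O}_X) = 1$.
\end{document}
```
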
126

Application Interference in Multi-Core Architectures: Analysis and Effects

Kandalintsev, Alexandre January 2016 (has links)
Clouds are an irreplaceable part of many business applications. They provide tremendous flexibility and have given birth to many related technologies, such as Software as a Service (SaaS). One of the biggest strengths of clouds is load redistribution, scaling up and down on demand. This helps deal with varying loads, increases resource utilization and cuts electricity bills, while maintaining reasonable performance isolation. The last point is of particular interest to us. Most cloud systems are accounted and billed not by useful throughput but by resource usage. For example, a cloud provider may charge according to cumulative CPU time and/or average memory footprint. But this does not guarantee that an application realizes its full performance potential, because CPU and memory are shared resources. As a result, if many other applications are present, it may experience frequent execution stalls due to contention on the memory bus or cache pressure. The problem is increasingly pronounced because modern hardware is rapidly growing in density, so more and more applications are co-located. The performance degradation caused by co-locating applications is called application interference. In this work we study the reasons for interference in depth, as well as ways to mitigate it. The first part of the work is devoted to interference analysis and introduces a simple yet powerful empirical model of CPU performance that takes interference into account. The model is based on empirical observations and is built up by extrapolating from the two-task (trivial) case. In the following part we present a method for ranking virtual machines according to their average interference. The method is based on the analysis of performance counters. We first launch a set of very diverse benchmark programs (chosen to be representative of a wide range of programs) one by one, recording a large set of performance counters; this gives us their “ideal” (isolated) performance. We then run them in pairs to see the level of interference they create for each other. Once this is done, we calculate the average interference for each benchmark and, finally, the correlation between the average interference and the performance counters. The counters with the highest correlation are used as interference estimators. The final part deals with measuring interference in a production environment with affordable overhead. The technique is based on short (on the order of milliseconds) freezes of virtual machines to see how they affect other VMs (hence the method's name, Freeze'nSense). By comparing a VM's performance when the other VMs are active with its performance when they are frozen, it is possible to determine how much speed it loses by sharing hardware with other applications.
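
The counter-ranking step can be sketched in a few lines. This is our data layout with hypothetical numbers, not the thesis's toolchain: correlate each benchmark's average slowdown under co-location with the counters recorded in its isolated run, then keep the counters with the strongest correlation as interference estimators.

```python
import numpy as np

benchmarks = ["blas", "zip", "kvstore", "render"]

# Average interference per benchmark: pairwise slowdown relative to the
# isolated run, averaged over all co-runners (hypothetical values).
avg_interference = np.array([1.32, 1.05, 1.21, 1.11])

# Counters from the isolated runs, normalized per instruction (hypothetical).
counters = {
    "llc_misses":    np.array([0.031, 0.004, 0.022, 0.012]),
    "branch_misses": np.array([0.010, 0.011, 0.009, 0.012]),
    "bus_cycles":    np.array([0.210, 0.050, 0.170, 0.090]),
}

# Rank counters by the absolute Pearson correlation with average interference.
ranking = sorted(
    ((name, abs(np.corrcoef(vals, avg_interference)[0, 1]))
     for name, vals in counters.items()),
    key=lambda kv: kv[1], reverse=True,
)
for name, corr in ranking:
    print(f"{name}: |r| = {corr:.2f}")
```

In this toy data the memory-related counters dominate, mirroring the intuition that memory-bus and cache contention drive interference.
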
127

Security Risk Assessment Methods: An Evaluation Framework and Theoretical Model of the Criteria Behind Methods' Success

Labunets, Katsiaryna January 2016 (has links)
Over the past decades a significant number of methods to identify and mitigate security risks have been proposed, but there are few empirical evaluations showing whether these methods are actually effective. How, then, can practitioners decide which method is best for the security risk assessment of their projects? To this end, we propose an evaluation framework for comparing security risk assessment methods that evaluates the quality of the results of a method's application with the help of external industrial experts, and that can identify the aspects affecting the successful application of these methods. The results of applying the framework helped us build a model of the key aspects that impact the success of a security risk assessment. Among these aspects are i) the use of catalogues of threats and security controls, which can impact a method's actual effectiveness and perceived usefulness, and ii) the use of visual representations of risk models, which can positively impact a method's perceived ease of use but negatively affect its perceived usefulness if the visual representation is not comprehensible due to scalability issues. To further investigate these findings, we conducted additional empirical investigations of i) how different features of the catalogues of threats and security controls contribute to an effective risk assessment process for novices and experts in either domain or security knowledge, and ii) how comprehensible different representation approaches for risk models (e.g., tabular and graphical) are.
128

The project-based method for a competence-based approach: teaching computer science in Italian secondary schools

Giaffredo, Silvio January 2018 (has links)
The competence-based approach to education has been found to be effective for teaching. Some countries have adopted it and subsequently reshaped their school systems accordingly. Introduced in Italian secondary schools in 2010 by the Ministry for Education, the competence-based approach has only been partially adopted in classes. Our research aims at finding solutions that support teachers in Italian secondary schools in adopting this approach. A two-step approach has been investigated: 1) inclusion of teachers in the process of competence definition, and 2) support for teachers' activity during student projects. The study focuses on computer science teachers, who often teach through student projects. A software system for the participatory definition of competences has been set up, and a training course has been designed and implemented. Several student projects have been studied through observation of teachers and students in the classroom and in the laboratory. The results indicate a weak commitment on the part of teachers towards the competence-based approach. At the same time, steering student projects towards the project-based learning method encourages teachers to adopt a competence-based approach, provided the projects are carefully designed and effectively managed.
129

From states to objects and events through stratification: A formal account and an application to data analytics

Botti Benevides, Alessander January 2015 (has links)
In this thesis we are mainly interested in representing and stratifying objects, states and events. From an epistemological or perceptual perspective, states may be seen as "time-stamped data" or "basic observations". Perception and cognition organize sensory outputs by grouping them into units that allow us to interact with the world in a quick and fruitful way. Similarly, states can be organized by synchronically grouping them into complex configurations of the world, or by diachronically grouping them into events. We are especially interested in the logical forms of the existential dependencies between states, objects and events. In the view of some philosophers, the world is stratified into levels by means of existential dependencies. This view is also present in theories of granularity, where (i) an abstraction mechanism allows fine-grained (lower-level) entities to be simplified into coarser (higher-level) entities, stratifying entities into levels of abstraction; or (ii) data resolution is taken as a criterion for stratifying data into levels of granularity, reflecting a more epistemological perspective on levels. We present a framework for representing and stratifying objects, states and events in levels. We show that our theory supports different notions of level, and we suggest applications to data analytics.
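
As a taste of the logical forms involved, here is one common rendering of existential dependence from the formal-ontology literature (a textbook formulation, not necessarily the thesis's own axioms):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Writing $E!x$ for ``$x$ exists'' and $\Box$ for necessity, an entity $x$
\emph{specifically depends} on an entity $y$ when $x$ cannot exist unless
that very $y$ does:
\[
  \mathrm{SD}(x, y) \;:=\; \Box\,\bigl(E!x \rightarrow E!y\bigr).
\]
Generic dependence weakens this to the existence of \emph{some} instance
of a kind $\varphi$:
\[
  \mathrm{GD}(x, \varphi) \;:=\; \Box\,\bigl(E!x \rightarrow \exists y\,\varphi(y)\bigr).
\]
For example, the state ``this rose is red'' specifically depends on this
rose, while an event may only generically depend on the kinds of its
participants.
\end{document}
```
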
130

Event Based Media Indexing

Tankoyeu, Ivan January 2013 (has links)
Multimedia data, being multidimensional by nature, requires appropriate approaches to its organization and sorting. The growing number of sensors capturing the environmental conditions at the moment of media creation enriches data with context awareness. This unveils enormous potential for an event-centred multimedia processing paradigm. The essence of this paradigm lies in using events as the primary means for multimedia integration, indexing and management. Events have the ability to semantically encode the relationships between different informational modalities, which include, but are not limited to, time, space, and the agents and objects involved. As a consequence, media processing based on events facilitates information perception by humans and, in turn, decreases the individual’s effort in annotation and organization. Moreover, events can be used for the reconstruction of missing data and for information enrichment. The spatio-temporal component of events is a key to contextual analysis, and a variety of techniques have recently been presented that leverage contextual information for event-based analysis in multimedia. The content-based approach has demonstrated its weakness in the field of event analysis, especially for the event detection task; however, content-based media analysis is important for object detection and recognition and can therefore play a role complementary to that of event-driven context recognition. The main contribution of the thesis lies in the investigation of a new event-based paradigm for multimedia integration, indexing and management. In this dissertation we propose i) a novel model for event-based multimedia representation, ii) a robust approach for mining events from multimedia, and iii) the exploitation of detected events for data reconstruction and knowledge enrichment.
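
A minimal sketch of context-driven event mining (our heuristic for illustration, not the thesis's detection method): group a photo stream into candidate events whenever the gap between consecutive capture times exceeds a threshold. Real systems refine this with GPS distance and content features.

```python
from datetime import datetime, timedelta

def split_into_events(timestamps: list[datetime],
                      max_gap: timedelta = timedelta(hours=3)) -> list[list[datetime]]:
    """Partition capture times into events by temporal proximity."""
    events: list[list[datetime]] = []
    for ts in sorted(timestamps):
        if events and ts - events[-1][-1] <= max_gap:
            events[-1].append(ts)   # close in time to the last shot: same event
        else:
            events.append([ts])     # gap too large: start a new event
    return events

shots = [datetime(2013, 6, 1, 10, 0), datetime(2013, 6, 1, 10, 40),
         datetime(2013, 6, 1, 11, 5), datetime(2013, 6, 2, 18, 30),
         datetime(2013, 6, 2, 19, 10)]
for i, ev in enumerate(split_into_events(shots), 1):
    print(f"event {i}: {len(ev)} photos, {ev[0]} .. {ev[-1]}")
```
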
