71

Knowledge Patterns for the Web: extraction, transformation and reuse

Nuzzolese, Andrea Giovanni <1983> 19 May 2014 (has links)
This thesis investigates methods and software architectures for discovering the typical and frequently occurring structures used to organize knowledge on the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics on the Web (i.e., the knowledge soup problem) and the difficulty of drawing a relevant boundary around data so as to capture the meaningful knowledge with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method is based on a purely syntactic transformation of the original source to RDF, followed by a refactoring step that adds semantics to the RDF by selecting meaningful RDF triples. The second method draws boundaries around RDF data in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes along the path. We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two software architectural styles, i.e., Component-based and REST. Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool that exploits KPs to perform entity summarization.
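As a rough illustration of the type-path idea (a hypothetical sketch, not the thesis's implementation; the triple data and helper names are invented), the following Python snippet walks an RDF-like triple set and records the sequence of node types along each path:

```python
# Minimal sketch of extracting "type paths" from an RDF-like graph:
# a path through the data is abstracted into the sequence of rdf:types
# of the nodes it traverses. Data and names are illustrative only.

from collections import defaultdict

TYPE = "rdf:type"

triples = [
    ("db:Bologna", TYPE, "dbo:City"),
    ("db:Italy", TYPE, "dbo:Country"),
    ("db:UniBo", TYPE, "dbo:University"),
    ("db:Bologna", "dbo:country", "db:Italy"),
    ("db:UniBo", "dbo:city", "db:Bologna"),
]

types = defaultdict(set)
edges = defaultdict(list)
for s, p, o in triples:
    if p == TYPE:
        types[s].add(o)
    else:
        edges[s].append((p, o))

def type_paths(node, depth=2):
    """Enumerate (type, property, type, ...) sequences starting at node."""
    for t in types[node]:
        if depth == 0 or not edges[node]:
            yield (t,)
        else:
            for prop, obj in edges[node]:
                for tail in type_paths(obj, depth - 1):
                    yield (t, prop) + tail

for path in type_paths("db:UniBo"):
    print(" -> ".join(path))
# dbo:University -> dbo:city -> dbo:City -> dbo:country -> dbo:Country
```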
72

Learning with Kernels on Graphs: DAG-based kernels, data streams and RNA function prediction.

Navarin, Nicolò <1984> 19 May 2014 (has links)
In many application domains, data can be naturally represented as graphs. When analytical solutions for a given problem are unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form. Recently, some of them have been extended to deal directly with structured data. Among these techniques, kernel methods have shown promising results from both the computational-complexity and the predictive-performance points of view. Kernel methods make it possible to avoid an explicit mapping into vectorial form by relying on kernel functions, which, informally, are functions that compute a similarity measure between two entities. However, defining good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification points of view, on real-world datasets. The second contribution is making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information, but existing methods that consider it have prohibitively high computational complexity. We propose to apply kernel methods in this domain, obtaining state-of-the-art results.
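To make the kernel idea concrete (a minimal sketch, not one of the thesis's DAG-based kernels): one of the simplest valid graph kernels compares two labeled graphs through the dot product of their vertex-label histograms. The feature map here is deliberately trivial; real graph kernels, such as the subtree and DAG-based kernels the thesis studies, use much richer decompositions of the graph structure.

```python
# Toy graph kernel: the dot product of vertex-label histograms.
# A valid positive-semidefinite kernel, though far simpler than the
# DAG-based kernels described in the thesis.

from collections import Counter

def label_histogram_kernel(labels_g1, labels_g2):
    """k(G1, G2) = <phi(G1), phi(G2)> where phi counts vertex labels."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in h1.keys() & h2.keys())

# Two small molecule-like graphs, represented only by vertex labels here.
g1 = ["C", "C", "O", "H", "H"]
g2 = ["C", "O", "O", "H"]
print(label_histogram_kernel(g1, g2))  # 2*1 + 1*2 + 2*1 = 6
```

A kernel like this can be plugged into any kernel machine (e.g., an SVM) by evaluating it over all pairs of training graphs to build a Gram matrix.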
73

Opportunistic Data Gathering and Dissemination in Urban Scenarios

Bujari, Armir <1984> 19 May 2014 (has links)
In the era of the Internet of Everything, a user with a handheld or wearable device equipped with sensing capability has become a producer as well as a consumer of information and services. The more powerful these devices get, the more likely it is that they will generate and share content locally, leading to the presence of distributed information sources and a diminishing role for centralized servers. Current practice relies on infrastructure acting as an intermediary that provides access to the data. However, infrastructure-based connectivity might not always be available, or might not be the best alternative. Moreover, it is often the case that the data, and the processes acting upon them, are of local scope. A query about a nearby object, an information source, a process, an experience, an ability, etc. could be answered locally, without reliance on infrastructure-based platforms. The data might have temporal validity limited to, or bounded by, a geographical area and/or the social context in which the user is immersed. In this envisioned scenario, users could interact locally without the need for a central authority; hence the case for an infrastructure-less, provider-less platform. The data are owned by the users and consulted locally, as opposed to the current approach of making them available globally and retaining them indefinitely. From a technical viewpoint, this network resembles a Delay/Disruption Tolerant Network, where consumers and producers might be spatially and temporally decoupled, exchanging information with each other in an ad-hoc fashion. To this end, we propose novel data gathering and dissemination strategies for use in urban-wide environments that do not rely on strict infrastructure mediation. While preserving the general aspects of our study, and without loss of generality, we focus our attention on practical application scenarios that help us capture the characteristics of opportunistic communication networks.
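As a hypothetical illustration of opportunistic, store-carry-forward dissemination in a Delay/Disruption Tolerant Network (not the thesis's actual strategies; the contact model below is invented), the following sketch spreads a message epidemically through pairwise node contacts:

```python
# Toy epidemic dissemination over a sequence of opportunistic contacts.
# Nodes store messages and forward them whenever they meet; there is no
# infrastructure or central authority mediating the exchange.

import random

NUM_NODES = 20
random.seed(42)

buffers = {node: set() for node in range(NUM_NODES)}
buffers[0].add("msg-1")  # node 0 is the original producer

# A synthetic contact trace: (time, node_a, node_b) pairwise meetings.
contacts = [(t, random.randrange(NUM_NODES), random.randrange(NUM_NODES))
            for t in range(200)]

for t, a, b in contacts:
    if a == b:
        continue
    # Store-carry-forward: on contact, each node copies what the other lacks.
    buffers[a] |= buffers[b]
    buffers[b] |= buffers[a]

reached = sum(1 for node in buffers if "msg-1" in buffers[node])
print(f"delivered to {reached}/{NUM_NODES} nodes")
```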
74

Operating System Contribution to Composable Timing Behaviour in High-Integrity Real-Time Systems

Baldovin, Andrea <1983> 19 May 2014 (has links)
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on how software is deployed to it. We then focus on the role played by the real-time operating system in this pursuit. Initially we consider single-core processors and, becoming progressively less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute first in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end, we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results, we present findings from real-world case studies from the avionics industry.
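To give a feel for limited-preemptive scheduling (a behavioral toy model, entirely unrelated to the ORK+/TiCOS implementations; all task parameters are invented), the sketch below defers a high-priority arrival until the running task reaches the boundary of its non-preemptive region:

```python
# Toy fixed-priority scheduler with limited preemption: a running task
# may only be preempted at the boundary of its non-preemptive regions,
# which bounds the disturbance that preemption introduces.

import heapq

# (arrival, priority, name, list of non-preemptive region lengths)
tasks = [
    (0, 2, "low",  [3, 3]),   # lower number = higher priority
    (2, 1, "high", [2]),
]

ready, time, pending = [], 0, sorted(tasks)
while ready or pending:
    while pending and pending[0][0] <= time:
        arrival, prio, name, regions = pending.pop(0)
        heapq.heappush(ready, (prio, name, regions))
    if not ready:
        time = pending[0][0]
        continue
    prio, name, regions = heapq.heappop(ready)
    region = regions.pop(0)
    # Run one non-preemptive region to completion: even if a higher-
    # priority task arrives meanwhile, the preemption is deferred.
    print(f"t={time}: {name} runs non-preemptive region of {region}")
    time += region
    if regions:
        heapq.heappush(ready, (prio, name, regions))
```

In the trace, "high" arrives at t=2 but only starts at t=3, once "low" completes its current non-preemptive region.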
75

Leopoli-Cencelle beyond virtual reality: Documentation, interpretation and comprehension of a medieval city

De Padova, Maria Doriana <1977> 14 May 2013 (has links)
The medieval city of Leopoli-Cencelle (founded by Pope Leo IV in 854 AD, surrounded by walls, protected by towers, and located on top of a hill not far from Civitavecchia, a harbour city about 60 kilometres from Rome) has been studied and excavated with traditional methods since 1994. The stratigraphy brought to light the many transformations the city underwent during its roughly 600-year lifetime. Houses, towers, workshops and layers of habitation have so far been interpreted on the basis of traditional, two-dimensional excavation documentation, tied to paper records and drawings. The main goal of this work is to re-interpret the excavation data using digital technologies. The project combines laser scanning, Computer Vision (Structure from Motion) and 3D modelling. A three-dimensional visualization of the excavated dwellings, acquired with a laser scanner, is combined with simple 3D models that allow several interpretation hypotheses about the shape of the buildings and the use of their spaces. Modelling space and time as a room of possibilities combines the scanned data with philological 3D reconstructions, and makes it possible to switch between several possible interpretations of a building's features on the basis of its spaces, materials and construction techniques. The project aims to go beyond Virtual Reality, offering the opportunity to analyse the remains and re-interpret a building's function both during and after the excavation. From a research point of view, visualizing hypotheses during fieldwork fosters a deeper understanding of the archaeological context.
A second goal concerns communication with the public: ordinary visitors, non-archaeologists, are given the chance to understand and experience the archaeological interpretation process, receiving more than just one final hypothesis.
76

Privacy Preserving Enforcement of Sensitive Policies in Outsourced and Distributed Environments

Asghar, Muhammad Rizwan January 2013 (has links)
The enforcement of sensitive policies in untrusted environments is still an open challenge for policy-based systems. On the one hand, taking any appropriate security decision requires access to these policies. On the other hand, if such access is allowed in an untrusted environment, the policies themselves might leak confidential information. The key challenge is how to enforce sensitive policies and protect content in untrusted environments. Among untrusted environments, we mainly distinguish between outsourced and distributed ones; the most attractive paradigms for each are cloud computing and opportunistic networks, respectively. In this dissertation, we present the design, technical details and implementation of our proposed policy-based access control mechanisms for untrusted environments. First, we provide full confidentiality of access policies in outsourced environments, where service providers learn no private information about policies during the policy deployment and evaluation phases. Our architecture supports expressive policies and takes contextual information into account before making any access decision. The system entities do not share any encryption keys, and even if a user is deleted, the system can continue to operate without requiring any further action. For complex user management, we have implemented a policy-based Role-Based Access Control (RBAC) mechanism, where users are assigned roles, roles are assigned permissions, and users exercise permissions if their roles are active in the session maintained by the service providers. Finally, we support full-fledged RBAC policies by incorporating role hierarchies and dynamic security constraints. In opportunistic networks, we protect content by specifying expressive access control policies. In our approach, brokers match subscriptions against the policies associated with content without compromising the privacy of subscribers. As a result, an unauthorised broker neither gains access to the content nor learns the policies, while authorised nodes gain access only if they satisfy the fine-grained policies specified by publishers. Our system provides scalable key management, in which loosely-coupled publishers and subscribers communicate without any prior contact. Finally, we have developed a prototype of the system that runs on real smartphones and analysed its performance.
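As a plain illustration of the RBAC model the dissertation builds on (sessions, role activation, permission checks; this sketch has none of the privacy-preserving machinery, and all names are invented):

```python
# Minimal RBAC core: users -> roles -> permissions, with a session that
# tracks which of a user's roles are currently active. A permission check
# succeeds only through an active role.

user_roles = {"alice": {"doctor"}, "bob": {"nurse"}}
role_permissions = {
    "doctor": {"read_record", "write_record"},
    "nurse": {"read_record"},
}

class Session:
    def __init__(self, user):
        self.user = user
        self.active_roles = set()

    def activate(self, role):
        if role in user_roles[self.user]:
            self.active_roles.add(role)

    def check(self, permission):
        return any(permission in role_permissions[r] for r in self.active_roles)

s = Session("alice")
print(s.check("write_record"))  # False: no role activated yet
s.activate("doctor")
print(s.check("write_record"))  # True: permission via the active role
```

Role hierarchies and dynamic constraints, as in the dissertation, would extend `activate` and `check` with role inheritance and mutual-exclusion rules.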
77

Concept Challenge Game: a game for finding errors in a multilingual linguistic resource

Zhang, Hanyu January 2017 (has links)
Multilingual semantic linguistic resources are critical for many applications in Natural Language Processing (NLP). However, building large-scale lexico-semantic resources manually from scratch is extremely expensive, which has promoted the use of automatic extraction and merging algorithms. These algorithms have helped in the creation of large-scale resources, but they introduce many kinds of errors as a side effect. For example, the Chinese WordNet follows the WordNet structure and is generated via several algorithms; this automatic generation introduces errors such as wrong translations, typos and false mappings between multilingual terms. The quality of a linguistic resource directly influences the performance of the applications built on it, so the higher the quality, the better. Thus, finding errors is unavoidable. However, there is so far no efficient method for finding errors in a large-scale, multilingual resource. Manual validation by experts could be a solution, but it is very expensive, the obstacles being not only the scale of the dataset but also its multilinguality. Even crowdsourcing, a common method for large-scale and tedious tasks, is still costly. With this scenario in mind, we set out to find an effective, low-cost method for finding errors. We use games as our solution and adopt the Universal Knowledge Core (UKC), with respect to the Chinese language, as our case study. The UKC is a multi-layered, multilingual lexico-semantic resource in which common lexical elements from different languages are mapped to formal concepts. In this dissertation, we present a non-immersive game named Concept Challenge Game to find the errors present in an English-Chinese lexico-semantic resource. In this game, players face challenges on English synsets and have to choose the most appropriate option among the listed Chinese synsets; they are unaware that they are finding errors in the lexico-semantic resource. Our evaluation shows that people spend a significant amount of time playing and are able to find different erroneous mappings. Moreover, we extended our game to an Italian version; the results are promising as well, indicating that our game is able to uncover errors in multilingual linguistic resources.
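One plausible way to turn such game answers into error reports (our own hypothetical sketch; the thesis does not necessarily aggregate this way) is a simple majority vote per mapping: synsets where most players pick a different option than the resource's own mapping become candidates for review.

```python
# Hypothetical aggregation of game answers: for each English synset,
# compare the players' majority choice with the mapping stored in the
# resource and flag disagreements as candidate errors.

from collections import Counter

# (english_synset, chinese_synset_chosen_by_player) game answers
answers = [
    ("bank#finance", "yinhang"), ("bank#finance", "yinhang"),
    ("bank#finance", "hean"),
    ("bank#river", "hean"), ("bank#river", "hean"),
]

resource_mapping = {"bank#finance": "yinhang", "bank#river": "yinhang"}

votes = {}
for synset, choice in answers:
    votes.setdefault(synset, Counter())[choice] += 1

for synset, counter in votes.items():
    majority, _ = counter.most_common(1)[0]
    if majority != resource_mapping[synset]:
        print(f"candidate error: {synset} mapped to "
              f"{resource_mapping[synset]!r}, players prefer {majority!r}")
```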
78

Effectively Encoding SAT and Other Intractable Problems into Ising Models for Quantum Computing

Varotti, Stefano January 2019 (has links)
Quantum computing theory posits that a computer exploiting quantum mechanics can be strictly more powerful than classical models. Several quantum computing devices are under development, but current technology is limited by noise sensitivity. Quantum Annealing is an alternative approach that uses a noisy quantum system to solve a particular optimization problem. Problems such as SAT and MaxSAT must first be encoded as Ising models to make use of quantum annealers, and doing so while respecting the constraints and limitations of current hardware is a difficult task. This thesis presents an approach to encoding SAT and MaxSAT problems that can encode larger and more interesting problems for quantum annealing. A software implementation and a preliminary evaluation of the method are described.
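To illustrate the flavour of such encodings (a textbook-style sketch, not the thesis's method): a 2-literal clause (x or y) over 0/1 variables can be written as the penalty (1-x)(1-y), which is zero exactly when the clause is satisfied. Summing the clause penalties yields a QUBO objective whose minima are the (Max)SAT solutions; a QUBO maps to an Ising model via the substitution s = 2x - 1.

```python
# Encode a tiny SAT instance as a QUBO-style penalty function.
# Each 2-literal clause (a or b) contributes (1-a)*(1-b), which is 0 iff
# the clause is satisfied; minimizing the total penalty solves the instance.
# (3-literal clauses would need auxiliary variables; omitted here.)

from itertools import product

# Clauses over variables x0, x1: (x0 or x1) and (not x0 or x1).
# A literal is (index, negated?).
clauses = [
    [(0, False), (1, False)],
    [(0, True), (1, False)],
]

def literal_value(assignment, index, negated):
    v = assignment[index]
    return 1 - v if negated else v

def penalty(assignment):
    total = 0
    for clause in clauses:
        term = 1
        for index, negated in clause:
            term *= 1 - literal_value(assignment, index, negated)
        total += term  # 0 if some literal is true, 1 otherwise
    return total

for assignment in product((0, 1), repeat=2):
    print(assignment, "penalty =", penalty(assignment))
# Ising spins are obtained from QUBO bits via s = 2*x - 1.
```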
79

CMOS Readout Interfaces for MEMS Capacitive Microphones

Jawed, Syed Arsalan January 2009 (has links)
This dissertation demonstrates the feasibility of three novel low-power, low-noise schemes for the readout interfaces of MEMS Capacitive Microphones (MCMs), presenting their detailed design descriptions and measurement results as application-specific ICs (ASICs) in CMOS technology, developed to exploit their application scope in consumer electronics and hearing aids. MCMs are a new generation of acoustic sensors which offer significant scope for improving the miniaturization, integration and cost of acoustic systems by leveraging MEMS technology. Electret Condenser Microphones (ECMs) are the current market solution for acoustic applications; however, MCMs are being considered as the future microphone-of-choice for mobile phones in consumer electronics and for hearing aids in medical applications. The readout interface of an MCM in an acoustic system converts the output of the MEMS sensor into an appropriate electrical representation (analog or digital). The output of an MCM takes the form of capacitive variations in the femto-Farad range, which necessitates low-noise signal translation by the readout interface, together with a low-power profile for portable applications. The main focus of this dissertation is to develop novel readout schemes that are low-noise, low-power, low-cost and batch-producible, targeting the domains of consumer electronics and hearing aids. The readout interfaces presented in this dissertation consist of a front-end, which is a preamplifier, and a back-end, which converts the output of the preamplifier into a digital representation. The first interface presents a bootstrapped preamplifier and a third-order sigma-delta modulator (SDM) for analog-to-digital conversion. The preamplifier is bootstrapped to the MCM by tying its output to the sensor's substrate. This bootstrapping technique boosts the MCM signal by ~17dB and also makes the readout insensitive to the parasitic capacitors in the MCM electro-mechanical structure, achieving 55dBA/Pa of SNDR. The third-order low-power SDM converts the output of the preamplifier into an over-sampled digital bitstream, demonstrating a dynamic range (DR) of 80dBA. This ASIC operates from a 1.8V single supply with 460uA of total current consumption, thus highlighting the feasibility of a low-power integrated MCM readout interface. The ASIC was also acoustically characterized with an MCM, bonded together with it in a single package, demonstrating reasonable agreement with the expected performance. The second interface presents a readout scheme with force-feedback (FFB) for the MCM. The force-feedback is used to enhance the linearity of the MCM and to minimize the impact of drift in the sensor's mechanical parameters. Due to the unavailability of the sensor, the effect of FFB could not be measured with an MCM; however, the presented results point to a significant performance improvement through FFB. The preamplifier in this ASIC utilizes a high-gain OTA in a capacitive-feedback configuration to achieve parasitic-insensitive readout in an area- and power-efficient way, achieving 40dBA/Pa of SNDR. The digital output of the third-order SDM achieved 76dBA of DR and was also used to apply the electrostatic FFB by modulating the bias voltage of the MCM. A dummy branch with dynamic matching converted the single-ended MCM into a pseudo-differential sensor to make it compatible with force-feedback. This interface operates at a 3.3V supply and consumes a total current of 300uA. The third interface presents a chopper-stabilized multi-function preamplifier for the MCM.
Unlike typical MCM preamplifiers, this preamplifier employs chopper stabilization to mitigate low-frequency noise and offset, and it also embeds extra functionality in the preamplifier core, such as controllable gain, controllable offset and controllable high-pass filtering. The preamplifier consists of two stages: the first is a source follower buffering the MCM output into a voltage signal, and the second is a chopper-stabilized, controllable capacitive gain stage. It employs MΩ bias resistors, exploiting the Miller effect, to achieve consistent readout sensitivity over the audio band while avoiding conditionally-linear GΩ bias resistors. The offset-control functionality can be used to modulate idle tones in the subsequent sigma-delta modulator out of the audio band. The high-pass filtering functionality can be used to filter out low-frequency noise such as wind hum. This preamplifier operates at 1.8V and consumes a total current of 50uA, with an SNDR of 44dB/Pa, demonstrating the feasibility of a low-power, low-noise, multi-function preamplifier for the MCM sensor.
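For intuition about sigma-delta modulation (a first-order behavioral model only; the ASICs above use third-order loops, and all parameters here are invented), the following sketch converts a sampled waveform into an oversampled 1-bit stream whose local average tracks the input:

```python
# Behavioral model of a first-order sigma-delta modulator: an integrator
# accumulates the error between the input and the fed-back 1-bit output;
# the comparator's bitstream average then tracks the input signal.

import math

fs = 64_000          # oversampled rate (Hz), illustrative only
f_in = 1_000         # input tone (Hz)
n = 2048

integrator, feedback, bits = 0.0, 0.0, []
for i in range(n):
    x = 0.5 * math.sin(2 * math.pi * f_in * i / fs)
    integrator += x - feedback
    out = 1.0 if integrator >= 0 else -1.0
    feedback = out
    bits.append(out)

# Crude decimation: a moving average recovers the input from the bitstream.
window = 16
recovered = [sum(bits[i:i + window]) / window
             for i in range(0, n - window, window)]
print([round(v, 2) for v in recovered[:8]])
```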
80

Using Formal Methods for Building more Reliable and Secure e-voting Systems

Weldemariam, Komminist Sisai January 2010 (has links)
Deploying a system in a safe and secure manner requires ensuring technical and procedural levels of assurance, also with respect to social and regulatory frameworks. This is because threats and attacks may derive not only from pitfalls in complex security-critical systems, but also from ill-designed procedures. However, existing methodologies are not mature enough to embrace procedural implications and the need for a multidisciplinary approach to the safe and secure operation of systems. This is particularly true of electronic voting (e-voting) systems. This dissertation proceeds along two lines. First, we propose an approach to guarantee reasonable security of the overall system by performing formal procedural security analysis. We apply existing techniques and define novel methodologies and approaches for the analysis and verification of procedurally rich systems. This includes not only the definition of adequate modeling conventions, but also of general techniques for the injection of attacks and for the transformation of process models into representations that can be given as input to model checkers. This makes it possible to understand and highlight how the switch to a new technological solution changes security, with the ultimate goal of defining the procedures regulating the system and its processes so as to ensure a sufficient level of security for both. Second, we investigate the use of formal methods to study and analyze the strengths and weaknesses of currently deployed e-voting systems, in order to build the next generation of such systems. More specifically, we show how formal verification techniques can be used to model and reason about the security of an existing e-voting system, reusing the methodology proposed for procedural security analysis. The practical applicability of the approaches is demonstrated in several case studies from the domain of public administration in general and e-voting in particular. With this, it becomes possible to build more secure, reliable and trustworthy e-voting systems.
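As a toy version of this pipeline (our own illustrative sketch; the dissertation's models and tools are far richer), one can encode a procedure as a labelled transition system, inject an attack action, and let a reachability check, the essence of model checking, decide whether a bad state becomes reachable:

```python
# Toy procedural-security check: encode a procedure as a transition
# system, inject an attack transition, and search for a violating state.

from collections import deque

# Nominal process: ballots are sealed before being transported.
transitions = {
    ("ballots_open", "seal"): "ballots_sealed",
    ("ballots_sealed", "transport"): "ballots_delivered",
}

# Injected attack: an insider transports the box while still unsealed.
attacked = dict(transitions)
attacked[("ballots_open", "transport_unsealed")] = "ballots_delivered_tampered"

def reachable(trans, start):
    """Breadth-first exploration of the state space (explicit-state check)."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _action), dst in trans.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

bad = "ballots_delivered_tampered"
print("nominal model violates property:", bad in reachable(transitions, "ballots_open"))
print("attacked model violates property:", bad in reachable(attacked, "ballots_open"))
```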
