1

Privacy Preserving Enforcement of Sensitive Policies in Outsourced and Distributed Environments

Asghar, Muhammad Rizwan January 2013
The enforcement of sensitive policies in untrusted environments is still an open challenge for policy-based systems. On the one hand, taking any appropriate security decision requires access to these policies. On the other hand, if such access is allowed in an untrusted environment, confidential information might be leaked by the policies. The key challenge is how to enforce sensitive policies and protect content in untrusted environments. In the context of untrusted environments, we mainly distinguish between outsourced and distributed environments. The most prominent paradigms for outsourced and distributed environments are cloud computing and opportunistic networks, respectively. In this dissertation, we present the design, technical details and implementation of our proposed policy-based access control mechanisms for untrusted environments. First, we provide full confidentiality of access policies in outsourced environments, where service providers do not learn private information about policies during either the policy deployment or the policy evaluation phase. Our proposed architecture supports expressive policies and takes contextual information into account before making any access decision. The system entities do not share any encryption keys, and even if a user is deleted, the system is still able to perform its operations without requiring any further action. For complex user management, we have implemented a policy-based Role-Based Access Control (RBAC) mechanism, in which users are assigned roles, roles are assigned permissions, and users may exercise permissions only if their roles are active in the session maintained by service providers. We then support full-fledged RBAC policies by incorporating role hierarchies and dynamic security constraints. In opportunistic networks, we protect content by specifying expressive access control policies. In our approach, brokers match subscriptions against policies associated with content without compromising the privacy of subscribers. As a result, an unauthorised broker neither gains access to content nor learns policies, while authorised nodes gain access only if they satisfy the fine-grained policies specified by publishers. Our system provides scalable key management, in which loosely coupled publishers and subscribers communicate without any prior contact. Finally, we have developed a prototype of the system that runs on real smartphones and analysed its performance.
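
The RBAC layer described in the abstract (users assigned to roles, roles to permissions, permissions exercisable only through roles activated in a provider-side session, plus role hierarchies) can be pictured with a few lines of code. The sketch below is a plain, unencrypted toy model assumed for illustration; it deliberately ignores the policy-confidentiality machinery that is the actual contribution of the dissertation, and all identifiers in it are invented.

```python
# Toy RBAC model: users -> roles, roles -> permissions, provider-side
# sessions with activated roles, and role hierarchies (senior roles
# inherit the permissions of their juniors).  Illustrative only.

class RBAC:
    def __init__(self):
        self.user_roles = {}   # user -> roles assigned to that user
        self.role_perms = {}   # role -> permissions granted directly
        self.juniors = {}      # senior role -> junior roles it inherits from
        self.sessions = {}     # session id -> (user, roles active in session)

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant(self, role, permission):
        self.role_perms.setdefault(role, set()).add(permission)

    def add_inheritance(self, senior, junior):
        self.juniors.setdefault(senior, set()).add(junior)

    def open_session(self, sid, user, requested_roles):
        # Only roles actually assigned to the user can be activated.
        assigned = self.user_roles.get(user, set())
        self.sessions[sid] = (user, {r for r in requested_roles if r in assigned})

    def _effective_perms(self, role, seen=None):
        # Permissions of the role plus, transitively, of its junior roles.
        seen = set() if seen is None else seen
        if role in seen:
            return set()
        seen.add(role)
        perms = set(self.role_perms.get(role, set()))
        for junior in self.juniors.get(role, set()):
            perms |= self._effective_perms(junior, seen)
        return perms

    def check(self, sid, permission):
        # Grant only if some role active in the session carries the permission.
        _, active = self.sessions.get(sid, (None, set()))
        return any(permission in self._effective_perms(r) for r in active)


rbac = RBAC()
rbac.grant("nurse", "read_record")
rbac.grant("doctor", "prescribe")
rbac.add_inheritance("doctor", "nurse")       # doctors inherit nurse permissions
rbac.assign_role("alice", "doctor")
rbac.open_session("s1", "alice", {"doctor"})
print(rbac.check("s1", "read_record"))        # True, via the role hierarchy
print(rbac.check("s1", "prescribe"))          # True
print(rbac.check("s1", "delete_record"))      # False
```

The hierarchy is resolved at check time by walking from each active role down to its junior roles, which is one simple way to realise permission inheritance.
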
2

Concept challenge game: a game used to find errors from a multilingual linguistic resource

Zhang, Hanyu January 2017
Multilingual semantic linguistic resources are critical for many applications in Natural Language Processing (NLP). However, building large-scale lexico-semantic resources manually from scratch is extremely expensive, which has promoted the use of automatic extraction and merging algorithms. These algorithms make it feasible to create large-scale resources, but they introduce many kinds of errors as a side effect. For example, Chinese WordNet follows the WordNet structure and is generated via several algorithms; this automatic generation introduces errors such as wrong translations, typos and false mappings between multilingual terms. The quality of a linguistic resource directly influences the performance of the applications built on it, so it should be as high as possible. Thus, finding errors is unavoidable. However, until now there has been no efficient method for finding errors in a large-scale, multilingual resource. Manual validation by experts could be a solution, but it is very expensive; the obstacles come not only from the scale of the dataset but also from its multilinguality. Even though crowdsourcing is a method for solving large-scale and tedious tasks, it is still costly. With this scenario in mind, we look for an effective method that can help us find errors at low cost. We use games as our solution and adopt the Universal Knowledge Core (UKC), with respect to the Chinese language, as our case study. The UKC is a multi-layered multilingual lexico-semantic resource in which common lexical elements from different languages are mapped to formal concepts. In this dissertation, we present a non-immersive game named Concept Challenge Game to find the errors that exist in the English-Chinese lexico-semantic resource. In this game, players face challenges built from English synsets and have to choose the most appropriate option from the listed Chinese synsets. The players are unaware that, by playing, they are finding errors in the lexico-semantic resource. Our evaluation shows that people spend a significant amount of time playing and are able to find different erroneous mappings. Moreover, we extended the game to an Italian version; the results are promising as well, indicating that our game can uncover errors in multilingual linguistic resources.
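
Reading the abstract, the core validation step is an aggregation one: many players answer the same English-to-Chinese challenge, and mappings that players systematically avoid or contradict are flagged as candidate errors. A minimal sketch of that vote-aggregation idea, under the assumption that each logged answer records the challenged synset pair and whether the player confirmed it, could look like the following; the threshold and field names are illustrative, not taken from the thesis.

```python
from collections import defaultdict

def flag_suspicious_mappings(answers, min_votes=5, max_agreement=0.4):
    """Aggregate game answers into per-mapping agreement scores.

    answers: iterable of (english_synset, chinese_synset, confirmed), where
    `confirmed` is True if the player picked this Chinese synset as the right
    counterpart of the English one.  Mappings with enough votes but low
    agreement are returned as likely errors.
    """
    votes = defaultdict(lambda: [0, 0])      # mapping -> [confirmations, total]
    for en, zh, confirmed in answers:
        votes[(en, zh)][1] += 1
        if confirmed:
            votes[(en, zh)][0] += 1

    suspicious = []
    for mapping, (yes, total) in votes.items():
        if total >= min_votes and yes / total <= max_agreement:
            suspicious.append((mapping, yes / total, total))
    return sorted(suspicious, key=lambda item: item[1])

# Toy log: players almost never confirm the second mapping, so it gets flagged.
log = ([("en:bank#1", "zh:银行#1", True)] * 8
       + [("en:bank#1", "zh:河岸#1", False)] * 6
       + [("en:bank#1", "zh:河岸#1", True)])
print(flag_suspicious_mappings(log))
```

Low agreement is only a signal rather than proof of an error, so in practice flagged mappings would still be reviewed before the resource is corrected.
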
3

Effectively Encoding SAT and Other Intractable Problems into Ising Models for Quantum Computing

Varotti, Stefano January 2019
Quantum computing theory posits that a computer exploiting quantum mechanics can be strictly more powerful than classical models. Several quantum computing devices are under development, but current technology is limited by noise sensitivity. Quantum annealing is an alternative approach that uses a noisy quantum system to solve a particular optimization problem. Problems such as SAT and MaxSAT need to be encoded into this form to make use of quantum annealers, and doing so while respecting the constraints and limitations of current hardware is a difficult task. This thesis presents an encoding approach for SAT and MaxSAT that makes it possible to tackle larger and more interesting problems on quantum annealers. A software implementation and a preliminary evaluation of the method are described.
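
As a concrete illustration of the kind of translation the thesis is about, consider encoding a small (Max)SAT instance as a QUBO and then as an Ising model. The sketch below handles only clauses with at most two literals (longer clauses need auxiliary variables to stay quadratic, which is one of the difficulties such encodings must address) and makes no attempt to respect real hardware connectivity; it is an assumed, simplified encoding for illustration, not the one developed in the thesis.

```python
from collections import defaultdict

def clause_penalty(clause):
    """Penalty polynomial of a clause with <= 2 literals.

    A literal is (variable, positive).  The penalty is 1 exactly when the
    clause is unsatisfied: the product of (1 - x) for positive and (x) for
    negative literals.  Returned as (constant, linear dict, quadratic dict).
    """
    assert 1 <= len(clause) <= 2, "longer clauses need auxiliary variables"
    # Each factor is (c0, c1), meaning c0 + c1 * x for that variable.
    factors = [(1.0, -1.0) if positive else (0.0, 1.0) for _, positive in clause]
    const, linear, quad = 0.0, defaultdict(float), defaultdict(float)
    if len(clause) == 1:
        (v, _), (c0, c1) = clause[0], factors[0]
        const += c0
        linear[v] += c1
    else:
        (u, _), (v, _) = clause
        (a0, a1), (b0, b1) = factors
        const += a0 * b0
        linear[u] += a1 * b0
        linear[v] += a0 * b1
        quad[tuple(sorted((u, v)))] += a1 * b1
    return const, linear, quad

def sat_to_qubo(clauses):
    """Sum the clause penalties: minimising the QUBO maximises satisfied clauses."""
    const, linear, quad = 0.0, defaultdict(float), defaultdict(float)
    for clause in clauses:
        c, lin, q = clause_penalty(clause)
        const += c
        for v, w in lin.items():
            linear[v] += w
        for e, w in q.items():
            quad[e] += w
    return const, dict(linear), dict(quad)

def qubo_to_ising(const, linear, quad):
    """Substitute x = (1 + s) / 2 to obtain Ising fields h and couplings J."""
    offset, h, J = const, defaultdict(float), defaultdict(float)
    for v, w in linear.items():
        offset += w / 2
        h[v] += w / 2
    for (u, v), w in quad.items():
        offset += w / 4
        h[u] += w / 4
        h[v] += w / 4
        J[(u, v)] += w / 4
    return offset, dict(h), dict(J)   # energy = offset + sum h_i s_i + sum J_ij s_i s_j

# (x1 or x2) and (not x1 or not x2) and (x1): the optimum sets x1=1, x2=0.
clauses = [[("x1", True), ("x2", True)],
           [("x1", False), ("x2", False)],
           [("x1", True)]]
print(qubo_to_ising(*sat_to_qubo(clauses)))
```

On real annealers the resulting fields and couplings would still have to be scaled to the hardware's value range and mapped onto its qubit graph (minor embedding), which is where much of the practical difficulty lies.
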
4

Monolithic and partitioned Rosenbrock-based time integration methods for dynamic substructure tests

Jia, Chuanguo January 2010
Real-time testing with dynamic substructuring provides an efficient way to simulate the nonlinear dynamic behaviour of civil structures or mechanical facilities. In this technique, the test structure is divided into two substructures: the relatively crucial substructure is tested physically, while the other is modelled numerically in the computer. The key challenge is to ensure that both substructures interact in real time, in order to simulate the behaviour of the emulated structure. This places special demands on the time integration methods used and on their implementations. Researchers have devoted significant effort to implementing second-order integrators, such as Newmark integration methods, in a monolithic way, in which both substructures are integrated together. However, for large and complex structures, time integration methods are required to advance large-scale systems endowed with high-frequency response components, or mixed first- and second-order systems, as in the case of controlled systems. In these cases, the monolithic implementation of a second-order time integration method becomes inefficient or inaccurate. With these premises, the thesis adopts Rosenbrock-based time integration methods both for dynamic simulations of complex systems and for substructure tests, and in particular focuses on the development of monolithic schemes with subcycling strategies for nonlinear cases, and of partitioned methods with staggered and parallel solution procedures for linear and nonlinear cases. Initially, the Rosenbrock integration methods with one to three stages are introduced and their applicability to second-order systems is investigated in terms of accuracy, stability and high-frequency dissipation, including stability analysis of the one- and two-stage Rosenbrock methods via the energy approach and numerical experiments on an uncoupled spring-pendulum system. These methods are then implemented in a monolithic way for real-time substructure tests, also considering subcycling strategies. Real-time substructure tests considering nonlinearities both in the numerical and in the physical substructure were carried out to illustrate the performance of the monolithic methods. Moreover, three types of partitioned algorithms based on element-to-element partitioning are proposed. Two of them are based on acceleration continuity, with a staggered and a parallel solution procedure respectively, and one is based on velocity continuity and a projection method. Both the stability and the accuracy properties of the proposed algorithms are examined by means of analytical techniques and numerical studies on single-, two-, three- and four-degree-of-freedom model problems and a coupled spring-pendulum system. Finally, a novel test rig conceived to perform both linear and nonlinear substructure tests with different combinations of numerical and physical substructures is presented and discussed.
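
For readers unfamiliar with the family of integrators discussed above, the simplest member is the one-stage Rosenbrock (linearly implicit Euler) method: each step solves a single linear system built from the Jacobian instead of iterating a nonlinear solver, which is what makes the family attractive for real-time use. The sketch below applies it to a linear two-degree-of-freedom system written in first-order form; it is a generic textbook illustration with assumed matrices, not the monolithic or partitioned schemes developed in the thesis.

```python
import numpy as np

def rosenbrock_euler_step(f, jac, y, h, gamma=1.0):
    """One step of the one-stage Rosenbrock (linearly implicit Euler) method.

    Solves (I - gamma*h*J) k = h*f(y) once and sets y_next = y + k: a single
    linear solve per step, no nonlinear iteration.
    """
    n = y.size
    J = jac(y)
    k = np.linalg.solve(np.eye(n) - gamma * h * J, h * f(y))
    return y + k

# Assumed 2-DOF linear model  M u'' + C u' + K u = 0,  state y = [u, v].
M = np.diag([1.0, 1.0])
K = np.array([[20.0, -10.0],
              [-10.0, 10.0]])
C = 0.02 * K                               # light stiffness-proportional damping
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K,        -Minv @ C]])

f = lambda y: A @ y                        # first-order form of the equations of motion
jac = lambda y: A                          # exact Jacobian for a linear system

y, h = np.array([0.01, 0.0, 0.0, 0.0]), 0.01   # initial displacement on DOF 1
for _ in range(500):                       # 5 s of response at a 10 ms step
    y = rosenbrock_euler_step(f, jac, y, h)
print(y[:2])                               # displacements after 5 s
```

Because the example system is linear, the single linear solve per step is exact; for nonlinear substructures the Jacobian would be re-evaluated (or reused, in subcycling variants) as the step advances.
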
5

Hydrothermodynamics of a small alpine lake

Rizzi, Giuliano January 2004
The present research is aimed at investigating the hydrothermodynamic properties of small alpine lakes, with specific reference to the case of Lake Tovel (Trentino, Italy). The lake was internationally known for a spectacular reddening which occurred each year until 1964, when the phenomenon suddenly ceased. The work has been developed within the framework of the multidisciplinary research project "SALTO", funded by the "Provincia Autonoma di Trento", which is oriented to the study of the ecological, biological and physical aspects of Lake Tovel, along with its historical development. The thesis is divided into three parts. In the first part we summarize the fundamental processes which characterize physical limnology, with specific reference to those aspects that we have also investigated through field measurements. In the second part a detailed report is given on the procedures and results of the field campaign performed on Lake Tovel during 2003. In particular, the experimental techniques are illustrated along with the data post-processing procedures. The temperature data measured by thermistor chains and the velocity data collected through current meters and profilers are then analysed to provide a quantitative estimate of the main circulations that may develop within the lake under different meteorological conditions. Finally, the third part is devoted to numerical modelling: results of a 1-D vertical model and of a 3-D model are discussed. The former model (DYRESM), developed at the University of Western Australia, has been used to study the seasonal evolution of the lake; the latter has been specifically developed in the framework of this project to investigate the hydrodynamic behaviour of small lakes on short time scales. The simulation results show good agreement with the data of the field campaign.
6

CMOS Readout Interfaces for MEMS Capacitive Microphones

Jawed, Syed Arsalan January 2009
This dissertation demonstrates the feasibility of three novel low-power, low-noise readout interfaces for MEMS capacitive microphones (MCMs), presenting their detailed design descriptions and measurement results as application-specific ICs (ASICs) in CMOS technology, developed to exploit their application scope in consumer electronics and hearing aids. MCMs are a new generation of acoustic sensors which offer significant scope for improving the miniaturization, integration and cost of acoustic systems by leveraging MEMS technology. Electret condenser microphones (ECMs) are the current market solution for acoustic applications; however, MCMs are being considered as the future microphone of choice for mobile phones in consumer electronics and for hearing aids in medical applications. The readout interface of an MCM in an acoustic system converts the output of the MEMS sensor into an appropriate electrical representation (analog or digital). The output of an MCM consists of capacitive variations in the femto-Farad range, which necessitates low-noise signal translation by the readout interface, together with a low-power profile for portable applications. The main focus of this dissertation is to develop novel readout schemes that are low-noise, low-power, low-cost and batch-producible, targeting the domains of consumer electronics and hearing aids. The readout interfaces presented in this dissertation consist of a front-end, which is a preamplifier, and a back-end which converts the output of the preamplifier into a digital representation. The first interface presents a bootstrapped preamplifier and a third-order sigma-delta modulator (SDM) for analog-to-digital conversion. The preamplifier is bootstrapped to the MCM by tying its output to the sensor's substrate. This bootstrapping technique boosts the MCM signal by ~17dB and also makes the readout insensitive to the parasitic capacitors in the MCM electro-mechanical structure, achieving 55dBA/Pa of SNDR. The third-order low-power SDM converts the output of the preamplifier into an over-sampled digital bitstream demonstrating a dynamic range (DR) of 80dBA. This ASIC operates from a 1.8V single supply with 460uA of total current consumption, highlighting the feasibility of a low-power integrated MCM readout interface. The ASIC was also acoustically characterized with an MCM, bonded together in a single package, demonstrating reasonable agreement with the expected performance. The second interface presents a readout scheme with force-feedback (FFB) for the MCM. The force-feedback is used to enhance the linearity of the MCM and to minimize the impact of drift in the sensor's mechanical parameters. Due to the unavailability of the sensor, the effect of FFB could not be measured with an MCM; however, the presented results point to a significant performance improvement through FFB. The preamplifier in this ASIC utilizes a high-gain OTA in a capacitive-feedback configuration to achieve parasitic-insensitive readout in an area- and power-efficient way, achieving 40dBA/Pa of SNDR. The digital output of the third-order SDM achieved 76dBA of DR and was also used to apply the electrostatic FFB by modulating the bias voltage of the MCM. A dummy branch with dynamic matching converted the single-ended MCM into a pseudo-differential sensor to make it compatible with force-feedback. This interface operates from a 3.3V supply and consumes a total current of 300uA. The third interface presents a chopper-stabilized multi-function preamplifier for the MCM. Unlike typical MCM preamplifiers, this preamplifier employs chopper stabilization to mitigate low-frequency noise and offset, and it also embeds extra functionality in the preamplifier core, such as controllable gain, controllable offset and controllable high-pass filtering. The preamplifier consists of two stages: the first stage is a source follower buffering the MCM output into a voltage signal, and the second stage is a chopper-stabilized controllable capacitive gain stage. The preamplifier employs MΩ bias resistors to achieve consistent readout sensitivity over the audio band by utilizing the Miller effect, avoiding conditionally-linear GΩ bias resistors. The offset-control functionality of this preamplifier can be used to modulate idle tones in the subsequent sigma-delta modulator out of the audio band. The high-pass filtering functionality can be used to filter out low-frequency noise such as wind hum. This preamplifier operates at 1.8V and consumes a total current of 50uA, with an SNDR of 44dB/Pa, demonstrating the feasibility of a low-power, low-noise multi-function preamplifier for the MCM sensor.
7

Using Formal Methods for Building more Reliable and Secure e-voting Systems

Weldemariam, Komminist Sisai January 2010
Deploying a system in a safe and secure manner requires ensuring the technical and procedural levels of assurance, also with respect to social and regulatory frameworks. This is because threats and attacks may derive not only from pitfalls in complex security-critical systems, but also from ill-designed procedures. However, existing methodologies are not mature enough to embrace procedural implications and the need for a multidisciplinary approach to the safe and secure operation of systems. This is particularly common in electronic voting (e-voting) systems. This dissertation proceeds along two lines. First, we propose an approach to guarantee reasonable security of the overall system by performing formal procedural security analysis. We apply existing techniques and define novel methodologies and approaches for the analysis and verification of procedurally rich systems. This includes not only the definition of adequate modeling conventions, but also the definition of general techniques for the injection of attacks and for the transformation of process models into representations that can be given as input to model checkers. With this, it is possible to understand and highlight how the switch to a new technological solution changes security, with the ultimate goal of defining the procedures regulating the system and the system processes so that they ensure a sufficient level of security for the system as well as for its procedures. Second, we investigate the usage of formal methods to study and analyze the strengths and weaknesses of currently deployed e-voting systems in order to build the next generation of e-voting systems. More specifically, we show how formal verification techniques can be used to model and reason about the security of an existing e-voting system. To do so, we reuse the methodology proposed for procedural security analysis. The practical applicability of the approaches is demonstrated in several case studies from the domain of public administration in general and of e-voting systems in particular. With this, it becomes possible to build more secure, reliable, and trustworthy e-voting systems.
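
To make the abstract's idea of "attack injection into process models" slightly more concrete, the toy sketch below represents a fragment of a ballot-handling procedure as a transition system, injects a single asset-tampering action, and exhaustively explores the state space to see whether an integrity property can be violated. The procedure, the attack, and the property are all invented for illustration; the thesis works with far richer process models and with off-the-shelf model checkers rather than a hand-rolled search like this one.

```python
from collections import deque

def explore(initial, transitions, violated):
    """Breadth-first reachability search: return a trace of transition names
    leading to a state that violates the property, or None if none is
    reachable.  A hand-rolled stand-in for a model checker."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, trace = frontier.popleft()
        if violated(state):
            return trace
        for name, guard, effect in transitions:
            if guard(state):
                nxt = effect(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, trace + [name]))
    return None

# State: (phase, ballot_in_custody, counted_ballot) for a tiny invented procedure.
initial = ("cast", "candidate_A", None)

procedure = [
    ("transport", lambda s: s[0] == "cast",       lambda s: ("in_transit", s[1], s[2])),
    ("count",     lambda s: s[0] == "in_transit", lambda s: ("counted", s[1], s[1])),
]

# Injected attack: while the ballot is in transit, an insider may swap it.
attack = [
    ("swap_ballot", lambda s: s[0] == "in_transit",
                    lambda s: ("in_transit", "candidate_B", s[2])),
]

# Integrity property: a counted ballot must match what the voter cast.
violated = lambda s: s[0] == "counted" and s[2] != "candidate_A"

print(explore(initial, procedure, violated))           # None: nominal procedure is safe
print(explore(initial, procedure + attack, violated))  # ['transport', 'swap_ballot', 'count']
```

Real process models are, of course, vastly larger, which is why the approach in the dissertation relies on established model checkers rather than explicit enumeration.
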
8

STaRS.sys: designing and building a commonsense-knowledge enriched wordnet for therapeutic purposes

Lebani, Gianluca E. January 2012
This thesis investigates the possibility of exploiting human language resources and knowledge extraction techniques to build STaRS.sys, a software system designed to support therapists in the rehabilitation of Italian anomic patients. After an introductory section reviewing classification, assessment, and remediation methods for naming disorders, we analyze current trends in the exploitation of computers for the rehabilitation of language disorders. Starting from an analysis of the needs of speech therapists in their daily work with aphasic patients, the requirements for the STaRS.sys application are defined and a number of possible uses identified. To be able to implement these functionalities, STaRS.sys needs to be based on a lexical knowledge base encoding, in an explicit and computationally tractable way, at least the kind of semantic knowledge contained in so-called feature norms. As a backbone for the development of this semantic resource we chose to exploit the Italian MultiWordNet lexicon, derived from the original Princeton WordNet. We show that the WordNet model is relatively well suited to our needs, but that an extension of its semantic model is nevertheless required. Starting from the assumption that the feature types used to classify feature norms can be mapped onto semantic relations in a WordNet-like semantic network, we identified a set of 25 semantic relations that can cover all the information contained in these datasets. To demonstrate the feasibility of our proposal, we first asked a group of therapists to use our feature type classification to classify a set of 300 features. The analysis of the inter-coder agreement shows that the proposed classification can be used reliably by speech therapists. Subsequently, we collected a new set of Italian feature norms for 50 concrete concepts and analyzed the issues raised by the attempt to encode them into a version of MultiWordNet extended with the new set of relations. This analysis shows that, in addition to extending the relation set, a number of further modifications are needed, for instance to be able to encode negation, quantification, or the strength of a relation; information that, as we show, is not well represented in the existing feature norms either. After defining an extended version of MultiWordNet (sMWN), suitable for encoding the information contained in feature norms, we deal with the issue of automatically extracting such semantic information from corpora. We applied to an Italian corpus a state-of-the-art machine-learning-based method for the extraction of commonsense conceptual knowledge, previously applied to English. We tried a number of modifications and extensions of the original algorithm with the aim of improving its accuracy. Results and limitations are presented and analyzed, and possible future improvements discussed.
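
As a small illustration of the extension described above (feature norms encoded as typed relations with extra attributes such as negation and strength), the sketch below stores feature triples in a WordNet-like structure. The relation names, attribute fields and example features are assumptions made for the example, not the actual 25-relation inventory or the sMWN data model defined in the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    """One encoded feature: (concept) --relation--> (value), plus the extra
    attributes a plain WordNet-style relation cannot express."""
    relation: str            # e.g. "has_colour", "has_taste", "used_for"
    value: str               # target concept or literal value
    negated: bool = False    # e.g. "a lemon is NOT sweet"
    strength: float = 1.0    # e.g. share of speakers producing the feature

class ConceptStore:
    """A toy WordNet-like store: synset identifier -> list of typed features."""
    def __init__(self):
        self.features = {}

    def add(self, synset, relation, value, negated=False, strength=1.0):
        self.features.setdefault(synset, []).append(
            Feature(relation, value, negated, strength))

    def query(self, synset, relation=None, min_strength=0.0):
        return [f for f in self.features.get(synset, [])
                if (relation is None or f.relation == relation)
                and f.strength >= min_strength]

store = ConceptStore()
store.add("lemon#n#1", "has_colour", "yellow", strength=0.9)
store.add("lemon#n#1", "has_taste", "sweet", negated=True, strength=0.6)
store.add("lemon#n#1", "used_for", "lemonade", strength=0.4)
# Features strong enough to be shown to a patient for the concept "lemon".
print(store.query("lemon#n#1", min_strength=0.5))
```
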
9

Collecting Common Sense from Text and People

Herdagdelen, Amac January 2011
In order to display human-like intelligence, advanced computational systems should have access to the vast network of generic facts about the world that humans possess and that is known as commonsense knowledge (books have pages, groceries have prices, ...). Developers of AI applications have long been aware of this and, for decades, have invested in the laborious and expensive manual creation of commonsense knowledge repositories. An automated, high-throughput and low-noise method for commonsense collection still remains the holy grail of AI. Two relatively recent developments in computer science and computational linguistics may provide an answer to the commonsense collection problem: text mining from large amounts of data, something that has become possible with the massive availability of text on the Web, and human computation, a workaround technique that outsources the 'hard' sub-steps of a problem to people. Text mining has been very successful in extracting huge amounts of commonsense knowledge from data, but the extracted knowledge tends to be extremely noisy. Human computation is also challenging because people can provide unreliable data and may lack the motivation to solve problems on behalf of researchers and engineers. A clever, and recently popularized, technique to motivate people to contribute to such projects is to pose the problems as entertaining games and let people solve them while playing. This technique, commonly known as the games-with-a-purpose approach, has proved a very powerful way of recruiting laypeople on the Web. The focus of this thesis is to study methods for collecting common sense from people via human computation and from text via text mining, and to explore the opportunities in bringing these two types of methods together. The first contribution of my study is the introduction of a novel text miner trained on a set of known commonsense facts. The text miner, called BagPack, is based on a vector-space representation of concept pairs that also captures the relation between the members of each pair. BagPack harvests a large number of facts from Web-based corpora, and these facts constitute a -- possibly noisy -- set of candidate facts. The second contribution of the thesis is Concept Game, a game with a purpose: a simple slot-machine game that presents the candidate facts mined by BagPack to the players. Players are asked to recognize the meaningful facts and discard the meaningless ones in order to score points. As a result, laypeople verify the candidate set and we obtain a refined, high-quality dataset of commonsense facts. The evaluation of both systems suggests that text mining and human computation can work very efficiently in tandem. BagPack acts as an almost endless source of candidate facts which are likely to be true, and Concept Game taps laypeople to verify these candidates. Using Web-based text as a source of commonsense knowledge has several advantages over a purely human-computation system which relies on people as the source of information. Most importantly, we can tap domains that people do not talk about when they are asked directly. Also, relying on people only as a source of verification makes it possible to design fast-paced games with a low cognitive burden. The third issue addressed in this thesis is the subjective and stereotypical knowledge which constitutes an important part of our commonsense repository.
Regardless of whether one would like to keep such knowledge in an AI system, being able to identify subjectivity and detect stereotypical knowledge is an important problem. As a case study, I focused on stereotypical gender expectations about actions. For this purpose, I created a gold standard of actions (e.g., pay bill, become nurse) rated by human judges on whether they are perceived as masculine or feminine actions. I then extracted, combined, and evaluated two different types of data to predict the gold standard. The first type of data depends on the metadata provided by social media (in particular, the genders of users on a microblogging site like Twitter) and the second depends on Web-corpus-based pronoun/name gender heuristics. The metadata about Twitter users helps us identify which actions are mentioned more frequently by which gender. The Web-corpus-based score helps us identify which gender is more frequently reported to be carrying out a given action. The evaluation of both methods suggests that 1) it is possible to predict the human gold standard with considerable success, 2) the two methods capture different aspects of stereotypical knowledge, and 3) they work best when combined.
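
The last paragraph describes combining two independently derived gender scores (one from Twitter user metadata, one from corpus pronoun/name heuristics) and comparing them against a human-rated gold standard. A minimal sketch of such a combination, with z-score normalisation before averaging and a rank correlation against the gold ratings, is shown below; the normalisation step and the tiny example data are assumptions for illustration, not the actual procedure or results of the thesis.

```python
from statistics import mean, pstdev

def zscores(values):
    mu, sigma = mean(values), pstdev(values) or 1.0
    return [(v - mu) / sigma for v in values]

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(a, b):
    """Rank correlation (no tie correction; enough for a sketch)."""
    za, zb = zscores(ranks(a)), zscores(ranks(b))
    return mean(x * y for x, y in zip(za, zb))

# Toy data; higher score = "more feminine" under an assumed shared convention.
actions   = ["pay bill", "become nurse", "fix car", "wear lipstick"]
gold      = [ 0.1,  0.8, -0.7, 0.9]   # human judges
twitter   = [ 1.0,  0.3, -0.9, 1.5]   # Twitter-metadata-based score
webcorpus = [-0.5,  0.6, -0.2, 1.1]   # Web-corpus pronoun/name heuristic

# Normalise each signal, then average: a simple way to combine the two sources.
combined = [(zt + zw) / 2
            for zt, zw in zip(zscores(twitter), zscores(webcorpus))]

for name, scores in [("twitter", twitter), ("web corpus", webcorpus),
                     ("combined", combined)]:
    print(f"{name:10s} vs gold: spearman = {spearman(scores, gold):+.2f}")
```
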
10

Concept Search: Semantics Enabled Information Retrieval

Kharkevich, Uladzimir January 2010
The goal of information retrieval (IR) is to map a natural language query, which specifies the user's information needs, to a set of objects in a given collection which meet these needs. Historically, there have been two major approaches to IR, which we call syntactic IR and semantic IR. In syntactic IR, search engines use words or multi-word phrases that occur in the document and query representations, and the search procedure is principally based on the syntactic matching of these representations. The precision and recall achieved by such search engines can be negatively affected by the problems of (i) polysemy, (ii) synonymy, (iii) complex concepts, and (iv) related concepts. Semantic IR is based on deriving document and query representations through a semantic analysis of their contents using natural language processing techniques, and then retrieving documents by matching these semantic representations. Semantic IR approaches are developed to improve the quality of syntactic approaches but, in practice, the results of semantic IR are often inferior to those of syntactic IR. In this thesis, we propose a novel approach to IR which extends syntactic IR with semantics, thus addressing the problem of the low precision and low recall of syntactic IR. The main idea is to keep the same machinery which has made syntactic IR so successful, but to modify it so that, whenever possible (and useful), syntactic IR is substituted by semantic IR, thus improving system performance. As instances of the general approach, we describe semantics-enabled approaches to (i) document retrieval, (ii) document classification, and (iii) peer-to-peer search.
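
A crude way to picture "substituting syntactic IR with semantic IR whenever possible" is term mapping at indexing time: when a word can be resolved to a concept identifier it is indexed by the concept (so synonyms match and ambiguous senses can be separated), and when it cannot, the plain word is kept so nothing is lost with respect to the syntactic baseline. The sketch below does exactly that with a hand-made lexicon and set overlap as the scoring function; the lexicon, the disambiguation-free lookup, and the scoring are all assumed simplifications, not the actual Concept Search algorithms.

```python
# Map each token to a concept id when the (assumed) lexicon knows it, and fall
# back to the raw word otherwise: semantics where possible, syntax elsewhere.
LEXICON = {
    "dog": "concept:canine", "canine": "concept:canine", "hound": "concept:canine",
    "car": "concept:automobile", "automobile": "concept:automobile",
}

def index_terms(text):
    return {LEXICON.get(tok, tok) for tok in text.lower().split()}

def score(query, document):
    """Overlap between query and document terms (toy ranking function)."""
    q, d = index_terms(query), index_terms(document)
    return len(q & d) / max(len(q), 1)

docs = [
    "a small hound sleeping on the porch",
    "an automobile parked on the street",
    "porch furniture for sale",
]
query = "sleeping dog"
for doc in sorted(docs, key=lambda d: score(query, d), reverse=True):
    print(f"{score(query, doc):.2f}  {doc}")
```

Because unknown words fall back to their surface form, this scheme never does worse than plain word matching on vocabulary the lexicon does not cover, which mirrors the "keep the syntactic machinery" idea in the abstract.
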
