61. Air pollution modelling over complex topography. Antonacci, Gianluca (January 2004)
The present study deals with air pollution modelling over complex topography, from both a phenomenological and a numerical point of view. The theme of air pollution modelling is approached first phenomenologically; a numerical approach to the solution of the advection-diffusion equation then follows. Two different methods have been explored: puff models and Lagrangian particle models. The Eulerian-Lagrangian puff model CALPUFF (released by Earth Tech) has been used as a reference: the closures and parametrizations adopted by this software have been tested over complex terrain, and some minor changes have been introduced into the original code. A further step was the development of a Lagrangian particle-tracking program, suitable for non-homogeneous, non-stationary flows and adapted to complex terrain, accounting for vertically skewed turbulence in any atmospheric stability class. The Langevin equation was solved following Thomson's (1987) approach. Special attention was paid to near-field dispersion processes. Lagrangian models are in fact the most advanced numerical schemes for pollutant transport simulations, but at present they are suitable only for short-term simulations, at least over complex terrain where high spatial resolution is needed. An extension of the Lagrangian model has therefore been developed using the so-called "kernel method"; this feature considerably improves calculation performance, dramatically reducing computation time, so that simulations become practicable for longer temporal scales. Nevertheless, the kernel method seems to lead to unreliable results for narrow valleys or very steep slopes, so its results cannot be generalized. Moreover, the problem of determining vertical profiles of turbulent diffusivity over complex orography has been addressed. Both a local approach and a global one (suitable for compact valleys) for estimating eddy diffusivity in valleys have been investigated; the former has been adopted in the Lagrangian model previously developed. Since atmospheric turbulence is mostly generated by the solar thermal flux, a procedure for calculating the effective solar radiation was developed. The method, which can be introduced into meteorological models that use complex orography as input, accounts for shadowed areas, soil coverage and the possible presence of clouds, which filter and reduce the incoming solar radiation. Tests have been carried out using a modified version of the CALMET model (Earth Tech Inc.). Results are in agreement with turbulence data acquired by means of a sonic anemometer during a field campaign performed by the Department. Finally, the analysis of near-field dispersion over complex terrain has been extended to the urban context, adopting essentially the same conceptual tools on a smaller scale. A finite-volume three-dimensional numerical model has been developed and tested by simulating the dispersion of traffic-derived pollutants in the town of Trento. For ground-level sources, the geometry of the domain and the emission conditions turn out to be very important with respect to meteorological conditions (especially atmospheric stability). The roughness, i.e. the buildings of the study area, has therefore been explicitly considered, using a high-resolution digital elevation map of the urban area. This approach has turned out to be necessary for near-field dispersion, when the emission source is located inside the roughness and the impact area falls entirely within the near field.
A comparison has been made between the predicted numerical solution and data measured by the air quality stations present in the urban area, showing good agreement. A further refinement of the study has led to the development of a two-dimensional x-z Lagrangian model at the "street scale", for the study of canyon effects, which tend to trap pollutants inside an urban canyon with behaviour that typically depends on geometric features, atmospheric turbulence and wind speed.
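As a hedged illustration of the Lagrangian particle-tracking idea described above, the sketch below (Python) integrates a one-dimensional Langevin equation for the vertical velocity of a single particle under homogeneous Gaussian turbulence, the simplest case in which Thomson's (1987) well-mixed condition is satisfied; the turbulence parameters, time step and reflecting ground boundary are illustrative assumptions, not values or choices taken from the thesis.

    import numpy as np

    def track_particle(n_steps=10_000, dt=0.1, sigma_w=0.5, T_L=100.0, z0=50.0):
        """Minimal 1-D Lagrangian particle tracker (illustrative only).

        Integrates dw = -(w/T_L) dt + sqrt(2 sigma_w^2 / T_L) dW for
        homogeneous Gaussian turbulence. sigma_w: std. dev. of vertical
        velocity [m/s]; T_L: Lagrangian time scale [s]; both invented here.
        """
        rng = np.random.default_rng(0)
        z, w = z0, 0.0
        path = np.empty(n_steps)
        for i in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))            # Wiener increment
            w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
            z += w * dt
            if z < 0.0:                                  # perfect reflection at the ground
                z, w = -z, -w
            path[i] = z
        return path

    heights = track_particle()
    print(f"mean height {heights.mean():.1f} m, spread {heights.std():.1f} m")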
62. On Neighbors, Groups and Application Invariants in Mobile Wireless Sensor Networks. Guna, Stefan-Valentin (January 2011)
The miniaturization and energy-efficient operation of wireless sensor networks (WSNs) provide unprecedented opportunities for monitoring mobile entities. The motivation for this thesis is drawn from real-world applications including wildlife monitoring, assisted living, and logistics. Mobility, however, unveils a series of problems that do not arise in fixed scenarios. From these applications we distill three of them, as follows. Neighbor discovery, or knowing the identity of surrounding nodes, is the precondition for any communication between nodes. In contrast to existing solutions, we provide a framework that approaches the problem from the perspectives of latency (the time required to detect contacts), lifetime (the time nodes are expected to last) and probability (the fraction of contacts guaranteed to be detected within a given latency). By formalizing neighbor discovery as an optimization problem, we obtain a significant improvement with respect to the state of the art. We offer a solver providing the optimal configuration and an implementation for popular WSN devices. Group membership, or knowing the identity of the transitively connected nodes, can be either the direct answer to a requirement (e.g., caring for people who are not self-sufficient) or a building block for higher-level abstractions. Earlier works on the same problem target either less constrained devices such as PDAs or laptops or, when targeting WSN devices, provide only post-deployment information on the group. Instead, we provide three protocols that cover the solution space. All our protocols empower each node with a run-time global view of the group composition. Finally, we focus on the behavior of the processes monitored by WSNs. We present a system that validates whether global invariants describing the safe behavior of a monitored system are satisfied. Although similar problems have been tackled before, the invariants we target are more complex, and our system evaluates them in the network, at run-time. We focus on invariants expressed as first-order logic formulas over the state of multiple nodes. The requirement for monitoring invariants arises in both fixed and mobile environments; we design and implement an efficient solution for each. Notably, the solution targeting mobility provides each node with an eventually consistent view of the satisfaction of the monitored invariants; in this context, the group membership algorithms play the role of global failure detectors.
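To make the latency/lifetime/probability trade-off concrete, here is a small sketch in Python of slotted randomized neighbor discovery, a deliberately simplified stand-in for the thesis's optimized framework: each node wakes in a slot with probability equal to its duty cycle, a contact is detected when both nodes are awake in the same slot, and the discovery probability within a latency budget then follows in closed form.

    def discovery_probability(duty_cycle: float, n_slots: int) -> float:
        """P(two nodes detect each other within n_slots), assuming each node
        independently wakes in any slot with probability duty_cycle and a
        contact is detected whenever both are awake in the same slot."""
        p_meet = duty_cycle ** 2                  # both awake in one slot
        return 1.0 - (1.0 - p_meet) ** n_slots

    # Example: 5% duty cycle (the lifetime knob) and 1-second slots.
    for n in (60, 600, 3600):                     # latency budgets: 1 min, 10 min, 1 h
        print(f"{n:>5} slots: P(discovery) = {discovery_probability(0.05, n):.3f}")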
63. Domain Modeling Theory and Practice. Das, Subhashis (January 2018)
Every day, huge amounts of data are captured and stored, whether through social initiatives, technological advancement or smart devices. This involves the release of data that differ in format, language, schema and standards across various types of user communities and organizations. The main challenge in this scenario lies in integrating such diverse data and in generating knowledge from the existing sources. Various methodologies for data modeling have been proposed by different research groups, under different approaches and based on the scenarios of different domains of application. However, few of these methodologies elaborate their procedural steps; as a result, there is a lack of clarity on how to handle the issues that occur in the different phases of domain modeling. The aim of this research is to present a scalable, interoperable, effective framework and a methodology for data modeling. The backbone of the framework is composed of two layers, schema and language, to tackle diversity. An entity-centric approach has been followed as the main notion of the methodology. A few aspects that have been especially emphasized are: modeling a flexible data integration schema, dealing with messy data sources, alignment with an upper ontology, and implementation. We evaluated our methodology from the user perspective to check its practicability.
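As a minimal, hypothetical illustration of the entity-centric notion (all names and fields below are invented for this sketch, not taken from the thesis): records about the same real-world entity arrive under different source schemas and are mapped onto a single entity type with canonical attributes.

    from dataclasses import dataclass

    @dataclass
    class School:                                 # target entity type (invented example)
        name: str
        latitude: float
        longitude: float

    def from_source_a(row: dict) -> School:       # source A: flat CSV-style keys
        return School(row["school_name"], float(row["lat"]), float(row["lon"]))

    def from_source_b(rec: dict) -> School:       # source B: nested JSON layout
        pos = rec["location"]
        return School(rec["title"], pos["y"], pos["x"])

    a = from_source_a({"school_name": "Liceo da Vinci", "lat": 46.07, "lon": 11.12})
    b = from_source_b({"title": "Liceo da Vinci", "location": {"y": 46.07, "x": 11.12}})
    assert a == b                                 # same entity, regardless of source schema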
64. Classifying semisimple orbits of theta-groups. Oriente, Francesco (January 2012)
I consider the problem of classifying the semisimple orbits of a theta-group. For this purpose, after a preliminary presentation of the theoretical subjects from which my problem arises, I first give an algorithm to compute a Cartan subspace; subsequently, I describe how to compute the little Weyl group.
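For readers outside the area, the standard setting (Vinberg's theory of theta-groups) can be summarized as follows; this is textbook background, not the thesis's own formulation:

    % Background: the standard Vinberg setting for theta-groups.
    Let $\mathfrak{g}$ be a complex semisimple Lie algebra and $\theta$ an
    automorphism of $\mathfrak{g}$ of finite order $m$, inducing the grading
    \[
      \mathfrak{g} = \bigoplus_{i \in \mathbb{Z}/m\mathbb{Z}} \mathfrak{g}_i,
      \qquad [\mathfrak{g}_i, \mathfrak{g}_j] \subseteq \mathfrak{g}_{i+j}.
    \]
    The \emph{theta-group} is the connected subgroup $G_0$ with Lie algebra
    $\mathfrak{g}_0$, acting on $\mathfrak{g}_1$. A \emph{Cartan subspace} is a
    maximal subspace $\mathfrak{c} \subseteq \mathfrak{g}_1$ of pairwise
    commuting semisimple elements, and the \emph{little Weyl group} is
    $W = N_{G_0}(\mathfrak{c}) / Z_{G_0}(\mathfrak{c})$; the semisimple
    $G_0$-orbits in $\mathfrak{g}_1$ are classified by the quotient
    $\mathfrak{c}/W$.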
65. Empirical Methods for Evaluating Vulnerability Models. Nguyen, Viet Hung (January 2014)
This dissertation focuses on the following research question: "how to independently and systematically validate empirical vulnerability models?". Based on a survey of past studies of the vulnerability discovery process, the dissertation points out several critical issues in the traditional methodology for evaluating the performance of vulnerability discovery models (VDMs). Such issues affected the conclusions of several studies in the literature. To address these pitfalls, a novel empirical methodology and a data collection infrastructure are proposed for conducting experiments that evaluate the empirical performance of VDMs. The methodology consists of two quantitative analyses, namely quality and predictability analyses, which enable analysts to study the performance of VDMs and to compare them effectively. The proposed methodology and the data collection infrastructure have been used to assess several existing VDMs on many major versions of the major browsers (i.e., Chrome, Firefox, Internet Explorer, and Safari). The extensive experimental analysis reveals an interesting finding about VDM performance in terms of quality and predictability: the simplest linear model is the most appropriate one for predicting the vulnerability discovery trend within the first twelve months after the release date of a browser version; beyond that, logistic models are more appropriate. The analyzed vulnerability data exhibit the phenomenon of after-life vulnerabilities, which are discovered for the current version but also attributed to browser versions that are out of support (dead versions). These vulnerabilities, however, may not actually exist, and may have an impact on past scientific studies or on compliance assessment. Therefore, this dissertation proposes a method to identify code evidence for vulnerabilities. The results of the experiments show that a significant number of vulnerabilities have been systematically over-reported for old versions of browsers. Consequently, old versions of software seem to have fewer vulnerabilities than reported.
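As a hedged sketch of the two model families compared here (Python; the cumulative counts are invented, and the logistic form below is the Alhazmi-Malaiya model, a common VDM choice rather than necessarily the dissertation's exact formulation):

    import numpy as np
    from scipy.optimize import curve_fit

    months = np.arange(1, 25)
    # Invented cumulative vulnerability counts for a hypothetical browser version.
    cum_vulns = np.array([ 3,  7, 12, 18, 25, 33, 40, 48, 55, 61, 67, 72,
                          76, 80, 83, 86, 88, 90, 91, 92, 93, 93, 94, 94])

    def linear(t, a, b):                      # simplest linear VDM
        return a * t + b

    def logistic(t, B, A, C):                 # Alhazmi-Malaiya: Omega(t) = B / (1 + C e^(-ABt))
        return B / (1.0 + C * np.exp(-A * B * t))

    p_lin, _ = curve_fit(linear, months, cum_vulns)
    p_log, _ = curve_fit(logistic, months, cum_vulns, p0=(100.0, 0.001, 20.0), maxfev=10_000)
    for name, f, p in (("linear", linear, p_lin), ("logistic", logistic, p_log)):
        sse = float(np.sum((f(months, *p) - cum_vulns) ** 2))
        print(f"{name:>8}: SSE = {sse:.1f}")  # over the full 24 months the S-shaped series favors the logistic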
66. Inference with Distributional Semantic Models. Kruszewski Martel, German David (January 2016)
Distributional Semantic Models have emerged as a strong theoretical and practical approach to modeling the meaning of words. Indeed, an increasing body of work has proved their value in accounting for a wide range of semantic phenomena. Yet, it is still unclear how we can use the semantic information contained in these representations to support the natural inferences that we produce in our everyday use of natural language. In this thesis, I explore a selection of challenging relations that exemplify these inferential processes. To this end, on the one hand, I present new publicly available datasets that allow for their empirical treatment. On the other, I introduce computational models that can account for these relations using distributional representations as their conceptual knowledge repository. The performance of these models demonstrates the feasibility of this approach while leaving room for improvement in future work.
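A minimal sketch of the representations these models build on: words as vectors compared by cosine similarity (Python; the toy vectors are invented, where real models derive hundreds of dimensions from corpus co-occurrence statistics).

    import numpy as np

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        """Cosine similarity, the standard metric over distributional vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Toy 4-dimensional "distributional" vectors, purely illustrative.
    vectors = {
        "dog": np.array([0.9, 0.8, 0.1, 0.0]),
        "cat": np.array([0.8, 0.9, 0.2, 0.0]),
        "car": np.array([0.1, 0.0, 0.9, 0.8]),
    }
    print(cosine(vectors["dog"], vectors["cat"]))   # high: related concepts
    print(cosine(vectors["dog"], vectors["car"]))   # low: unrelated concepts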
67. From Energy Efficient to Energy Neutral Wireless Sensor Networks. Raza, Usman (January 2015)
Energy autonomy for Wireless Sensor Networks (WSNs) is key to involving industry stakeholders willing to spend billions on the Internet of Things. Offering a lifetime of only a few years, traditional battery-powered WSNs are neither practical nor profitable, due to their high maintenance cost. Powering WSNs with energy harvesters can overcome this limitation and increase the mean time-to-maintenance to tens of years. However, the primary challenge in realizing an energy neutral operation is to reduce the consumed energy drastically, to match the harvested energy. This dissertation proposes techniques to minimize the overhead of two main activities: communication and sampling. It does so by making a key observation: a plethora of applications can accept lower accuracy of the sensed phenomenon without sacrificing the application requirements. This fact enables us to reduce consumed energy by radically revising the network stack design, all the way from the application layer to the underlying hardware. At the application layer, the relaxed requirements make it possible to propose techniques that reduce data exchanges among the nodes, the most power-hungry operation in WSNs. For example, we propose a simple yet efficient prediction-based data collection technique called Derivative-Based Prediction (DBP) that enables data suppression of up to 99%. With the remaining ultra-low application data rate, a full system-wide evaluation reveals that the dominating overhead of the lower layers greatly limits the gains enabled by DBP. A cross-layer optimization of the network stack is then designed specifically to strip off this unnecessary overhead, gaining one order of magnitude longer lifetime. Although a huge saving in relative terms, the resulting power consumption is still much higher than tens of microwatts, the power usually achievable from a reasonably sized harvester deployed in an indoor environment. Therefore, we consider a novel combination of hardware components to further reduce power consumption. Our work demonstrates that using wake-up receivers along with DBP results in long idle periods with only rare occurrences of power-hungry states such as radio transmissions and receptions. Low-power modes, provided by various components of the underlying hardware platform, are adopted in the idle periods to conserve energy. In concrete real-world case studies, the lifetime is estimated to improve by two orders of magnitude. Thanks to the software and hardware features proposed above, the overall power consumption is reduced to a point where the sampling cost constitutes a significant portion of it. To reduce the cost of sampling, we introduce the concept of Model-based Sensing, in which we push prediction-based data collection as close as possible to the hardware sensing elements. This hardware-software co-design results in a system that consumes only a few microwatts, a point where even harvesters deployed in challenging indoor conditions can sustain the operation of nodes. This dissertation advances the state of the art on energy-efficient WSNs in several dimensions. First, it bridges the gap between theory and practice by providing the first ever system-wide evaluation of prediction-based data collection in real-world WSNs. Second, new software-based optimizations and novel hardware components are proposed that can deliver three orders of magnitude of reduction in power consumption. Third, it provides tools to estimate the harvestable energy in real WSNs.
By using these tools, the work highlights that the energy consumed by the proposed mechanisms is indeed lower than the energy harvested. By closing the gap between supply and demand of energy, the dissertation takes a concrete step in the direction of achieving completely energy neutral WSNs.
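A hedged sketch of the prediction-based data collection idea behind DBP (Python; the tolerance, the signal, and the slope estimate from the last two readings are simplifications for illustration, not the thesis's exact scheme): node and sink share a linear model of the data, and the node transmits only when a reading strays from the model's prediction by more than a tolerance.

    class DBPNode:
        """Simplified Derivative-Based Prediction. The node keeps the same
        linear model (value + slope) that the sink holds and reports only
        when a reading strays more than `tolerance` from the prediction."""

        def __init__(self, tolerance: float):
            self.tolerance = tolerance
            self.value, self.slope, self.t0 = None, 0.0, 0

        def sample(self, t: int, reading: float, last: float) -> bool:
            """Return True when a message must be sent to the sink."""
            if self.value is None or abs(self._predict(t) - reading) > self.tolerance:
                self.value, self.slope, self.t0 = reading, reading - last, t
                return True          # transmit new model; sink updates its copy
            return False             # suppressed: sink's prediction is close enough

        def _predict(self, t: int) -> float:
            return self.value + self.slope * (t - self.t0)

    # Slowly drifting temperature signal: almost all samples are suppressed.
    readings = [20.0 + 0.01 * t for t in range(1000)]
    node, sent = DBPNode(tolerance=0.5), 0
    for t, r in enumerate(readings):
        sent += node.sample(t, r, readings[t - 1] if t else r)
    print(f"sent {sent} of {len(readings)} samples")   # expect a handful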
68. A User Centric Interface for the Management of Past, Present and Future Events [PhD thesis presentation]. Hasan, Khandaker Tabin (January 2011)
Events have been categorized, modeled and recorded by researchers and practitioners for many centuries, and life and its events have always been a topic of philosophical debate. For us, an event is any happening worth remembering. This thesis makes an in-depth philosophical inquiry into the nature of events and their intricate relationship to other events in the tapestry of complex social structures. We tried to understand our life events, ranging from the fine-grained to the vast in nature and size. Causation and effects are investigated, and a simplified model is proposed for a user-centric personal event management system that is fundamentally different from any existing system. Facts, as a priori, and stories, as a posteriori, have been separated by formal definition. Novel visualization and interaction techniques are proposed to meet every individual's needs. The concept of lifelines has been introduced for organizing a single person's life events, making it possible to distinguish between being part of an event and being a witness to an event. This visualization model made it easier to manage causal relationships between events. Rich and intuitive interaction has been developed and proposed through a user-centric design process.
69. An Online Peer-Assessment Methodology for Improved Student Engagement and Early Intervention. Ashenafi, Michael Mogessie (January 2017)
Student performance is commonly measured using summative assessment methods such as midterms and final exams, as well as high-stakes testing. Although not as common, there are other methods of gauging student performance. Formative assessment is a continuous, student-oriented form of assessment, which focuses on helping students improve their performance through continuous engagement and constant measurement of progress. One assessment practice that has been used in this manner for decades is peer-assessment. This form of assessment relies on having students evaluate the work of their peers. The level of education at which peer-assessment is used varies across practices; the research discussed here was conducted in a higher-education setting. Despite its cross-domain adoption and longevity, peer-assessment has proved difficult to utilize in courses with a high number of students. This stems directly from the fact that it has traditionally been used in classes where assessment is carried out using pen and paper. In courses with hundreds of students, such manual forms of peer-assessment would require a significant amount of time to complete and would add considerably to both student and instructor load. Automated peer-assessment, on the other hand, has the advantage of reducing, if not eliminating, many of the issues relating to the efficiency and effectiveness of the practice. Moreover, its potential to scale up easily makes it a promising platform for conducting large-scale experiments or replicating existing ones. The goal of this thesis is to examine how the potential of automated peer-assessment may be exploited to improve student engagement, and to demonstrate how a well-designed peer-assessment methodology may help teachers identify at-risk students in a timely manner. A methodology is developed to demonstrate how online peer-assessment may elicit continuous student engagement. Data collected from a web-based implementation of this methodology are then used to construct several models that predict student performance and monitor progress, highlighting the role of peer-assessment as a tool for early intervention. The construction of open datasets from online peer-assessment data gathered from five undergraduate computer science courses is also discussed.
Finally, a promising role of online peer-assessment in measuring levels of student proficiency and test item difficulty is demonstrated by applying a generic Item Response Theory model to the peer-assessment data.
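As an illustration of the kind of Item Response Theory model mentioned above, here is the one-parameter (Rasch) model, a common choice, with invented proficiency and difficulty values; the thesis's exact model is not reproduced here.

    import math

    def rasch(theta: float, b: float) -> float:
        """One-parameter (Rasch) IRT model: probability that a student with
        proficiency theta answers an item of difficulty b correctly."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Invented proficiency/difficulty values on the usual logit scale.
    for theta in (-1.0, 0.0, 1.0):                 # weak, average, strong student
        row = " ".join(f"{rasch(theta, b):.2f}" for b in (-1.0, 0.0, 1.0))
        print(f"theta={theta:+.1f}  P(correct) on easy/medium/hard items: {row}")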
70. Spoken Language Understanding: from Spoken Utterances to Semantic Structures. Dinarelli, Marco (January 2010)
In the past two decades there have been several projects on Spoken Language Understanding (SLU).
In the early nineties, the DARPA ATIS project aimed at providing a natural language interface to a travel information database.
Following the ATIS project, the DARPA Communicator project aimed at building a spoken dialog system that automatically provides information on flights and travel reservations.
These two projects defined a first generation of conversational systems.
In the late nineties, the "How May I Help You?" project from AT&T,
with Large Vocabulary Continuous Speech Recognition (LVCSR) and mixed-initiative spoken interfaces,
started the second generation of conversational systems,
which were later improved by integrating approaches based on machine learning techniques.
The European-funded project LUNA aims at starting the third generation of spoken language interfaces.
In the context of this project, and in contrast with previous projects, we have acquired the first Italian corpus of spontaneous speech from real users engaged in a problem-solving task.
The corpus contains transcriptions and annotations based on a new multilevel protocol designed specifically for the goals of the LUNA project.
The task of Spoken Language Understanding is the extraction of the meaning structure from spoken utterances in conversational systems. For this purpose, two main statistical learning paradigms have been proposed in the last decades: generative and discriminative models.
The former are robust to over-fitting and less affected by noise, but they cannot easily integrate complex structures (e.g. trees). In contrast, the latter can easily integrate very complex features that capture arbitrarily long-distance dependencies. On the other hand, they tend to over-fit the training data, and so they are less robust to annotation errors in the data needed to learn the model.
This work presents an exhaustive study of Spoken Language Understanding models, with a particular focus on the structural features used in a joint generative and discriminative learning framework, which combines the strengths of both approaches while training segmentation and labeling models for SLU. Its main characteristic is the use of kernel methods to encode structured features in Support Vector Machines, which in turn re-rank the hypotheses produced by a first-step SLU module based either on Stochastic Finite State Transducers or on Conditional Random Fields. Joint models based on transducers are also amenable to decoding the word lattices generated by large-vocabulary speech recognizers.
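As a hedged, minimal illustration of the generative side of this picture, the sketch below (Python) decodes concepts for an ATIS-like utterance with a toy hidden Markov model and Viterbi search; the concept inventory and all probabilities are invented, and the thesis's actual first-step models (SFSTs, CRFs) use far richer features.

    # Toy HMM for ATIS-like concept tagging (all numbers invented).
    states = ["O", "CITY-FROM", "CITY-TO"]
    trans = {  # log P(next state | state)
        "O":         {"O": -0.4, "CITY-FROM": -1.5, "CITY-TO": -1.5},
        "CITY-FROM": {"O": -0.3, "CITY-FROM": -2.0, "CITY-TO": -2.0},
        "CITY-TO":   {"O": -0.3, "CITY-FROM": -2.0, "CITY-TO": -2.0},
    }
    emit = {  # log P(word | state); unseen words get a floor value
        "O":         {"flights": -1.0, "from": -1.2, "to": -1.2},
        "CITY-FROM": {"boston": -0.7},
        "CITY-TO":   {"denver": -0.7},
    }
    FLOOR = -8.0

    def viterbi(words):
        """Most probable concept sequence under the toy HMM."""
        V = [{s: emit[s].get(words[0], FLOOR) for s in states}]
        back = []
        for w in words[1:]:
            col, ptr = {}, {}
            for s in states:
                best = max(states, key=lambda p: V[-1][p] + trans[p][s])
                col[s] = V[-1][best] + trans[best][s] + emit[s].get(w, FLOOR)
                ptr[s] = best
            V.append(col)
            back.append(ptr)
        tags = [max(states, key=lambda s: V[-1][s])]
        for ptr in reversed(back):               # follow back-pointers
            tags.append(ptr[tags[-1]])
        return list(reversed(tags))

    print(viterbi("flights from boston to denver".split()))
    # expected: ['O', 'O', 'CITY-FROM', 'O', 'CITY-TO']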
We show the benefit of our approach with comparative experiments among generative, discriminative and joint models on some of the most representative corpora of SLU, for a total of four corpora in four different languages: the ATIS corpus (English), the MEDIA corpus (French) and the LUNA Italian and Polish corpora (Italian and Polish respectively). These also represent three different kinds of domain applications, i.e. informational, transactional and problem-solving domains.
The results, although dependent on the task and to some extent on the baseline of the first-step model, show that joint models improve on the state of the art in most cases, especially when only a small training set is available.