61

Into the city: A Multi-Disciplinary Investigation of Urban Life

De Nadai, Marco January 2019 (has links)
Cities are essential crucibles for innovation, novelty, economic prosperity and diversity. They are not a mere reflection of individual characteristics, but instead the result of a complex interaction between people and space. Yet, little is known about this self-organized and complex relationship. Traditional approaches have either used surveys to explain in detail how a few individuals experience bits of a city, or considered cities as a whole from their outputs (e.g. total crimes). This tide has, however, turned in recent years: the availability of new sources of data has allowed researchers to observe, describe, and predict human behaviour in cities at an unprecedented scale and level of detail. This thesis adopts a "data mining approach" in which we study urban spaces by combining new sources of automatically collected data with novel statistical methods. In particular, we focus on the relationship between the built environment, described by census information, geographical data, and images, and human behaviour, proxied by mobile phone traces. The contribution of our thesis is two-fold. First, we present novel methods to describe urban vitality by collecting and combining heterogeneous data sources. Second, we show that, by studying the built environment in conjunction with human behaviour, we can reliably estimate the effect of neighbourhood characteristics and predict housing prices and crime. Our results are relevant to researchers in a broad range of fields, from computer science to urban planning and criminology, as well as to policymakers.
The thesis is structured in two parts. In the first part, we investigate what creates urban life. We operationalize the theory of Jane Jacobs, one of the most famous authors in urban planning, who suggested that the built environment and vitality are intimately connected. Using Web and Open data to describe neighbourhoods, and mobile phone records to proxy urban vitality, we show that it is possible to predict vitality from the built environment, thus confirming Jacobs' theory. We also investigate the effect of safety perception on urban vitality by introducing a novel Deep Learning model that relies on street-view images to extract perceived security. Our model describes how perception modulates the relative presence of women, elderly and younger people in urban spaces. Altogether, we demonstrate how objective and subjective characteristics describe urban life.
In the second part of this dissertation, we outline two studies that stress the importance of considering multiple factors at the same time to describe cities. First, we build a predictive model showing that objective and subjective neighbourhood features drive more than 20% of housing prices. Second, we describe the effect played by a neighbourhood's characteristics on the presence of crime. We present a Bayesian method to compare two alternative criminology theories, and show that the best description is achieved by considering socio-economic characteristics, the built environment, and the mobility of people together. We also highlight the limitations of transferring what we learn from one city to another. Our findings show that new sources of data, automatically sensed from the environment, can complement the lengthy and costly survey-based collection of urban data and reliably describe neighbourhoods at an unprecedented scale and breadth. We anticipate that our results will be of interest to researchers in computer science, urban planning, sociology and, more broadly, computational social science.
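The sketch below illustrates, on synthetic data, the flavour of such an analysis: a regressor predicting a vitality proxy from built-environment features. The features, data and model choice here are invented for illustration and are not the thesis's actual variables or methods.

```python
# Minimal sketch (not the thesis code): predicting a mobile-phone-based
# vitality proxy from hypothetical built-environment features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # hypothetical number of neighbourhoods

# Hypothetical built-environment descriptors (e.g., land-use mix, block
# size, building density, distance to centre) -- invented for illustration.
X = rng.random((n, 4))
# Synthetic vitality proxy standing in for mobile-phone activity density.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```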
62

An Effective End-User Development Approach through Domain-Specific Mashups for Research Impact Evaluation

Imran, Muhammad January 2013 (has links)
Over the last decade, there has been growing interest in the assessment of the performance of researchers, research groups, universities and even countries. The assessment of productivity is an instrument to select and promote personnel, assign research grants and measure the results of research projects. One particular assessment approach is bibliometrics, i.e., the quantitative analysis of scientific publications through citation and content analysis. However, there is little consensus today on how research evaluation should be performed, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. The process is very often highly subjective, and there are no universally accepted criteria. A number of different scientific data sources available on the Web (e.g., DBLP, Microsoft Academic Search, Google Scholar) are used for such analysis purposes. Taking data from these diverse sources, performing the analysis and visualizing results in different ways is not a trivial, straightforward task. Moreover, the data taken from these sources cannot be used as is due to the problem of name disambiguation, where many researchers share identical names or different name variations of the same author appear in the data. We believe that the personalization of the evaluation process is a key element for the appropriate use and practical success of these research impact evaluation tasks. Moreover, people involved in such evaluation processes are not always IT experts and hence are not capable of crawling data sources, merging them and computing the needed evaluation procedures. The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to produce their own applications. Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) do not convey enough meaning to non-programmers. We believe that the heart of the problem is that it is impractical to design tools that are generic enough to cover a wide range of application domains, powerful enough to enable the specification of non-trivial logic, and simple enough to be actually accessible to non-programmers. At some point, we need to give up something. In our view, this something is generality, since reducing expressive power would mean supporting only the development of toy applications, which is useless, while simplicity is our major aim. This thesis presents a novel approach for effective end-user development, specifically for non-programmers. That is, we introduce a domain-specific approach to mashups that "speaks the language of users", i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup platform means, which roles the mashup meta-model and the domain model play, and how these can be merged into a domain-specific mashup meta-model. We illustrate the approach by implementing a generic mashup platform whose capabilities are based on our proposed mashup meta-model. Further, we illustrate how the generic mashup platform can be tailored for a specific domain through the development of ResEval Mash, a tool specifically built for the research evaluation domain.
Moreover, the thesis proposes an architectural design for mashup platforms; specifically, it presents a novel approach for data-intensive mashup-based web applications, which proved to be a substantial contribution. The proposed approach is suitable for applications that deal with large amounts of data travelling between client and server. For the evaluation of our work, and to determine the effectiveness and usability of our mashup tool, we performed two separate user studies. The results of the user studies confirm that domain-specific mashup tools indeed lower the entry barrier for non-technical users in mashup development. The methodology presented in this thesis is generic and can be applied to other domains. Moreover, the mashup platform developed by following this methodology is also generic; that is, it can be tailored to other domains.
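As a toy illustration of what a domain-specific mashup "speaking the language of users" composes, the hypothetical pipeline below wires a data-source connector to metric computation. The real ResEval Mash platform assembles visual components rather than code, and all names here are invented.

```python
# Illustrative sketch only: the kind of research-evaluation pipeline a
# non-programmer would assemble visually in a domain-specific mashup tool.
from statistics import mean

def fetch_publications(author):
    # Stand-in for a data-source component (e.g., a DBLP or Scholar
    # connector); returns canned records here.
    return [{"title": "Paper A", "citations": 12},
            {"title": "Paper B", "citations": 7},
            {"title": "Paper C", "citations": 3}]

def h_index(pubs):
    # h = largest rank i such that the i-th most cited paper has >= i citations.
    cites = sorted((p["citations"] for p in pubs), reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def evaluate(author):
    # The "mashup": wire source -> metrics -> report.
    pubs = fetch_publications(author)
    return {"author": author,
            "h-index": h_index(pubs),
            "avg-citations": mean(p["citations"] for p in pubs)}

print(evaluate("J. Doe"))
```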
63

Air pollution modelling over complex topography

Antonacci, Gianluca January 2004 (has links)
The present study deals with air pollution modelling over complex topography, both from the phenomenological and the numerical point of view. The theme of air pollution modelling was first addressed from a phenomenological point of view; then a numerical approach for the solution of the diffusion-advection equation was followed. Two different methods have been explored: puff models and Lagrangian particle models. The Eulerian-Lagrangian puff model CALPUFF (released by Earth Tech) has been used as a reference: the closures and parametrizations adopted by this software have been tested over complex terrain, and some minor changes have been introduced into the original code. A further step was the development of a Lagrangian particle-tracking program, suitable for non-homogeneous, non-stationary flows and also adapted to complex-terrain cases, accounting for skewed vertical turbulence in any atmospheric stability class. The Langevin equation was solved following Thomson's (1987) approach. Special attention was paid to near-field dispersion processes. In fact, Lagrangian models turn out to be the most advanced numerical schemes for pollutant transport simulations, but are at present suitable only for short-term simulations, at least in complex terrain where high spatial resolution is needed. An extension of the Lagrangian model has then been developed, using the so-called "kernel method"; this feature considerably improves calculation performance, dramatically reducing computation time, so that simulations become practicable for longer temporal scales as well. Nevertheless, the kernel method seems to lead to unreliable results for narrow valleys or very steep slopes, so its results cannot be generalized. Moreover, the problem of determining vertical profiles of turbulent diffusivity over complex orography has been addressed. Both a local approach and a global one (suitable for compact valleys) for the estimate of eddy diffusivity in valleys have been investigated; the former has been adopted in the Lagrangian model previously developed. Since atmospheric turbulence is mostly generated by solar thermal flux, a procedure for the calculation of the effective solar radiation was developed. The method, which can be introduced into meteorological models that use complex orography as input, takes into account shadowed areas, soil coverage and the possible presence of clouds, which filter and reduce the incoming solar radiation. Tests have been carried out using a modified version of the CALMET model (Earth Tech Inc.). Results are in agreement with turbulence data acquired by means of a sonic anemometer during a field campaign performed by the Department. Finally, the analysis of near-field dispersion over complex terrain has been extended to the urban context, adopting essentially the same conceptual tools on a smaller scale. A finite-volume three-dimensional numerical model has been developed and tested in simulating the dispersion of traffic-derived pollutants in the town of Trento. For ground-level sources, the geometry of the domain and the emission conditions turn out to be very important, together with meteorological conditions (especially atmospheric stability). The roughness, i.e., the buildings of the study area, has therefore been explicitly considered, using a high-resolution digital elevation map of the urban area. This approach has turned out to be necessary for near-field dispersion, when the emission source is located inside the roughness and the impact area falls entirely within the near field.
Here a comparison has been made between the predicted numerical solution and data measured by the air quality stations present in the urban area, showing good agreement. A further refinement of the study has led to the development of a two-dimensional x-z Lagrangian model at the "street scale", for the study of canyon effects, which tend to trap pollutants inside an urban canyon with behaviours that typically depend on geometric features, atmospheric turbulence and wind speed.
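As background, a minimal version of the Lagrangian particle step such models rely on is sketched below: a Langevin update for the vertical velocity fluctuation w in the simplest homogeneous, Gaussian form. The thesis handles skewed, inhomogeneous turbulence following Thomson (1987), which adds drift-correction terms omitted here; all parameter values are illustrative.

```python
# Sketch of a single-particle Langevin step (homogeneous Gaussian case),
# not the thesis's full skewed/inhomogeneous formulation.
import numpy as np

def langevin_step(w, z, dt, sigma_w, T_L, rng):
    """Advance one particle by dt: returns the updated (w, z)."""
    dW = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
    w = w - (w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
    z = z + w * dt
    if z < 0.0:                                # perfect reflection at the ground
        z, w = -z, -w
    return w, z

rng = np.random.default_rng(42)
w, z = 0.0, 50.0                               # illustrative release height: 50 m
for _ in range(600):                           # 10 minutes with dt = 1 s
    w, z = langevin_step(w, z, 1.0, sigma_w=0.5, T_L=100.0, rng=rng)
print(f"final particle height: {z:.1f} m")
```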
64

On Neighbors, Groups and Application Invariants in Mobile Wireless Sensor Networks

Guna, Stefan-Valentin January 2011 (has links)
The miniaturization and energy-efficient operation of wireless sensor networks (WSNs) provide unprecedented opportunities for monitoring mobile entities. The motivation for this thesis is drawn from real-world applications including wildlife monitoring, assisted living, and logistics. Nevertheless, mobility unveils a series of problems that do not arise in fixed scenarios. Through applications, we distill three of those, as follows. Neighbor discovery, or knowing the identity of surrounding nodes, is the precondition for any communication between nodes. Compared to existing solutions, we provide a framework that approaches the problem from the perspectives of latency (the time required to detect a given number of contacts), lifetime (the time nodes are expected to last) and probability (the fraction of contacts guaranteed to be detected within a given latency). By formalizing neighbor discovery as an optimization problem, we obtain a significant improvement w.r.t. the state of the art. We offer a solver providing the optimal configuration and an implementation for popular WSN devices. Group membership, or knowing the identity of the transitively connected nodes, can be either the direct answer to a requirement (e.g., caring for people who are not self-sufficient) or a building block for higher-level abstractions. Earlier works on the same problem target either less constrained devices such as PDAs or laptops or, when targeting WSN devices, provide only post-deployment information on the group. Instead, we provide three protocols that cover the solution space. All our protocols empower each node with a run-time global view of the group composition. Finally, we focus on the behavior of the processes monitored by WSNs. We present a system that validates whether global invariants describing the safe behavior of a monitored system are satisfied. Although similar problems have been tackled before, the invariants we target are more complex and our system evaluates them in the network, at run-time. We focus on invariants that are expressed as first-order logic formulas over the state of multiple nodes. The requirement for monitoring invariants arises in both fixed and mobile environments; we design and implement an efficient solution for each. Noteworthy is that the solution targeting mobility bestows each node with an eventually consistent view of the satisfaction of the monitored invariants; in this context, the group membership algorithms play the role of global failure detectors.
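For intuition about the latency/lifetime/probability trade-off (a back-of-envelope model, not the thesis's solver), consider random, birthday-protocol-style discovery, assuming both nodes wake independently in each slot:

```python
# Two nodes that each wake in a slot independently with probability d meet
# in that slot with probability d^2, so P(discovery within t slots) is
# 1 - (1 - d^2)^t. Higher duty cycles cut latency but cost lifetime.
def discovery_probability(duty_cycle: float, slots: int) -> float:
    return 1.0 - (1.0 - duty_cycle**2) ** slots

for d in (0.01, 0.05, 0.10):
    t = 1
    while discovery_probability(d, t) < 0.95:  # slots for 95% discovery
        t += 1
    print(f"duty cycle {d:.0%}: 95% discovery within {t} slots")
```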
65

Domain Modeling Theory and Practice

Das, Subhashis January 2018 (has links)
Every day, huge amounts of data are captured and stored, whether through social initiatives, technological advancement or smart devices. This involves the release of data that differs in format, language, schema and standards, coming from various user communities and organizations. The main challenge in this scenario lies in the integration of such diverse data and in the generation of knowledge from the existing sources. Various methodologies for data modeling have been proposed by different research groups, following different approaches and based on scenarios from different domains of application. However, only a few of these methodologies elaborate the steps involved. As a result, there is a lack of clarity on how to handle the issues that occur in the different phases of domain modeling. The aim of this research is to present a scalable, interoperable and effective framework and methodology for data modeling. The backbone of the framework is a two-layer structure, schema and language, to tackle diversity. An entity-centric approach has been followed as the main notion of the methodology. A few aspects that have been especially emphasized are: modeling a flexible data integration schema, dealing with messy data sources, alignment with an upper ontology, and implementation. We evaluated our methodology from the user perspective to check its practicability.
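A toy rendering of the entity-centric, two-layer idea is sketched below; the structures are hypothetical and only illustrate how a schema layer and a language layer can separate concerns:

```python
# Hypothetical illustration: the schema layer fixes attributes per entity
# type, while the language layer keeps per-language labels, so sources that
# differ in schema and language map onto one entity representation.
SCHEMA = {"Person": ["name", "birth_date", "affiliation"]}

LABELS = {  # language layer: attribute labels per language
    "en": {"name": "name", "birth_date": "date of birth"},
    "it": {"name": "nome", "birth_date": "data di nascita"},
}

def make_entity(etype, **attrs):
    unknown = set(attrs) - set(SCHEMA[etype])
    if unknown:
        raise ValueError(f"attributes not in schema: {unknown}")
    return {"type": etype, **attrs}

e = make_entity("Person", name="Ada Lovelace", birth_date="1815-12-10")
print(LABELS["it"]["birth_date"], "->", e["birth_date"])
```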
66

Classifying semisimple orbits of theta-groups

Oriente, Francesco January 2012 (has links)
I consider the problem of classifying the semisimple orbits of a theta-group. For this purpose, after a preliminary presentation of the theoretical subjects from which my problem arises, I first give an algorithm to compute a Cartan subspace; subsequently, I describe how to compute the little Weyl group.
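For context, the standard setting (classical Vinberg theory, summarized here as background rather than as the thesis's own contribution) is:

```latex
% An automorphism \theta of order m of a semisimple Lie algebra \mathfrak{g}
% induces a cyclic grading
\mathfrak{g} \;=\; \bigoplus_{i \in \mathbb{Z}/m\mathbb{Z}} \mathfrak{g}_i,
\qquad [\mathfrak{g}_i, \mathfrak{g}_j] \subseteq \mathfrak{g}_{i+j}.
% The theta-group G_0 (with Lie algebra \mathfrak{g}_0) acts on \mathfrak{g}_1.
% A Cartan subspace \mathfrak{c} \subseteq \mathfrak{g}_1 is a maximal subspace
% of pairwise commuting semisimple elements; semisimple G_0-orbits are then
% classified by points of \mathfrak{c} modulo the little Weyl group
W(\mathfrak{c}) \;=\; N_{G_0}(\mathfrak{c}) \,/\, Z_{G_0}(\mathfrak{c}).
```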
67

Empirical Methods for Evaluating Vulnerability Models

Nguyen, Viet Hung January 2014 (has links)
This dissertation focuses on the following research question: "how to independently and systematically validate empirical vulnerability models?". Based on a survey of past studies of the vulnerability discovery process, the dissertation points out several critical issues in the traditional methodology for evaluating the performance of vulnerability discovery models (VDMs). These issues have impacted the conclusions of several studies in the literature. To address these pitfalls, a novel empirical methodology and a data collection infrastructure are proposed to conduct experiments that evaluate the empirical performance of VDMs. The methodology consists of two quantitative analyses, namely quality and predictability analyses, which enable analysts to study the performance of VDMs and to compare them effectively. The proposed methodology and the data collection infrastructure have been used to assess several existing VDMs on many major versions of the major browsers (i.e., Chrome, Firefox, Internet Explorer, and Safari). The extensive experimental analysis reveals an interesting finding about VDM performance in terms of quality and predictability: the simplest linear model is the most appropriate one for predicting the vulnerability discovery trend within the first twelve months after the release date of a browser version; later than that, logistic models are more appropriate. The analyzed vulnerability data exhibit the phenomenon of after-life vulnerabilities, which have been discovered for the current version but are also attributed to browser versions out of support – dead versions. These vulnerabilities, however, may not actually exist, and may have an impact on past scientific studies or on compliance assessment. Therefore, this dissertation proposes a method to identify code evidence for vulnerabilities. The results of the experiments show that a significant number of vulnerabilities have been systematically over-reported for old versions of browsers. Consequently, old versions of software seem to have fewer vulnerabilities than reported.
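A hedged sketch of the model-fitting step behind such a comparison is shown below: a linear and a logistic VDM fitted to synthetic cumulative vulnerability counts. The data and fitting procedure are illustrative only, not the thesis's dataset or exact methodology.

```python
# Fit a linear and an S-shaped (logistic) vulnerability discovery model to
# a synthetic cumulative count of disclosed vulnerabilities and compare fits.
import numpy as np
from scipy.optimize import curve_fit

def linear(t, a, b):
    return a * t + b

def logistic(t, K, r, t0):          # logistic (AML-style) discovery model
    return K / (1.0 + np.exp(-r * (t - t0)))

months = np.arange(1, 37, dtype=float)
# Synthetic cumulative counts following an S-curve plus noise.
observed = 80.0 / (1.0 + np.exp(-0.25 * (months - 18.0)))
observed += np.random.default_rng(1).normal(0.0, 2.0, months.size)

for name, f, p0 in (("linear", linear, (1.0, 0.0)),
                    ("logistic", logistic, (100.0, 0.1, 12.0))):
    params, _ = curve_fit(f, months, observed, p0=p0, maxfev=10000)
    sse = float(np.sum((f(months, *params) - observed) ** 2))
    print(f"{name:8s} SSE = {sse:.1f}")
```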
68

Inference with Distributional Semantic Models

Kruszewski Martel, German David January 2016 (has links)
Distributional Semantic Models have emerged as a strong theoretical and practical approach to modelling the meaning of words. Indeed, an increasing body of work has proved their value in accounting for a wide range of semantic phenomena. Yet, it is still unclear how we can use the semantic information contained in these representations to support the natural inferences that we produce in our everyday usage of natural language. In this thesis, I explore a selection of challenging relations that exemplify these inferential processes. To this end, on the one hand, I present new publicly available datasets to allow for their empirical treatment. On the other, I introduce computational models that can account for these relations using distributional representations as their conceptual knowledge repository. The performance of these models demonstrates the feasibility of this approach, while leaving room for improvement in future work.
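A minimal sketch of the kind of representation such models build on, with toy vectors invented for illustration (real models use high-dimensional vectors learned from large corpora):

```python
# Words as co-occurrence-based vectors; cosine similarity as a crude proxy
# for semantic relatedness. Tiny toy vectors, for illustration only.
import numpy as np

vectors = {
    "dog":    np.array([8.0, 1.0, 5.0]),
    "cat":    np.array([7.0, 2.0, 5.0]),
    "carrot": np.array([0.0, 9.0, 1.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(f"dog~cat:    {cosine(vectors['dog'], vectors['cat']):.2f}")
print(f"dog~carrot: {cosine(vectors['dog'], vectors['carrot']):.2f}")
```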
69

From Energy Efficient to Energy Neutral Wireless Sensor Networks

Raza, Usman January 2015 (has links)
Energy autonomy for Wireless Sensor Networks (WSNs) is key to involving industry stakeholders willing to spend billions on the Internet of Things. Offering a lifetime of only a few years, traditional battery-powered WSNs are neither practical nor profitable due to their high maintenance cost. Powering WSNs with energy harvesters can overcome this limitation and increase the mean time-to-maintenance to tens of years. However, the primary challenge in realizing an energy-neutral operation is to reduce the consumed energy drastically, to match the harvested energy. This dissertation proposes techniques to minimize the overhead of two main activities: communication and sampling. It does so by making a key observation: a plethora of applications can accept low accuracy in the sensed phenomenon without sacrificing application requirements. This fact enables us to reduce the consumed energy by radically revising the network stack design, all the way from the application layer to the underlying hardware. At the application layer, the relaxed requirements make it possible to propose techniques that reduce data exchanges among the nodes, the most power-hungry operation in WSNs. For example, we propose a simple yet efficient prediction-based data collection technique called Derivative-Based Prediction (DBP) that enables data suppression of up to 99%. With the remaining ultra-low application data rate, a full system-wide evaluation reveals that the dominating overhead of the lower layers greatly limits the gains enabled by DBP. A cross-layer optimization of the network stack is then designed specifically to strip off the unnecessary overhead and gain an order of magnitude longer lifetime. Although a huge saving in relative terms, the resulting power consumption is still much higher than tens of microwatts, the power usually achievable from a reasonably sized harvester deployed in an indoor environment. Therefore, we consider a novel combination of hardware components to further reduce power consumption. Our work demonstrates that using wake-up receivers along with DBP results in long idle periods with only rare occurrences of power-hungry states such as radio transmissions and receptions. Low-power modes, provided by various components of the underlying hardware platform, are adopted in the idle periods to conserve energy. In concrete real-world case studies, the lifetime is estimated to improve by two orders of magnitude. Thanks to the software and hardware features proposed above, the overall power consumption is reduced to a point where the sampling cost constitutes a significant portion of it. To reduce the cost of sampling, we introduce the concept of Model-based Sensing, in which we push prediction-based data collection as close as possible to the hardware sensing elements. This hardware-software co-design results in a system that consumes only a few microwatts, a point where even harvesters deployed in challenging indoor conditions can sustain the operation of nodes. This dissertation advances the state of the art on energy-efficient WSNs in several dimensions. First, it bridges the gap between theory and practice by providing the first ever system-wide evaluation of prediction-based data collection in real-world WSNs. Second, new software-based optimizations and novel hardware components are proposed that can deliver a three orders of magnitude reduction in power consumption. Third, it provides tools to estimate the harvestable energy in real WSNs.
By using these tools, the work highlights that the energy consumed by the proposed mechanisms is indeed lower than the energy harvested. By closing the gap between supply and demand of energy, the dissertation takes a concrete step in the direction of achieving completely energy neutral WSNs.
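To make the suppression idea behind DBP concrete, the sketch below is a simplified, hypothetical rendering of it: the node shares a linear model (value, slope) with the sink and transmits only when a reading drifts outside a tolerance band around the model's prediction. The published protocol differs in how the model is built and refreshed.

```python
# Simplified DBP-style suppression filter (illustrative, not the protocol).
def dbp_filter(readings, tolerance):
    """Yield (index, value) pairs that must actually be transmitted."""
    model_value, slope = readings[0], 0.0
    yield 0, readings[0]                      # initial model update
    for i in range(1, len(readings)):
        predicted = model_value + slope * i
        if abs(readings[i] - predicted) > tolerance:
            # Model broken: rebuild from the last two points and transmit.
            slope = readings[i] - readings[i - 1]
            model_value = readings[i] - slope * i
            yield i, readings[i]

temps = [20.0, 20.1, 20.2, 20.3, 25.0, 25.1, 25.2]
sent = list(dbp_filter(temps, tolerance=0.5))
print(f"sent {len(sent)}/{len(temps)} readings:", sent)
```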
70

Mechanics and numerical simulations of Dry Granular Flows driven by gravity

Rossi, Giulia January 2018 (has links)
Gravitational granular flows (e.g., debris flows or snow avalanches) are catastrophic and destructive phenomena affecting many areas of the world, especially the mountain regions of Europe. Proper design criteria are required in order to improve protection structures and prevention strategies. Due to their complex nature, these phenomena present many aspects that are still unresolved in the research field. This research addresses some aspects of the mechanics of dry granular flows: a 1D depth-integrated model has been developed, based on a two-phase approach. The system of equations consists of three partial differential equations, derived from the mass balances for the solid and fluid phases and from the momentum balance for the solid phase, plus two rheological relations determined through experimental tests and particle numerical simulations. The experimental investigation has been conducted in a laboratory channel, by recording the motion of spherical polystyrene particles with high-speed cameras. Within this research, an ad hoc optical method has been developed to analyze and process the recorded images, with the aim of defining the main flow characteristics. From a numerical point of view, a path-conservative finite volume scheme has been adopted to solve the system of equations described above: the numerical solution is compared to the experimental results for different configurations, in order to verify the effectiveness of the model.
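For illustration only, the sketch below shows the kind of cell-average update a finite volume scheme performs, here a first-order Rusanov flux for scalar advection on a periodic domain rather than the thesis's path-conservative scheme for the full two-phase system:

```python
# Minimal first-order finite volume update for a 1D scalar conservation law.
import numpy as np

def rusanov_step(u, dx, dt, flux, max_speed):
    """One update of the cell averages on a periodic 1D grid."""
    uL, uR = u, np.roll(u, -1)                 # states left/right of interface i+1/2
    fI = 0.5 * (flux(uL) + flux(uR)) - 0.5 * max_speed * (uR - uL)
    return u - dt / dx * (fI - np.roll(fI, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
a = 1.0                                        # advection speed
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)  # initial concentration hump

for _ in range(100):
    u = rusanov_step(u, dx, dt=0.4 * dx / a, flux=lambda v: a * v, max_speed=a)
print(f"total mass (should be conserved): {u.sum() * dx:.4f}")
```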
