
Mathematical models for vector-borne disease: effects of periodic environmental variations.

Moschini, Pamela Mariangela January 2015 (has links)
Firstly, I proposed a very simple SIS/SIR model for general vector-borne disease transmission, considering constant population sizes over the season, where contact between the host and the vector responsible for the transmission is assumed to occur only during the summer of each year. I discussed two different types of threshold for pathogen persistence, which I computed explicitly: a "short-term threshold" and a "long-term threshold". Later, I took into account the seasonality of the populations involved in the transmission. For a single season, the model consists of a system of nonlinear differential equations describing the various stages of the infection transmission between the vector and the host population. Assuming overwintering in the mosquito population, I simulated the model over several years. Finally, I studied the spatial spread of a vector-borne disease through an impulsive reaction-diffusion model and showed some simulations.
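A minimal numerical sketch of such a seasonal on/off transmission model can illustrate the idea; all parameter values below are purely illustrative, not those estimated in the thesis:

```python
import numpy as np

def seasonal_sis(beta=0.4, gamma=0.1, summer=(150, 240), years=3, dt=0.1):
    """Toy SIS model in which host-vector contact (and hence transmission)
    occurs only during the summer window of each year.
    All parameter values are illustrative, not taken from the thesis."""
    steps = int(365 * years / dt)
    i = 0.01            # initial infected fraction of the host population
    traj = []
    for step in range(steps):
        day_of_year = (step * dt) % 365
        b = beta if summer[0] <= day_of_year <= summer[1] else 0.0
        di = b * i * (1 - i) - gamma * i   # transmission only in summer
        i = max(i + dt * di, 0.0)          # forward Euler step
        traj.append(i)
    return np.array(traj)
```

Running this for several years shows the infected fraction rising each summer and decaying each winter; whether the winter decay wipes the pathogen out is exactly the kind of question the short-term and long-term thresholds address.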

A comparative analysis of the metabolomes of different berry tissues between Vitis vinifera and wild American Vitis species, supported by a computer-assisted identification strategy

Narduzzi, Luca January 2015 (has links)
Grape (Vitis vinifera L.) is among the most cultivated plants in the world. Its origin traces back to the Neolithic era, when the first human communities started to domesticate wild Vitis sylvestris L. grapes to produce wines. Domestication modified Vitis vinifera to assume characteristics imparted by humans, selecting desired traits (e.g. specific aromas) and excluding undesired ones. This process made the species very different from all the other wild grape species existing around the world, including its progenitor, Vitis sylvestris. Metabolomics is a field of science that comparatively studies the whole metabolite set of two (or more) groups of samples, to point out their chemical diversity and infer the variability in the metabolic pathways between the groups. Crude metabolomics observations can often be used for hypothesis generation, and the hypotheses then need to be confirmed by further experiments. In my case, starting from the grape metabolome project (Mattivi et al., unpublished data), I had the opportunity to work with a huge dataset built on the berries of over 100 Vitis vinifera grape varieties, tens of interspecific grape hybrids and a few wild grape species, analyzed over four years and all included in a single experiment. Starting from this data handling, I designed specific experiments to confirm the hypotheses generated from the observation of the data, to improve compound identification, to give statistical meaning to the differences, to localize the metabolites in the berries and to extrapolate further information on the variability existing within the grape genus. The hypotheses formulated were two: 1) several glyco-conjugated volatiles can be detected, identified and quantified by untargeted reversed-phase liquid chromatography-mass spectrometry; 2) the chemical difference between Vitis vinifera and wild grape berries is wider than reported in the literature.
Furthermore, handling a huge dataset of chemical standards injected under the same conditions as the sample set, I also formulated a third hypothesis: 3) metabolites with similar chemical structures are more likely to generate similar signals in LC-MS, therefore the combined use of the signals can predict the most likely chemical structure of unknown markers. In the first study (chapter 5), the signals putatively corresponding to glyco-conjugated volatiles were first confined to a specific portion of the temporal and spectrometric space of the LC-HRMS chromatograms, then subjected to MS/MS analysis, and lastly their putative identities were confirmed through peak intensity correlation between the signals measured in LC-HRMS and GC-MS. In the second study (chapter 6), a multivariate regression model was built between LC-HRMS signals and the substructures composing the molecular structure of the compounds, and its accuracy and efficacy in substructure prediction were demonstrated. In the third study (chapter 7), I comparatively studied some wild grapes versus some Vitis vinifera varieties, separating the basic components of the grape berry (skin, flesh and seeds), with the aim of identifying all the detected metabolites that differentiate the two groups and that determine the difference in quality between wild and domesticated grapes, especially regarding wine production.

Mathematical modeling for epidemiological inference and public health support

Marziano, Valentina January 2017 (has links)
During the last decades public health policy makers have been increasingly turning to mathematical modeling to support their decisions. This trend has called for the introduction of a new class of models that are not only capable of explaining qualitatively the dynamics of infectious diseases, but can also provide quantitatively reliable and accurate results. To this aim, models are becoming more and more detailed and informed with data. However, there is still much to be done in order to capture the individual and population features that shape the spread of infectious diseases. This thesis addresses some issues in epidemiological modeling that warrant further investigation. In Chapter 1 we introduce an age-structured individual-based stochastic model of Varicella Zoster Virus (VZV) transmission, whose main novelty is the inclusion of realistic population dynamics over the last century. This chapter represents an attempt to answer the need, pointed out by recent studies, for a better understanding of the role of demographic processes in shaping the circulation of infectious diseases. In Chapter 2 we use the model for VZV transmission developed in Chapter 1 to evaluate the effectiveness of varicella and herpes zoster (HZ) vaccination programs in Italy. With a view to supporting public health decisions, the epidemiological model is coupled with a cost-effectiveness analysis. To the best of our knowledge, this work represents the first attempt to evaluate the post-vaccination trends in varicella and HZ, from both an epidemiological and an economic perspective, in light of the underlying effect of demographic processes. Another novelty of this study is that we take into account the uncertainty regarding the mechanism of VZV reactivation by comparing results obtained under two different modeling assumptions on exogenous boosting.
In Chapter 3 we retrospectively analyze the spatiotemporal dynamics of the 2009 H1N1 influenza pandemic in England, using a spatially-explicit model of influenza transmission that accounts for socio-demographic and disease natural history data. The aim of this work is to investigate whether the observed spatiotemporal dynamics of the epidemic were shaped by a spontaneous behavioral response to the pandemic threat. This chapter represents an attempt to contribute to the challenge of understanding and quantifying the effect of human behavioral changes on the spread of epidemics. In Chapter 4 we investigate the current epidemiology of measles in Italy, using a detailed computational model for measles transmission informed with regional heterogeneities in the age-specific seroprevalence profiles. The analysis performed in this chapter tries to fill some of the existing gaps in the knowledge of the epidemiological features of vaccine-preventable diseases in settings characterized by a low circulation of the virus.

Mathematical modelling of emerging and re-emerging infectious diseases in human and animal populations

Dorigatti, Ilaria January 2011 (has links)
The works presented in this thesis are very different from one another, but they all deal with the mathematical modelling of emerging infectious diseases which, beyond being the leitmotiv of this thesis, is an important research area in the field of epidemiology and public health. A minor but significant part of the thesis has a theoretical flavour. This part is dedicated to the mathematical analysis of the competition model between two HIV subtypes in the presence of vaccination and cross-immunity proposed by Porco and Blower (1998). We find the sharp conditions under which vaccination leads to the coexistence of the strains and, using arguments from bifurcation theory, draw conclusions on the stability of the equilibria, finding that a rather unusual behaviour of hysteresis type might emerge after repeated variations of the vaccination rate within a certain range. Most of this thesis has been inspired by real outbreaks that occurred in Italy over the last 10 years, and concerns the modelling of the 1999-2000 H7N1 avian influenza outbreak and of the 2009-2010 H1N1 influenza pandemic. From an applied perspective, parameter estimation is a key part of the modelling process, and in this thesis statistical inference has been performed within both a classical framework (i.e. by maximum likelihood and least squares methods) and a Bayesian setting (i.e. by Markov Chain Monte Carlo techniques). However, my contribution goes beyond the application of inferential techniques to specific case studies. The stochastic, spatially explicit, between-farm transmission model developed for the transmission of the H7N1 virus has indeed been used to simulate different control strategies and assess their relative effectiveness. The modelling framework presented here for the H1N1 pandemic in Italy constitutes a novel approach that can be applied to a variety of different infections detected by surveillance systems in many countries.
We have coupled a deterministic compartmental model with a statistical description of the reporting process and have accounted for the presence of stochasticity in the surveillance system. We have thus tackled some statistically challenging issues (such as the estimation of the fraction of H1N1 cases reporting influenza-like-illness symptoms) that had not been addressed before. Lastly, we apply different estimation methods usually adopted in epidemiology to real and simulated school outbreaks, in an attempt to explore the suitability of a specific individual-based model for reproducing empirically observed epidemics in specific social contexts.

The influence of the inclusion of biological knowledge in statistical methods to integrate multi-omics data

Tini, Giulia January 2018 (has links)
Understanding the relationships among biomolecules, and how these relationships change between healthy and disease states, is an important question in modern biology and medicine. Advances in high-throughput techniques have led to an explosion of biological data available for analysis, allowing researchers to investigate multiple molecular layers (i.e. omics data) together. Classical statistical methods could not address the challenges of combining multiple data types, leading to the development of ad hoc methodologies, whose performance however depends on several factors. Among those, it is important to consider whether “prior knowledge” on the inter-omics relationships is available for integration. To address this issue, we focused on different approaches to three-omics integration: supervised (prior knowledge is available), unsupervised and semi-supervised. With the supervised integration of DNA methylation, gene expression and protein levels from adipocytes, we observed coordinated significant changes across the three omics in the last phase of adipogenesis. However, in most cases the interactions between different molecular layers are complex and unknown: we explored unsupervised integration methods, showing that their results are influenced by the choice of method, pre-processing, the number of integrated data types and the experimental design. The strength of the inter-omics signal and the presence of noise are also shown to be relevant factors. Since the inclusion of prior knowledge can highlight the former while decreasing the influence of the latter, we proposed a semi-supervised approach, showing that including knowledge about inter-omics interactions increases the accuracy of unsupervised methods on the problem of sample classification.

Deep neural network models for image classification and regression

Malek, Salim January 2018 (has links)
Deep learning, a branch of machine learning, has been gaining ground in many research fields as well as in practical applications. This ongoing boom can be traced back mainly to the availability and affordability of powerful processing facilities, which were not widely accessible just a decade ago. Although it has widely demonstrated cutting-edge performance in computer vision, and particularly in object recognition and detection, deep learning is yet to find its way into other research areas. Furthermore, the performance of deep learning models depends strongly on how they are designed and tailored to the problem at hand, which raises not only precision concerns but also processing overheads. The success and applicability of a deep learning system rely jointly on both components. In this dissertation, we present innovative deep learning schemes, with application to interesting though less-addressed topics. The first covered topic is rough scene description for visually impaired individuals, the idea of which is to list the objects that likely exist in an image grabbed by a visually impaired person. To this end, we proceed by extracting several features from the query image in order to capture the textural as well as the chromatic cues therein. Further, in order to improve the representativeness of the extracted features, we reinforce them with a feature learning stage by means of an autoencoder model. The latter is topped with a logistic regression layer in order to detect the presence of objects, if any. In a second topic, we propose exploiting the same model, i.e., the autoencoder, in the context of cloud removal in remote sensing images. Briefly, the model is learned on a cloud-free image pertaining to a certain geographical area, and applied afterwards to another, cloud-contaminated image of the same area, acquired at a different time instant.
Two reconstruction strategies are proposed, namely pixel-based and patch-based reconstruction. From the two earlier topics, we quantitatively demonstrate that autoencoders can play a pivotal role in terms of both (i) feature learning and (ii) reconstruction and mapping of sequential data. The Convolutional Neural Network (CNN) is arguably the most utilized model in the computer vision community, which is reasonable given its remarkable performance in object and scene recognition with respect to traditional hand-crafted features. Nevertheless, the CNN is naturally formulated in its two-dimensional version, which raises questions about its applicability to unidimensional data. Thus, a third contribution of this thesis is devoted to the design of a unidimensional architecture of the CNN, which is applied to spectroscopic data. In other terms, the CNN is tailored for feature extraction from one-dimensional chemometric data, and the extracted features are fed into advanced regression methods to estimate the underlying chemical component concentrations. Experimental findings suggest that, similarly to 2D CNNs, unidimensional CNNs are also able to outperform traditional methods. The last contribution of this dissertation is a new method to estimate the connection weights of a CNN, based on training an SVM for each kernel of the CNN. This method has the advantage of being fast and adequate for applications characterized by small datasets.
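The unidimensional convolution stage at the core of such an architecture can be sketched in a few lines of NumPy; this is a generic illustration (untrained, with arbitrary kernels), not the architecture designed in the thesis:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1D convolution of a signal with a bank of kernels.
    x: (n,) signal; kernels: (n_kernels, k) filter bank."""
    k = kernels.shape[1]
    n_out = (x.shape[0] - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for j in range(n_out):
        window = x[j * stride : j * stride + k]
        out[:, j] = kernels @ window
    return out

def features_1d(x, kernels, pool=4):
    """One conv -> ReLU -> max-pool stage: the basic feature-extraction
    block of a unidimensional CNN applied to a spectrum."""
    a = np.maximum(conv1d(x, kernels), 0.0)                     # ReLU
    n = (a.shape[1] // pool) * pool                             # trim remainder
    return a[:, :n].reshape(a.shape[0], -1, pool).max(axis=2)   # max-pool
```

In the regression setting described above, such pooled feature maps (here produced by random kernels for illustration) would be flattened and fed to a downstream regressor estimating component concentrations.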

The influence of the population contact network on the dynamics of epidemics transmission

Ottaviano, Stefania January 2016 (has links)
In this thesis we analyze the relationship between epidemiology and network theory, starting from the observation that viral propagation between interacting agents is determined by intrinsic characteristics of the population contact network. We aim to investigate how a particular network structure can impact the long-term behavior of epidemics. This field is far too large to be discussed fully; we limit ourselves to networks that are partitioned into local communities, in order to incorporate realistic contact structures into the model. The gross structure of hierarchical networks of this kind can be described by a quotient graph. The rationale of this approach is that individuals infect those belonging to the same community with higher probability than individuals in other communities. We describe the epidemic process as a continuous-time individual-based susceptible–infected–susceptible (SIS) model using a first-order mean-field approximation, in both homogeneous and heterogeneous settings. For this mean-field model we show that the spectral radius of the smaller quotient graph, together with the infection and curing rates, determines the epidemic threshold, giving conditions under which the overall healthy state is a globally asymptotically stable or an unstable equilibrium. Moreover, we show that above the threshold another steady state exists, which can be computed using a lower-dimensional dynamical system associated with the evolution of the process on the quotient graph. Our investigations are based on the graph-theoretical notion of equitable partition and on its recent and rather flexible generalization, that of almost equitable partition. We also consider the important issue of the control of the infectious disease.
Taking into account the connectivity of the network, we provide a cost-optimal distribution of resources to prevent the disease from persisting indefinitely in the population; for a particular case of the two-level immunization problem we report on the construction of a polynomial-time algorithm. In the second part of the thesis we include stochasticity in the model, considering infection rates in the form of independent stochastic processes. This allows us to obtain a stochastic differential equation for the probability of infection in each node. We report on the existence of the solution for all times. Moreover, we show that there exist two regions, given in terms of the coefficients of the model, one where the system goes to extinction almost surely, and the other where it is stochastically permanent.
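The spectral-radius threshold mentioned above can be illustrated for the plain first-order mean-field (NIMFA-type) SIS model; the quotient-graph machinery of the thesis is omitted here, and the graph and rates below are illustrative:

```python
import numpy as np

def epidemic_threshold(adj):
    """Reciprocal of the spectral radius of the adjacency matrix: the
    first-order mean-field threshold for the effective rate beta/delta."""
    return 1.0 / np.linalg.eigvalsh(adj).max()

def sis_mean_field(adj, beta, delta, p0, t_end=50.0, dt=0.01):
    """Euler integration of the mean-field SIS equations
    dp_i/dt = beta (1 - p_i) sum_j a_ij p_j - delta p_i."""
    p = np.array(p0, dtype=float)
    for _ in range(int(t_end / dt)):
        p += dt * (beta * (1 - p) * (adj @ p) - delta * p)
        p = np.clip(p, 0.0, 1.0)   # probabilities stay in [0, 1]
    return p
```

On the complete graph K5 the spectral radius is 4, so the threshold is 0.25: an effective infection rate above it settles on an endemic steady state, while one below it drives the infection probabilities to zero, matching the stability dichotomy for the healthy state described in the abstract.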

Socially aware motion planning of assistive robots in crowded environments

Colombo, Alessio January 2015 (has links)
People with impaired physical or mental ability often find it challenging to negotiate crowded or unfamiliar environments, leading to a vicious cycle of deteriorating mobility and sociability. In particular, crowded environments pose a challenge to the comfort and safety of those people. To address this issue, we present a novel two-level motion planning framework to be embedded efficiently in portable devices. At the top level, the long term planner deals with crowded areas, permanent or temporary anomalies in the environment (e.g., road blocks, wet floors), and hard and soft constraints (e.g., "keep a toilet within reach of 10 meters during the journey", "always avoid stairs"). A priority tailored to the user's needs can also be assigned to the constraints. At the bottom level, the short term planner anticipates undesirable circumstances in real time, by verifying simulation traces of local crowd dynamics against temporal logic formulae. The model takes into account the objectives of the user, preexisting knowledge of the environment and real-time sensor data. The algorithm is thus able to suggest a course of action to achieve the user’s changing goals, while minimising the probability of problems for the user and other people in the environment. An accurate model of human behaviour is crucial when planning motion of a robotic platform in human environments. The Social Force Model (SFM) is such a model, having parameters that control both deterministic and stochastic elements. The short term planner embeds the SFM in a control loop that determines higher level objectives and reacts to environmental changes. Low level predictive modelling is provided by the SFM fed by sensors; high level logic is provided by statistical model checking. To parametrise and improve the short term planner, we have conducted experiments to consider typical human interactions in crowded environments.
We have identified a number of behavioural patterns which may be explicitly incorporated in the SFM to enhance its predictive power. To validate our hierarchical motion planner we have run simulations and experiments with elderly people within the context of the DALi European project. The performance of our implementation demonstrates that our technology can be successfully embedded in a portable device or robot.
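The deterministic core of the Social Force Model the short term planner builds on can be sketched as follows; parameter values are generic textbook-style defaults, not those calibrated in the DALi experiments:

```python
import numpy as np

def social_force(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """Net force on one pedestrian in a minimal Social Force Model sketch:
    relaxation toward the desired velocity plus exponential repulsion from
    other pedestrians. All parameter values are illustrative defaults."""
    e = goal - pos
    e = e / np.linalg.norm(e)            # unit vector toward the goal
    drive = (v0 * e - vel) / tau         # desired-velocity relaxation term
    rep = np.zeros(2)
    for q in others:                     # pairwise social repulsion
        d = pos - q
        dist = np.linalg.norm(d)
        if dist > 1e-9:
            rep += A * np.exp(-dist / B) * d / dist
    return drive + rep
```

Integrating this force over time for every simulated pedestrian yields the local crowd trajectories that the planner then checks against temporal logic formulae; the stochastic fluctuation term of the full SFM is omitted in this sketch.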

Theoretical and Algorithmic Solutions for Null models in Network Theory

Gobbi, Andrea January 2013 (has links)
The graph-theoretical formulation for representing the data-driven structure and dynamics of complex systems is rapidly emerging as the paramount paradigm [1] across a variety of disciplines, from economics to neuroscience, with biological -omics as a major example. In this framework, the concept of null model, borrowed from the statistical sciences, identifies the elective strategy to obtain a baseline point of comparison for modelling [2]. Hereafter, a null model is a graph which matches one specific graph in terms of some structural features, but which is otherwise taken to be generated as an instance of a random network. In this view, the network model introduced by Erdos & Renyi [3], where random edges are generated as independently and identically distributed Bernoulli trials, can be considered the simplest possible null model. In the following years, other null models have been developed in the framework of graph theory, with the detection of community structure as one of the most important targets [4]. In particular, the model described in [5] introduces the concept of a randomized version of the original graph: edges are rewired at random, with each expected vertex degree matching the degree of the vertex in the original graph. Although aimed at building a reference for community detection, this approach plays a key role in one of the models considered in this thesis. Note that, although it was the first problem to be considered, designing null models for community structure detection is still an open problem [6, 7]. Real-world applications of null models in graph theory have also gained popularity in many different scientific areas, with ecology as the first example: see [8] for a comprehensive overview. More recently, interest in network null models has arisen also in computational biology [9, 10], geosciences [11] and economics [12, 13], just to name a few.
In the present work the theoretical design and the practical implementation of a series of algorithms for the construction of null models are introduced, with applications ranging from functional genomics to game theory for social studies. The four chapters devoted to the presentation of the examples of null models are preceded by an introductory chapter including a quick overview of graph theory, together with all the required notation. The first null model is the topic of the second chapter, where a suite of novel algorithms is shown, aimed at the efficient generation of complex networks under different constraints on the node degrees. Although not the most important example in the thesis, the prominent position dedicated to this topic is due to its close familiarity with the aforementioned classical null models for random graph construction. Together with the definition of the algorithms and examples, a thorough theoretical analysis of the proposed solutions is shown, highlighting the improvements with respect to the state of the art and the remaining limitations. Apart from its intrinsic mathematical value, the interest in these algorithms within the systems biology community lies in the need for benchmark graphs resembling real biological networks. They are in fact of utmost importance when testing novel inference methods, and as testbeds for network reconstruction challenges such as the DREAM series [14, 15, 16]. Chapter three includes the most complex application of null models presented in this thesis. The scientific field is again functional genomics, namely the combinatorial approach to the modelling of patterns of mutations in cancer as detected by Next Generation Sequencing exome data. This problem has a natural mathematical representation in terms of rewiring of bipartite networks and mutually exclusively mutated modules [17, 18], to which Markov chain updates (switching-steps) are applied through a Switching Algorithm (SA).
Here we show some crucial improvements to the SA, we analytically derive an approximate lower bound for the number of steps required, we introduce BiRewire, an R package implementing the improved SA, and we demonstrate the effectiveness of the novel solution on a breast cancer dataset. A novel threshold-selection method for the construction of co-expression networks based on the Pearson coefficient is the third and last biological example of a null model, and it is outlined in Chapter four. Gene co-expression networks inferred by correlation from high-throughput profiling such as microarray data represent a simple but effective technique for discovering and interpreting linear gene relationships. In recent years several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is mostly crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard-thresholding solution based on the assumption that a co-expression network inferred from randomly generated data is expected to be empty. The theoretical derivation of the new bound by geometrical methods is shown together with two applications in oncogenomics. The last two chapters of the thesis are devoted to the presentation of null models in non-biological contexts. In Chapter 5 a novel dynamic simulation model is introduced, mimicking a random market in which sellers and buyers follow different price distributions and matching functions. The random market is mathematically formulated as a dynamic bipartite graph, and the analytical formula for the evolution in time of the mean exchange price is derived, together with a global likelihood function for retrieving the initial parameters under different assumptions.
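The null-model assumption behind the threshold selection of Chapter four (a co-expression network inferred from random data should be empty) can be illustrated with a simple Monte Carlo stand-in; the thesis derives a closed-form geometric bound instead, so the function below is only a sketch of the idea:

```python
import numpy as np

def null_threshold(n_samples, n_genes, quantile=0.999, n_rep=200, seed=0):
    """Empirical hard threshold on |Pearson r|: a high quantile of the
    absolute correlations obtained from pure-noise data of the same shape.
    Illustrative Monte Carlo stand-in, not the thesis's analytic bound."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_rep):
        x = rng.normal(size=(n_samples, n_genes))   # pure-noise dataset
        c = np.corrcoef(x, rowvar=False)            # gene-gene correlations
        iu = np.triu_indices(n_genes, k=1)          # upper triangle only
        vals.append(np.abs(c[iu]))
    return np.quantile(np.concatenate(vals), quantile)
```

The key phenomenon the chapter addresses is visible here: with few samples the null distribution of correlations is wide, so the threshold must be much higher than with many samples.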
Finally, in Chapter 6 we describe how graph tools can be used to model abstraction and strategy (see [19, 20, 21]) for a class of games, in particular the TTT solitaire. We show that in this solitaire it is not possible to build an optimal (in the sense of minimum number of moves) strategy by dividing the big problem into smaller subproblems. Nevertheless, we find some subproblems and strategies for solving the TTT solitaire with a negligible increment in the number of moves. Although quite simple and far from simulating highly complex real-world situations of decision making, the TTT solitaire is an important tool for starting the exploration of the social analysis of the trajectories of the implementation of winning strategies through different learning procedures [22].
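The switching-step at the heart of the Switching Algorithm of Chapter three can be sketched as a degree-preserving rewiring of a bipartite edge set; this is a generic sketch of the classical move, not the improved BiRewire implementation:

```python
import random

def switching_step(edges, rng=random):
    """One Markov-chain switching step on a bipartite graph given as a set
    of (row, col) edges: pick two edges (a,b), (c,d) and swap endpoints to
    (a,d), (c,b) unless a new edge would duplicate an existing one.
    Row and column degrees are preserved by construction."""
    (a, b), (c, d) = rng.sample(sorted(edges), 2)
    if a == c or b == d or (a, d) in edges or (c, b) in edges:
        return False                      # rejected swap: graph unchanged
    edges -= {(a, b), (c, d)}
    edges |= {(a, d), (c, b)}
    return True

def rewire(edges, n_steps, seed=0):
    """Apply many switching steps; the thesis derives a bound on how many
    steps are needed for adequate mixing."""
    rng = random.Random(seed)
    edges = set(edges)
    for _ in range(n_steps):
        switching_step(edges, rng)
    return edges
```

Because every accepted swap removes two edges and adds two edges incident to the same rows and columns, the rewired graph matches the original's degree sequences exactly, which is precisely the null-model constraint used for the mutation-pattern analysis.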

Feynman path integral for Schrödinger equation with magnetic field

Cangiotti, Nicolò 14 February 2020 (has links)
Feynman path integrals, introduced heuristically in the 1940s, are a powerful tool used in many areas of physics, but also an intriguing mathematical challenge. In this work we use techniques of infinite dimensional integration (i.e. infinite dimensional oscillatory integrals) in two different, but strictly connected, directions. On the one hand, we construct a functional integral representation for solutions of general high-order heat-type equations, exploiting a recent generalization of infinite dimensional Fresnel integrals; in this framework we prove a Girsanov-type formula, which is related, in the case of the Schrödinger equation, to the Feynman path integral representation for the solution in the presence of a magnetic field; eventually a new phase space path integral solution for higher-order heat-type equations is also presented. On the other hand, for the three dimensional Schrödinger equation with magnetic field we provide a rigorous mathematical Feynman path integral formula, still in the context of infinite dimensional oscillatory integrals; moreover, the requirement that the integral be independent of the approximation procedure forces the introduction of a counterterm, which has to be added to the classical action functional (this is done for the example of a linear vector potential). Thanks to this, it is possible to give a natural explanation for the appearance of the Stratonovich integral in the path integral formula for both the Schrödinger and the heat equation with magnetic field.
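In standard physics notation (a heuristic sketch, not the thesis's rigorous oscillatory-integral formulation), the classical action entering the magnetic path integral, with the Stratonovich line integral mentioned above, reads:

```latex
S(\gamma) \;=\; \int_0^t \Big( \tfrac{m}{2}\,\lvert\dot\gamma(s)\rvert^2 \;-\; V(\gamma(s)) \Big)\, ds
\;+\; \lambda \int_0^t A(\gamma(s)) \circ d\gamma(s),
\qquad
\psi(t,x) \;\text{``=''}\; \int_{\{\gamma(t)=x\}} e^{\frac{i}{\hbar} S(\gamma)}\, \psi_0(\gamma(0))\, \mathcal{D}\gamma,
```

where \(A\) is the magnetic vector potential, \(\circ\, d\gamma\) denotes the Stratonovich integral along the path, and the quotation marks flag that the flat "measure" \(\mathcal{D}\gamma\) is only heuristic: giving the right-hand side rigorous meaning, counterterm included, is exactly the subject of the thesis.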
