  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Diseño en estructuras urbanas informales

Fernández Reyna, Miguel 09 May 2008 (has links)
Design in informal urban structures is framed by the dichotomy found in third-world cities. The conventional city, operating within normative and legal mechanisms, has been overtaken by the growth of a parallel city that develops spontaneously, outside the established metropolitan plans. The dichotomy sets a preconceived urban reality against an improvised one: the first tied to large urban projects and the construction industry, the second attending to basic needs and responding directly to the act of building. Design in informal urban structures addresses the gap that exists between the formal city and the informal city.

The full urban incorporation pursued by Physical Rehabilitation of Informal Settlements (Habilitación Física de Asentamientos Informales) programs begins by stating infrastructural requirements in numerical and statistical terms: the number of replacement housing units; the length of water mains, sewers and street lighting; the area allocated to new roads; the number of community, health and sports centers; and the other specifications that make up a project brief. There is, however, another demand of equal importance, though less mechanical and less explicit, which calls for sensitivity to context and attention to opportunity: to exploit the potential of the urban situation as it actually presents itself, to recognize the values inherent in the informal context, to leave room for them, and to strengthen them.

The thesis takes as its case study the informal settlements of Caracas, Venezuela, which were the object of several Physical Rehabilitation projects promoted by various national and international organizations, following the methodology proposed by the World Bank.

Three conceptual figures run through the different scales contemplated at the levels of urban and architectural design intervention. The concept of the limit addresses the metropolitan scale; it is assumed that the main objective of Physical Rehabilitation projects for informal settlements is the full infrastructural incorporation of an informal sector, deprived of basic services, into the urban context of the formal city. It is argued that any effort at full urban incorporation must attend to the levels of spatial integration determined by syntactic analyses. As long as the levels of spatial segregation show the disproportion revealed by the spatial analyses, socio-economic levels will show a corresponding disproportion. Optimizing infrastructure projects begins by substantially raising the levels of spatial integration, and it is suggested that the physical boundaries which divide and exclude the informal city from the formal city hold significant potential as factors of integration.

Starting from the analysis of one Physical Rehabilitation project, the middle scale of urban design in public space is addressed. It is argued that an integral project, at all levels of intervention, can respond to the challenge of qualifying infrastructural design within Physical Rehabilitation processes: an architectural challenge that consists in providing new infrastructure while preserving the socio-physical qualities contained in the organic nature of informal space. The aim is to build a genuine informal urban space, equipped with infrastructure yet charged with its own genetic porosity, intermixing forms and functions in a public space.

Finally, the concept of chromatism addresses the architectural design required at the small scale of the architectural object. A general conclusion about the chromatism contained in this small scale can only be drawn from design understood as a process. First, the substantial community participation required by these processes must be kept in mind. Second, the importance of making the design process and the construction process visible in the built object should be stressed. An architecture that draws on informal resources itself must be devoted to preserving and expressing the architectural design process in a final result that will ultimately be only a fragment of a larger process. This thesis has pursued the essence of an informal phenomenon often dismissed as unsophisticated or of little interest to architecture, and has identified inherent values that can serve as tools to qualify physical intervention projects in the informal urban context.
72

Tecnologie di sequenziamento massivo e genomica: approfondimenti nella specie bovina / High-throughput sequencing technologies and genomics: insights into the bovine species

MILANESI, MARCO 28 January 2015 (has links)
In the last century, advanced breeding methods have increased the rate of genetic gain in cattle but, with a few exceptions, the genes and molecular functions underlying phenotypic variation are still largely unknown. In this thesis, the bovine genome was studied with high-throughput technologies, using established and innovative procedures to search for genes controlling complex traits and to support bovine breeding.
In the first part, a medium-density marker panel was used to detect selection signatures shared by dairy or by beef breeds, to identify candidate genes for specific production aptitudes, and to find genomic regions associated with production traits in dairy cattle. Genome-wide association was run using both a classic regression and an innovative gene-centric method. Regions and genes significantly associated with milk traits were specific to each breed. In the second part, data from exome sequences and high-density marker panels were combined to identify deleterious mutations in the Italian Holstein. Different approaches were combined to filter and prioritize genetic variants. A set of candidate deleterious genes was found that control basic biological mechanisms such as development and fertility. State-of-the-art bioinformatics tools were used in these investigations and, whenever necessary, new pipelines and approaches were developed.
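The "classic regression" association scan mentioned in this abstract can be illustrated with a minimal single-marker sketch. This is not the thesis's pipeline; the data are simulated and the function name, effect size and sample size are invented for illustration:

```python
import numpy as np

def snp_association(genotypes, phenotypes):
    """Single-marker regression y = mu + beta*g + e.
    Returns the estimated allele-substitution effect and its t-statistic."""
    g = np.asarray(genotypes, dtype=float)   # 0/1/2 allele counts
    y = np.asarray(phenotypes, dtype=float)
    X = np.column_stack([np.ones_like(g), g])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    # standard error of the SNP coefficient from (X'X)^-1
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=200)       # simulated genotypes
y = 0.5 * g + rng.normal(size=200)     # phenotype with a true effect of 0.5
effect, t_stat = snp_association(g, y)
```

A real genome-wide scan repeats this test (or a mixed-model variant correcting for relatedness) over every marker on the panel.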
73

Impatto e dialogo delle strade nazionali con l'ambiente: evoluzione lungo i trent'anni di realizzazione nel territorio ticinese / Impact and dialogue of the national roads with the environment: evolution over the thirty years of their construction in the Ticino territory

Sailer, Giorgio. January 1990 (has links)
Diss. rer. pol., Bern, 1989. Includes bibliographical references.
74

Theoretical and Algorithmic Solutions for Null models in Network Theory

Gobbi, Andrea January 2013 (has links)
The graph-theoretical formulation for representing the data-driven structure and dynamics of complex systems is rapidly emerging as the paramount paradigm [1] across a variety of disciplines, from economics to neuroscience, with biological -omics as a major example. In this framework, the concept of the null model, borrowed from the statistical sciences, identifies the elective strategy for obtaining a baseline point of comparison in modelling [2]. Hereafter, a null model is a graph which matches one specific graph in terms of some structural features, but which is otherwise taken to be generated as an instance of a random network. In this view, the network model introduced by Erdős and Rényi [3], where random edges are generated as independent and identically distributed Bernoulli trials, can be considered the simplest possible null model. In the following years, other null models were developed in the framework of graph theory, with the detection of community structure as one of the most important targets [4]. In particular, the model described in [5] introduces the concept of a randomized version of the original graph: edges are rewired at random, with each expected vertex degree matching the degree of the vertex in the original graph. Although aimed at building a reference for community detection, this approach will play a key role in one of the models considered in this thesis. Note that, although it was the first problem to be considered, designing null models for community structure detection is still an open problem [6, 7]. Real-world applications of null models in graph theory have also gained popularity in many different scientific areas, with ecology as the first example: see [8] for a comprehensive overview. More recently, interest in network null models arose also in computational biology [9, 10], geosciences [11] and economics [12, 13], just to name a few.
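The two null models just described, i.i.d. Bernoulli edges and degree-preserving rewiring, can be sketched for simple undirected graphs as follows. Function names and parameters are illustrative; these are not the thesis's algorithms:

```python
import random

def erdos_renyi(n, p, seed=0):
    """Simplest null model: each possible edge is an i.i.d. Bernoulli(p) trial."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def rewire(edges, n_swaps=1000, seed=0):
    """Degree-preserving randomization: pick two edges (a,b),(c,d) and swap
    endpoints to (a,d),(c,b) whenever no self-loop or multi-edge arises."""
    rng = random.Random(seed)
    e = [tuple(x) for x in edges]
    present = set(frozenset(x) for x in e)
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(e, 2)
        if len({a, b, c, d}) < 4:                 # would create a self-loop
            continue
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue                              # would create a multi-edge
        i, j = e.index((a, b)), e.index((c, d))
        present -= {frozenset((a, b)), frozenset((c, d))}
        e[i], e[j] = (a, d), (c, b)
        present |= {frozenset((a, d)), frozenset((c, b))}
    return e

g = erdos_renyi(50, 0.1)
r = rewire(g)
```

Each accepted swap leaves every vertex degree unchanged, so `r` has exactly the degree sequence of `g` while its other structure is randomized.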
In the present work the theoretical design and the practical implementation of a series of algorithms for the construction of null models are introduced, with applications ranging from functional genomics to game theory for social studies. The four chapters devoted to the presentation of the examples of null models are preceded by an introductory chapter including a quick overview of graph theory, together with all the required notation. The first null model is the topic of the second chapter, where a suite of novel algorithms is shown, aimed at the efficient generation of complex networks under different constraints on the node degrees. Although not the most important example in the thesis, the preeminent position dedicated to this topic is due to its close kinship with the aforementioned classical null models for random graph construction. Together with the definition of the algorithms and examples, a thorough theoretical analysis of the proposed solutions is shown, highlighting the improvements with respect to the state of the art and the remaining limitations. Apart from its intrinsic mathematical value, the interest in these algorithms within the systems biology community lies in the need for benchmark graphs resembling real biological networks. They are in fact of the utmost importance when testing novel inference methods, and as testbeds for network reconstruction challenges such as the DREAM series [14, 15, 16]. Chapter three includes the most complex application of null models presented in this thesis. The scientific field is again functional genomics, namely the combinatorial approach to modelling patterns of mutations in cancer as detected by Next-Generation Sequencing exome data. This problem has a natural mathematical representation in terms of the rewiring of bipartite networks and mutually exclusively mutated modules [17, 18], to which Markov chain updates (switching steps) are applied through a Switching Algorithm (SA).
Here we show some crucial improvements to the SA, we analytically derive an approximate lower bound for the number of steps required, we introduce BiRewire, an R package implementing the improved SA, and we demonstrate the effectiveness of the novel solution on a breast cancer dataset. A novel threshold-selection method for the construction of co-expression networks based on the Pearson coefficient is the third and last biological example of a null model, and it is outlined in Chapter four. Gene co-expression networks inferred by correlation from high-throughput profiling such as microarray data represent a simple but effective technique for discovering and interpreting linear gene relationships. In recent years several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard-thresholding solution based on the assumption that a co-expression network inferred from randomly generated data is expected to be empty. The theoretical derivation of the new bound by geometrical methods is shown, together with two applications in oncogenomics. The last two chapters of the thesis are devoted to the presentation of null models in non-biological contexts. In Chapter 5 a novel dynamic simulation model is introduced, mimicking a random market in which sellers and buyers follow different price distributions and matching functions. The random market is mathematically formulated as a dynamic bipartite graph, and the analytical formula for the evolution in time of the mean exchange price is derived, together with the global likelihood function for retrieving the initial parameters under different assumptions.
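The "empty null network" idea behind the threshold selection can be mimicked empirically. The thesis derives its bound analytically by geometrical methods; this Monte Carlo stand-in, with invented parameter values, only illustrates the principle:

```python
import numpy as np

def empty_network_threshold(n_samples, n_genes, n_rep=200, seed=0):
    """Empirical version of the 'empty null network' idea: take as threshold
    the largest |Pearson r| observed on purely random data, so that a network
    built from noise at this cutoff is expected to contain no edges."""
    rng = np.random.default_rng(seed)
    t = 0.0
    for _ in range(n_rep):
        x = rng.normal(size=(n_genes, n_samples))  # rows = genes, cols = samples
        r = np.corrcoef(x)
        np.fill_diagonal(r, 0.0)                   # ignore trivial self-correlation
        t = max(t, np.abs(r).max())
    return t

thr = empty_network_threshold(n_samples=10, n_genes=20)
```

With few samples the null correlations are large, so the resulting threshold is strict; this is exactly the small-sample regime the abstract identifies as critical.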
Finally, in Chapter 6 we describe how graph tools can be used to model abstraction and strategy (see [19, 20, 21]) for a class of games, in particular the TTT solitaire. We show that in this solitaire it is not possible to build an optimal (in the sense of the minimum number of moves) strategy by dividing the big problem into smaller subproblems. Nevertheless, we find some subproblems and strategies for solving the TTT solitaire with a negligible increase in the number of moves. Although quite simple and far from simulating highly complex real-world situations of decision making, the TTT solitaire is an important tool for starting the exploration of the social analysis of the trajectories of the implementation of winning strategies through different learning procedures [22].
75

Modeling the interaction of light with photonic structures by direct numerical solution of Maxwell's equations

Vaccari, Alessandro January 2015 (has links)
The present work analyzes and describes a method for the direct numerical solution of Maxwell's equations of classical electromagnetism: the FDTD (Finite-Difference Time-Domain) method, along with its implementation in an "in-house" computing code for large parallelized simulations. Both are then applied to the modelling of photonic and plasmonic structures interacting with light. These systems are often too complex, both geometrically and in their material composition, to be mathematically tractable, and an exact analytical solution, in closed form or as a series expansion, cannot be obtained. The only way to gain insight into their physical behavior is thus to seek an approximate, though convergent, numerical solution. This is a current trend in modern physics because, apart from perturbative methods and asymptotic analysis (which represent, where applicable, the typical instruments for dealing with complex physico-mathematical problems), the only general way to approach such problems is the direct approximate numerical solution of the governing equations. Today this choice is made possible by the enormous and widespread computational capabilities offered by modern computers, in particular High Performance Computing (HPC) on parallel machines with a large number of CPUs working concurrently. Computer simulations are now a sort of virtual laboratory, which can be set up rapidly and cheaply to investigate various physical phenomena; computational physics has thus become a third way between the experimental and theoretical branches. The plasmonics application of the present work concerns the scattering and absorption analysis of single and arrayed metal nanoparticles, when surface plasmons are excited by an impinging beam of light, in order to study the radiation distribution inside a silicon substrate behind them. This has potential applications in improving the efficiency of photovoltaic cells.
The photonics application of the present work concerns the analysis of the optical reflectance and transmittance properties of an opal crystal. This is a regular, ordered lattice of macroscopic particles which can stop light propagation in certain wavelength bands, and whose study has potential applications in the realization of low-threshold lasers, optical waveguides and sensors. For the latter, in fact, the crystal response is tuned to the structure's parameters and symmetry, and varies as they vary. The present work on the FDTD method represents an enhancement of previous work done for my MSc Degree Thesis in Physics, now also geared toward the visible and neighboring parts of the electromagnetic spectrum. It is organized in the following fashion. Part I provides an exposition of the basic concepts of electromagnetism which constitute the minimum, though partial, theoretical background needed to formulate the physics of the systems analyzed here, or to be analyzed in possible further developments of the work. It summarizes Maxwell's equations in matter and the time-domain description of temporally dispersive media. It also addresses the plane-wave representation of an electromagnetic field distribution, mainly the far-field one. The Kirchhoff formula is described and derived, in order to calculate the angular radiation distribution around a scatterer. Gaussian beams in the paraxial approximation are also treated briefly, along with their focalization by means of an approximate diffraction formula useful for their numerical FDTD representation. Finally, a thorough description of planarly multilayered media is included, which can play an important ancillary role in the homogenization procedure of a photonic crystal, as described in Part III, but also in other optical analyses. Part II concerns the FDTD numerical method proper, its description and implementation.
Various aspects of the method are treated, which together contribute to a working and robust overall algorithm. Particular emphasis is given to those parts representing an enhancement of previous work. These are: the analysis from the existing literature of a new class of absorbing boundary conditions, the so-called Convolutional Perfectly Matched Layer, and their implementation; the analysis from the existing literature and the implementation of the Auxiliary Differential Equation Method for the inclusion of media with frequency-dependent electric permittivity, according to various general polarization models; the description and implementation of a "plane wave injector" for representing impinging beams of light propagating in an arbitrary direction, which can also be used to represent, by superposition, focalized beams; and the parallelization of the FDTD numerical method by means of the Message Passing Interface (MPI) which, using the suitable user-defined MPI data structures proposed here, results in a robust and scalable code running on massively parallel High Performance Computing machines such as the IBM BlueGene/Q with a core count of order 2×10^5. Finally, Part III gives the details of the specific plasmonics and photonics applications made with the "in-house" FDTD code, to demonstrate its effectiveness. After Chapter 10, devoted to the validation of the FDTD code implementation against a known solution, Chapter 11 is about plasmonics, with the analytical and numerical study of single and arrayed metal nanoparticles of different shapes and sizes, when surface plasmons are excited on them by a light beam. The presence of a passivating embedding silica layer and a silicon substrate is also included. Chapter 12 is about the FDTD modelling of a face-centered cubic (FCC) opal photonic crystal sample, with a comparison between the numerical and experimental transmittance/reflectance behavior.
A homogenization procedure for the discontinuous lattice structure of the crystal is suggested, by means of an averaging procedure and a planarly multilayered media analysis, through which the reflective characteristics of the crystal sample can be better understood. Finally, a procedure is proposed for the numerical reconstruction of the crystal's banded ω-k dispersion curve inside the first Brillouin zone. Three appendices providing details on specific points dealt with during the exposition conclude the work.
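The core of any FDTD code is the leapfrog update of the Yee scheme. A minimal one-dimensional vacuum sketch follows (normalized units, hard boundaries, invented parameters; not the parallel 3D code described in this abstract):

```python
import numpy as np

def fdtd_1d(steps=400, n=200, src=50):
    """Minimal 1D FDTD (Yee) sketch in vacuum, normalized units.
    E and H live on staggered grids and are updated in leapfrog fashion."""
    ez = np.zeros(n)          # electric field
    hy = np.zeros(n)          # magnetic field
    c = 0.5                   # Courant factor dt*c0/dx (< 1 for stability)
    for t in range(steps):
        hy[:-1] += c * (ez[1:] - ez[:-1])          # update H from the curl of E
        ez[1:] += c * (hy[1:] - hy[:-1])           # update E from the curl of H
        ez[src] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source
    return ez

field = fdtd_1d()
```

Absorbing boundaries (such as the Convolutional Perfectly Matched Layer), dispersive media and the plane-wave injector mentioned above are all refinements layered on top of this basic update loop.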
76

Exponential integrators: tensor structured problems and applications

Cassini, Fabio 21 April 2023 (has links)
The solution of stiff systems of Ordinary Differential Equations (ODEs), which typically arise after spatial discretization of many important evolutionary Partial Differential Equations (PDEs), constitutes a topic of wide interest in numerical analysis. A prominent way to numerically integrate such systems is to use exponential integrators. In general, these schemes do not require the solution of (non)linear systems, but rather the action of the matrix exponential and of some specific exponential-like functions (known in the literature as φ-functions). In this PhD thesis we present efficient tensor-based tools to approximate such actions, both from a theoretical and from a practical point of view, when the problem has an underlying Kronecker sum structure. Moreover, we investigate the application of exponential integrators to compute numerical solutions of important equations in various fields, such as plasma physics, mean-field optimal control and computational chemistry. In each case we provide several numerical examples and perform extensive simulations, exploiting modern hardware architectures such as multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). The results globally show the effectiveness and the superiority of the different approaches proposed.
77

Mathematical models for host-parasitoid interactions and biological control of Drosophila suzukii

Pfab, Ferdinand January 2017 (has links)
This thesis treats mathematical models for host-parasitoid interactions. It is composed of three parts. The first part analyzes a class of such models theoretically, focusing on the phenomenon of multiple coexistence equilibria of competing parasitoid species. The second part develops a model for determining how a parasitoid release should be timed to optimally control the invasive fruit fly Drosophila suzukii. The third part analyzes an experiment on releasing parasitoids in a greenhouse infested by D. suzukii. The models presented are used to discuss how to improve such biological control strategies.
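A standard starting point for host-parasitoid models of this kind is the classic Nicholson-Bailey map. The sketch below, with illustrative parameter values rather than the thesis's calibrated models, iterates it for a few generations:

```python
import math

def nicholson_bailey(H0, P0, lam=2.0, a=0.05, c=1.0, steps=20):
    """Classic Nicholson-Bailey host-parasitoid map:
       H_{t+1} = lam * H_t * exp(-a * P_t)        (hosts escaping parasitism)
       P_{t+1} = c * H_t * (1 - exp(-a * P_t))    (parasitized hosts)."""
    H, P = H0, P0
    traj = [(H, P)]
    for _ in range(steps):
        f = math.exp(-a * P)      # fraction of hosts escaping parasitism
        H, P = lam * H * f, c * H * (1 - f)
        traj.append((H, P))
    return traj

traj = nicholson_bailey(25.0, 10.0)
```

The map's well-known diverging oscillations are one reason richer models (stage structure, multiple parasitoid species, release timing) are needed for realistic biological control questions like those addressed in the thesis.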
78

Numerical Methods for Optimal Control Problems with Application to Autonomous Vehicles

Frego, Marco January 2014 (has links)
In the present PhD thesis a suite of optimal control problems is proposed as a benchmark for testing numerical solvers. The problems are divided into four categories: classic, singular, constrained and hard problems. Apart from the hard problems, where it is not possible to give the analytical solution but only some details of it, all problems are supplied with the derivation of their solution. The exact solution allows a precise comparison of the performance of the considered software. All of the proposed problems were taken from published papers or books, but it turned out that an analytical exact solution was only rarely provided, so a true and reliable comparison among numerical solvers could not be made before. A typical wrong conclusion, when a solver obtains a lower value of the target functional than other solvers, is to claim it better than the others, without recognizing that it has merely underestimated the true value. This thesis also shows a cutting-edge application of optimal control to vehicles: the optimization of the lap time on a race circuit, considering a number of realistic constraints. A new path-planning algorithm is described in full for the construction of a quasi-G2 fitting of GPS data with a clothoid spline, in terms of the G1 Hermite interpolation problem. The present algorithm is shown to outperform state-of-the-art algorithms in both efficiency and precision.
79

The importance of climatic and ecological factors for vector-borne infections: Culex pipiens and West Nile virus

Marini, Giovanni January 2017 (has links)
About three quarters of emerging human infectious diseases are caused by zoonotic pathogens, and many of them are spread by vectors such as mosquitoes. Mathematical models nowadays represent powerful tools for investigating and making predictions about biological dynamical systems, providing insights that can be extremely valuable for several purposes. In this thesis we focus on a particular mosquito-borne zoonosis, West Nile virus (WNV), a flavivirus of emerging public health relevance in Europe and North America, and on its main European vector, Culex pipiens mosquitoes. As the transmission of mosquito-borne diseases is largely driven by the abundance of the vector, understanding the population dynamics of existing vector populations, and how they depend on biotic and environmental factors, is crucial for designing appropriate control strategies. This thesis presents new mathematical models that provide insights into several aspects of mosquito population dynamics, using different statistical and computational approaches, including linear models and Markov chain Monte Carlo techniques. Specifically, they study the effect of biotic and abiotic factors on Cx. pipiens dynamics using adult mosquito trapping data, gathered over several years in Northern Italy, to feed the theoretical models. Furthermore, the effects of host competition and vector feeding preferences on the dynamics of a vector-borne infection such as WNV are investigated in a more theoretical study.
80

Some optimal visiting problems: from a single player to a mean-field type model

Marzufero, Luciano 19 July 2022 (has links)
In an optimal visiting problem, we want to control a trajectory that has to pass as close as possible to a collection of target points or regions. We introduce a hybrid control-based approach for the classic problem, in which the trajectory can switch between a group of discrete states related to the targets of the problem. The model is then adapted to a mean-field game framework, that is, when a huge population of agents plays the optimal visiting problem with controlled dynamics and with costs that also depend on the distribution of the population. In particular, we investigate a single continuity equation with possible sinks and sources, and with the field possibly depending on the mass of the agents. The same problem is also studied in a network framework. More precisely, we study a mean-field game model by proving the existence of a suitably defined approximate mean-field equilibrium, and we then address the passage to the limit.
