41

Role of the Virtual Stakeholders in the Search of a Balance between Environment, Economy and Society in the Policy Choices Management

Tomasello, Bruno <1985> January 1900 (has links)
The PhD thesis develops a pre-quantitative methodology to define the direct and indirect effects of policy choices concerning the environment. Its main purpose has been to contribute to solving some of the problems affecting expert-based judgements: in particular, the dependence on the observer’s goals and beliefs (Reflexivity) and the possibility that an extraneous, non-sustainable solution becomes prevalent and accepted (Perspectivity). The "Policy Choices Analysis and Synthesis System 42" (PoChASSy42) methodology provides a framework that describes the environment through all the possible interactions between: "Individual Interests", identified with the normative values of the Universal Declaration of Human Rights; "Collective Interests", identified through the economic activities classified by the NACE system of Eurostat; and "Ecosystem Services", identified through the Millennium Ecosystem Assessment studies. The elements making up these spheres of interest are treated as virtual stakeholders, i.e. representatives of the existences, intentions, motivations and interests that constitute the environment. Both the virtual stakeholders and the possible interactions among them are represented in PoChASSy42 through an Adjacency Matrix and a Narrative Structure (a graph). Using these tools, the expert can identify the virtual stakeholders involved in a given policy choice and exclude, stating the motivations, those not present in the particular geographic context. A final report of the results allows for transparent communication, which is fundamental in order to counteract the side effects (Perspectivity and Reflexivity) of a reductionist vision of the environment and to accept the challenge of complexity.
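A minimal sketch of how such an adjacency-matrix representation of virtual stakeholders could be held in code; the stakeholder names and interactions below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Illustrative virtual stakeholders from the three spheres of interest:
# individual interests (UDHR values), collective interests (NACE activities),
# ecosystem services (Millennium Ecosystem Assessment). Names are examples only.
stakeholders = [
    "right_to_health",        # individual interest
    "right_to_work",          # individual interest
    "agriculture_NACE_A",     # collective interest
    "manufacturing_NACE_C",   # collective interest
    "fresh_water_provision",  # ecosystem service
    "climate_regulation",     # ecosystem service
]
idx = {name: i for i, name in enumerate(stakeholders)}

# Boolean adjacency matrix: A[i, j] = True if stakeholder i interacts with j.
A = np.zeros((len(stakeholders), len(stakeholders)), dtype=bool)

def link(a, b):
    """Record a (symmetric) interaction between two virtual stakeholders."""
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = True

link("agriculture_NACE_A", "fresh_water_provision")
link("agriculture_NACE_A", "right_to_work")
link("manufacturing_NACE_C", "climate_regulation")
link("climate_regulation", "right_to_health")

def involved(stakeholder, excluded=()):
    """Stakeholders interacting with the one touched by a policy choice,
    minus those the expert excluded (with stated motivations) as absent
    from the geographic context under study."""
    row = A[idx[stakeholder]]
    return [s for s in stakeholders if row[idx[s]] and s not in excluded]

print(involved("agriculture_NACE_A", excluded={"right_to_work"}))
# -> ['fresh_water_provision']
```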
42

Metodi predittivi per Adaptive Radiation Therapy: effetti del movimento d'organo, degli algoritmi di registrazione deformabile e dell'accumulo di dose / Predictive methods for Adaptive Radiation Therapy: effects of the organ motion, of the deformable registration algorithms and of the dose accumulation.

Guidi, Gabriele <1974> 06 April 2016 (has links)
This research was funded by the Italian Ministry of Health, Young Researchers 2010 call (GR-2010-2318757), "Dose warping methods for IGRT and Adaptive RT: dose accumulation based on organ motion and anatomical variations of the patients during radiation therapy treatments". The research developed predictive methods for Adaptive Radiation Therapy. During treatment planning and over the course of radiotherapy sessions, the patient undergoes intra- and inter-fraction anatomical and functional variations at both macro and micro scales; factors such as organ motion and morphological change can influence the therapeutic program. Advanced systems for dose calculation and accumulation allow the delivered doses to be recorded while taking local and global variations into account, but deploying unlimited technological and human resources to verify, moment by moment, the dose delivered to each patient would be unthinkable in clinical practice. Predictive models based on neural networks or epidemiological approaches therefore contribute to patient monitoring through physical and statistical methods. Organ motion, anatomical variations, deformable image registration and dose accumulation were evaluated as the principal elements and effects in the use of predictive methods. These factors require validation and development through Bayesian analysis and experimental measurements that confirm the quality and accuracy of the algorithms. The development and use of dose-accumulation methods and the verification of delivered doses, accounting for motion and deformation, led to the construction of robotic prototypes, built with LEGO®, for in-vivo dosimetry and motion assessment. The development of neural networks, epidemiological methods and Support Vector Machines made it possible, through a data-mining project, to extend the methodology to centres nationwide. The research highlights the critical aspects of predictive methods and demonstrates the efficiency of neural networks and epidemiological models in advanced treatments of head and neck, prostate, pancreas and lung cancer.
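As a hedged sketch of how a predictive classifier of this kind could be set up (the thesis names neural networks among its methods, but the binary "needs adaptation" target, the feature names and the synthetic data below are assumptions for illustration only):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic per-fraction features (assumed for illustration):
# mean organ displacement [mm], volume change [%], dose deviation [%].
n = 500
X = np.column_stack([
    rng.normal(3.0, 2.0, n),   # displacement
    rng.normal(0.0, 5.0, n),   # volume change
    rng.normal(0.0, 3.0, n),   # dose deviation vs. plan
])
# Toy label: plan adaptation "needed" when the combined deviation is large.
y = (0.5 * np.abs(X[:, 0]) + 0.3 * np.abs(X[:, 1]) + np.abs(X[:, 2]) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(scaler.transform(X_te), y_te))
```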
43

Sviluppo di tecniche per la progettazione delle reti di monitoraggio della qualità dell'aria / Development of methodologies for the design of air quality monitoring networks

Marinello, Samuele <1983> 05 May 2014 (has links)
The objective of this work is to implement an operational methodology for the design of air quality monitoring networks and of monitoring campaigns carried out with a mobile laboratory, optimizing the positions of the sampling devices with respect to different objectives and selection criteria. A review and analysis of the approaches and guidance provided by the relevant legislation and by the scientific literature led to a methodological approach consisting of two main operational phases, which was applied to a case study covering the territory of the province of Ravenna. The implemented methodology integrates several tools supporting the assessment of air quality and of the effects that air pollutants can have on specific sensitive receptors (resident population, vegetation, material assets). In particular, it combines the disaggregation of emission inventories through proxy variables, modelling tools for simulating the dispersion of pollutants in the atmosphere, and algorithms that allocate the monitoring instruments by maximizing (or minimizing) specific objective functions. The allocation procedure has been automated in a software tool whose graphical query interface identifies the optimal areas in which to carry out the different monitoring campaigns.
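A minimal sketch of the kind of allocation algorithm described, greedy placement of monitoring devices maximizing a coverage objective over candidate cells; the grid, the concentration field and the coverage radius are invented for illustration, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed inputs: a modelled pollutant concentration field over a grid of
# candidate cells (random here; in the thesis it would come from dispersion
# modelling) and a receptor weight per cell (e.g. resident population).
concentration = rng.random((20, 20))
population = rng.random((20, 20))
score = concentration * population  # objective contribution of covering a cell

def covered(site, radius=2):
    """Cells within `radius` (Chebyshev distance) of a monitoring site."""
    r, c = site
    rows, cols = np.ogrid[:20, :20]
    return (np.abs(rows - r) <= radius) & (np.abs(cols - c) <= radius)

def greedy_allocation(n_sites):
    """Place sites one at a time, each maximizing the not-yet-covered score."""
    remaining = score.copy()
    sites = []
    for _ in range(n_sites):
        gains = [remaining[covered((r, c))].sum()
                 for r in range(20) for c in range(20)]
        best = np.unravel_index(np.argmax(gains), (20, 20))
        sites.append(best)
        remaining[covered(best)] = 0.0  # covered cells no longer contribute
    return sites

print(greedy_allocation(3))
```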
44

Approccio multidisciplinare per le valutazioni ambientali: Problematiche e sinergie in un uso combinato delle metodologie Life Cycle Assessment (LCA) e Risk Assessment (RA) / Multidisciplinary approach for environmental evaluation: issues and synergisms in combining Life Cycle Assessment (LCA) and Risk Assessment (RA)

Barberio, Grazia <1976> 05 May 2014 (has links)
This thesis develops a framework for the combined use of two impact assessment methodologies, Life Cycle Assessment (LCA) and Risk Assessment (RA), for emerging technologies. The originality of the study lies in having both proposed the framework and applied it to a case study: an innovative refrigeration technology based on nanofluids (NF), developed by partners of the European project NanoHex (enhanced nano-fluid heat exchange), who collaborated in particular on the inventory of the necessary data. The complexity of the study stems both from the difficult integration of two methodologies conceived and structured for different purposes, and from the application sector which, although rapidly expanding, suffers from major information gaps concerning production processes and the behaviour of the substances involved. The framework comprises four steps: definition of the technological system; data collection; risk evaluation and impact quantification; interpretation of the results. It was applied to the production of an alumina nanofluid via two production routes ("single-stage" and "two-stage") in order to assess and compare the impacts on human health and the environment. The LCA was quantitative but did not consider the impacts of nanomaterials in the toxicity categories. Owing to the lack of toxicological and exposure parameters, the RA was qualitative and focused on workers, under the assumption that releases to the environment during the production phase are negligible; workplace inhalation risks were prioritized with a dedicated software tool, Stoffenmanager Nano. Goals of the study were to analyse the hotspots and to highlight possible trade-offs between the results of the LCA, which identifies the processes with the best environmental performance, and those of the RA, which identifies the scenarios with the highest risk for workers. Although they have different aims, LCA and RA play complementary roles in describing the impacts of products, substances and technologies: their combined use can overcome the limits of each and offers a wider view of the problems, better supporting the decision-making process.
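A toy sketch of the qualitative risk banding such control-banding tools perform; the bands and the lookup rule below are invented for illustration and are not the actual Stoffenmanager Nano algorithm:

```python
# Control-banding style prioritization: combine a hazard band and an
# exposure band into a risk priority class. Bands and thresholds are
# illustrative assumptions, not Stoffenmanager Nano's real scheme.
HAZARD_BANDS = ["A", "B", "C", "D", "E"]   # A = lowest hazard
EXPOSURE_BANDS = [1, 2, 3, 4]              # 1 = lowest exposure

def risk_priority(hazard: str, exposure: int) -> str:
    """Map (hazard band, exposure band) to a priority class."""
    s = HAZARD_BANDS.index(hazard) + EXPOSURE_BANDS.index(exposure)
    if s <= 2:
        return "low priority"
    if s <= 4:
        return "medium priority"
    return "high priority"

# e.g. a synthesis step handling alumina nanopowder (assumed bands):
print(risk_priority("D", 3))   # -> 'high priority'
```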
45

Applicazione di sistemi di gestione ambientale alla scala locale / Application of environmental management systems at the local scale

Marazza, Diego <1970> 30 May 2007 (has links)
No description available.
46

Optical concentrators for photovoltaic use

Pancotti, Lorenzo <1977> 17 May 2007 (has links)
No description available.
47

Electrical activity in neurons exposed to low level electromagnetic fields: theory and experiments

Mesirca, Pietro <1972> 17 May 2007 (has links)
No description available.
48

Sviluppo di un tomografo multi-energy per lo studio pre-clinico di nuove metodiche diagnostiche finalizzate al riconoscimento precoce della patologia tumorale / Development of a multi-energy CT scanner for the pre-clinical study of new diagnostic methods aimed at the early detection of tumors

Masetti, Simone <1970> 12 June 2008 (has links)
A new multi-energy CT scanner for small animals is being developed at the Physics Department of the University of Bologna, Italy. The system makes use of a set of quasi-monochromatic X-ray beams with energy tunable in a range from 26 keV to 72 keV, produced by Bragg diffraction on a Highly Oriented Pyrolytic Graphite crystal. With quasi-monochromatic sources it is possible to perform multi-energy investigations more effectively than with conventional X-ray tubes. Multi-energy techniques allow physical information to be extracted from the materials, such as effective atomic number, mass thickness and density, which can be used to distinguish and quantitatively characterize the irradiated tissues. The aim of the system is the investigation and development of new pre-clinical methods for the early detection of tumors in small animals. An innovative technique, Triple-Energy Radiography with Contrast Medium (TER), has been successfully implemented on the system. TER consists in combining a set of three quasi-monochromatic images of an object in order to obtain a corresponding set of three single-tissue images, namely the mass-thickness maps of three reference materials. TER can be applied to the quantitative reconstruction of the mass-thickness map of a contrast medium, because it completely removes the signal due to the other tissues (i.e. the structural background noise). The technique is very sensitive to the contrast medium and insensitive to the superposition of different materials, making it a good candidate for the early detection of tumor angiogenesis in mice. This work describes the tomographic system, with a particular focus on the quasi-monochromatic source; the TER method is then presented together with some preliminary results on small-animal imaging.
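Per pixel, a decomposition of this kind reduces to a small linear system: at three energies, the measured attenuation line integral is (to first approximation) a weighted sum of the mass thicknesses of three reference materials. A sketch of that step, with invented mass attenuation coefficients standing in for tabulated (mu/rho) values:

```python
import numpy as np

# Rows: the three beam energies; columns: the three reference materials
# (e.g. soft tissue, bone, iodine contrast medium). The (mu/rho) values
# [cm^2/g] below are invented placeholders, not tabulated data.
mu_rho = np.array([
    [0.35, 0.90, 12.0],   # low energy
    [0.25, 0.45,  6.0],   # middle energy
    [0.20, 0.30,  2.5],   # high energy
])

# Measured attenuation -ln(I/I0) at the three energies for one pixel.
attenuation = np.array([1.10, 0.62, 0.41])

# Solve mu_rho @ t = attenuation for the mass thicknesses t [g/cm^2];
# repeating this for every pixel yields the three single-tissue maps.
t = np.linalg.solve(mu_rho, attenuation)
print("mass thicknesses (tissue, bone, contrast):", t)
```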
49

New approaches to open problems in gene expression microarray data

Marconi, Daniela <1979> 12 June 2008 (has links)
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies or clinical outcomes, and are hybridized to microarrays in an effort to find a small number of genes strongly correlated with the groups. Even though the analysis methods are today well developed and close to reaching a standard organization (through the efforts of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to come across a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at open problems posed by clinicians in specific experimental designs.

Chapter 1 starts from a necessary biological introduction, then reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality controls to the preprocessing steps used in the data analyses in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing most of their problems.

Chapter 3 introduces a method addressing the issue of unbalanced designs in microarray experiments. Experimental design is a crucial starting point for obtaining reasonable results: in a two-class problem, an equal or similar number of samples should be collected for the two classes. In some cases, however, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue with a modified version of SAM [2]. MultiSAM consists in a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application; after the 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although the standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method addressing similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4]. Looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only thing that matters: in some cases similarities can give useful and, sometimes even more, important information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [4] could be a possible solution to the limitations of standard supervised classification. RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM); among these, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue, a highly important but often overlooked option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal was to find a model able to separate G1 from G3, and then to evaluate the third class, G2, as a test set, obtaining the probability for each G2 sample of belonging to class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
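A compact sketch of the MultiSAM resampling scheme on synthetic data; Welch's t-test is used here as a stand-in for the SAM statistic, and the sizes and thresholds are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic expression matrix: 1,000 probes; 8 samples in the less
# populated class (LPC) vs. 80 in the more populated class (MPC).
n_probes, n_lpc, n_mpc = 1000, 8, 80
lpc = rng.normal(0.0, 1.0, (n_probes, n_lpc))
mpc = rng.normal(0.0, 1.0, (n_probes, n_mpc))
mpc[:50] += 1.5            # 50 probes truly differentially expressed

score = np.zeros(n_probes, dtype=int)
for _ in range(1000):
    # Random subsample of the MPC matched in size to the LPC.
    cols = rng.choice(n_mpc, size=n_lpc, replace=False)
    # Stand-in differential-expression test (SAM in the dissertation).
    _, p = stats.ttest_ind(lpc, mpc[:, cols], axis=1, equal_var=False)
    score += (p < 0.01).astype(int)   # probe recurs in this iteration's list

# Probes recurring in more than 300 of the 1,000 lists, as in the
# reported analysis.
print("selected probes:", np.flatnonzero(score > 300)[:10], "...")
```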
50

Computational methods for genome screening

Montanucci, Ludovica <1978> 12 June 2008 (has links)
Motivation: A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new genome sequencing technologies in recent years has opened fundamental new problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from genome projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work has been the implementation of predictive methods that allow information on the properties of genomes and proteins to be extracted from nucleotide and amino acid sequences, by taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale comparison of 599 organisms is described: 2.6 million sequences coming from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, showing that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, with an overall accuracy of 95% in discriminating thermophilic coding sequences; this result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability, an issue of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine based method has been trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence; the developed predictor achieves 88% accuracy.
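An illustrative sketch of that last step, an SVM trained to predict whether a mutation set enhances thermostability; the features (changes in amino acid composition and charge) and the synthetic labels are assumptions, not the dissertation's actual descriptors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Assumed per-mutation-set features: change in hydrophobic fraction,
# change in charged-residue fraction, number of mutations.
n = 400
X = np.column_stack([
    rng.normal(0.0, 0.05, n),
    rng.normal(0.0, 0.05, n),
    rng.integers(1, 6, n).astype(float),
])
# Toy labelling rule standing in for real thermostability measurements.
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```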
