81

Silicon nanocrystals downshifting for photovoltaic applications

Sgrignuoli, Fabrizio January 2013
In conventional silicon solar cells, the collection probability of light-generated carriers drops in the high-energy range 280-400 nm. One method to reduce this loss is to implement nanometre-sized semiconductors on top of the solar cell, where high-energy photons are absorbed and re-emitted at lower energy. This effect, called luminescent down-shifting (LDS), modifies the incident solar spectrum and enhances the energy conversion efficiency of the cell. We investigate this effect using silicon nanoparticles dispersed in a silicon dioxide matrix as the active material. In particular, I propose to model these structures using a transfer matrix approach to simulate their optical properties, in combination with a 2D device simulator to estimate the electrical performance. Based on the optimized layer sequences, high-efficiency cells with silicon quantum dots as the active layer were produced within the European project LIMA. Experimental results demonstrate the validity of this approach by showing an enhancement of the short-circuit current density of up to 4%. In addition, a new configuration was proposed to improve solar cell performance: the silicon nanoparticles are placed on a cover glass rather than directly on the silicon cell. The aim of this study was to separate the silicon nanocrystal (Si-NC) layer from the cell, so that the solar device is not affected by the Si-NC layer during fabrication, i.e. the surface passivation quality of the cell remains unaffected after the application of the LDS layer. Using this approach, the down-shifting contribution can be quantified separately from the passivation effect, unlike in the previous method based on depositing the Si-NCs directly on the solar devices. By a suitable choice of the dielectric structures, an improvement in short-circuit current of up to 1% due to the LDS effect is demonstrated and simulated.
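As a side note for the technically inclined reader, the transfer-matrix approach mentioned in this abstract can be illustrated with a minimal sketch (normal incidence, characteristic 2x2 matrices; the layer data and substrate index below are made-up placeholders, not the thesis' actual stack):

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a thin-film stack via characteristic matrices."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness of one layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])                   # stack + substrate amplitudes
    r = (n_in * B - C) / (n_in * B + C)                 # amplitude reflection coefficient
    return abs(r) ** 2

# Hypothetical 100 nm SiO2-like film (n = 1.46) on silicon with an illustrative
# complex index; wavelengths in nm, spanning the range discussed above.
for wl in (300, 350, 400):
    print(wl, round(stack_reflectance([1.46], [100.0], 1.0, 5.0 + 0.3j, wl), 3))
```

Combined with a device simulator for the electrical side, sweeping such a calculation over layer thicknesses is what an optimization of the layer sequence amounts to.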
82

Progress of Monte Carlo methods in nuclear physics using EFT-based NN interaction and in hypernuclear systems.

Armani, Paolo January 2011
In this thesis I report the work of my PhD, which treated two different topics linked by a third: the computational method used to solve them. I worked on EFT theories for nuclear systems and on hypernuclei, attempting to compute the ground-state properties of both systems using Monte Carlo methods. In the first part of the thesis I briefly describe the Monte Carlo methods that I used: the VMC (Variational Monte Carlo), DMC (Diffusion Monte Carlo), AFDMC (Auxiliary Field Diffusion Monte Carlo) and AFQMC (Auxiliary Field Quantum Monte Carlo) algorithms. I also report some new improvements to these methods that I tried or suggested, notably the fixed-hypernode extension (§ 2.6.2) of the DMC algorithm and the inclusion of the L2 term (§ 3.10) and of the exchange term (§ 3.11) in the AFDMC propagator. The last two are based on the same idea used by K. Schmidt to include the spin-orbit term in the AFDMC propagator (§ 3.9). We mainly use the AFDMC algorithm, but at the end of the first part I also describe the AFQMC method. This is quite similar in principle to AFDMC, but it has never been used for nuclear systems; moreover, some of its details give us hope of overcoming with AFQMC certain limitations that we encountered in the AFDMC algorithm. However, we do not report any results for the AFQMC algorithm, because we started implementing it only in the final months and our code still requires extensive testing and debugging. In the second part I report our attempt to describe the nucleon-nucleon interaction using EFT theory within the AFDMC method. I explain all our tests to solve the ground state of a nucleus within this method, and I also show the problems that we found and our attempts to overcome them before leaving this project. In the third part I report our work on hypernuclei: we tried to fit part of the ΛN interaction and to compute the Λ-hyperon separation energy of hypernuclei. Although we found some good and encouraging results, we noticed that the error introduced by the fixed-phase approximation used in the AFDMC algorithm was not as small as assumed. Because of that, in order to obtain interesting results, we need to improve this approximation or to use a better method; hence we looked to the AFQMC algorithm, aiming to reach good results quickly.
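For readers unfamiliar with these methods, the VMC algorithm named above reduces, in its simplest form, to Metropolis sampling of |ψ|² and averaging of the local energy. A toy sketch for the 1D harmonic oscillator (not the nuclear code used in the thesis) follows:

```python
import numpy as np

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=0):
    """Metropolis VMC for the 1D harmonic oscillator, trial psi(x) = exp(-alpha x^2)."""
    rng = np.random.default_rng(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis test on |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2), with hbar = m = omega = 1
        e_sum += alpha + x**2 * (0.5 - 2.0 * alpha**2)
    return e_sum / n_steps

for a in (0.3, 0.5, 0.7):   # the exact ground state is alpha = 0.5, E = 0.5
    print(a, vmc_energy(a))
```

DMC and AFDMC build on similar stochastic machinery but project toward the ground state through imaginary-time propagation.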
83

A new approach to optimal embedding of time series

Perinelli, Alessio 20 November 2020
The analysis of signals stemming from a physical system is crucial for the experimental investigation of the underlying dynamics that drives the system itself. The field of time series analysis comprises a wide variety of techniques developed with the purpose of characterizing signals and, ultimately, of providing insights on the phenomena that govern the temporal evolution of the generating system. A renowned example in this field is given by spectral analysis: the use of Fourier or Laplace transforms to bring time-domain signals into the more convenient frequency space makes it possible to disclose the key features of linear systems. A more complex scenario arises when nonlinearity intervenes in a system's dynamics. Nonlinear coupling between a system's degrees of freedom brings about interesting dynamical regimes, such as self-sustained periodic (though anharmonic) oscillations ("limit cycles"), or quasi-periodic evolutions that exhibit sharp spectral lines while lacking strict periodicity ("limit tori"). Among the consequences of nonlinearity, the onset of chaos is definitely the most fascinating one. Chaos is a dynamical regime characterized by unpredictability and lack of periodicity, despite being generated by deterministic laws. Signals generated by chaotic dynamical systems appear irregular: the corresponding spectra are broad and flat, prediction of future values is challenging, and evolutions within the systems' state spaces converge to strange attractor sets with noninteger dimensionality. Because of these properties, chaotic signals can be mistakenly classified as noise if linear techniques such as spectral analysis are used. The identification of chaos and its characterization require the assessment of dynamical invariants that quantify the complex features of a chaotic system's evolution. For example, Lyapunov exponents provide a marker of unpredictability; the estimation of attractor dimensions, on the other hand, highlights the unconventional geometry of a chaotic system's state space. Nonlinear time series analysis techniques act directly within the state space of the system under investigation. Experimentally, however, full access to a system's state space is not always available: often, only a scalar signal stemming from the dynamical system can be recorded, thus providing, upon sampling, a scalar sequence. Nevertheless, by virtue of a fundamental theorem by Takens, it is possible to reconstruct a proxy of the original state-space evolution out of a single scalar sequence. This reconstruction is carried out by means of the so-called embedding procedure: m-dimensional vectors are built by picking successive elements of the scalar sequence delayed by a lag L. However, besides posing some necessary conditions on the integer embedding parameters m and L, Takens' theorem does not provide any clue on how to choose them correctly. Although many optimal embedding criteria have been proposed, a general answer to the problem is still lacking. As a matter of fact, conventional methods for optimal embedding are flawed by several drawbacks, the most relevant being the need for a subjective evaluation of the outcomes of applied algorithms. Tackling the issue of optimally selecting embedding parameters makes up the core topic of this thesis work. In particular, I will discuss a novel approach pursued by our research group that led to the development of a new method for the identification of suitable embedding parameters.
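For concreteness, the embedding construction just described takes only a few lines (a generic sketch; the values of m and L here are arbitrary rather than optimal):

```python
import numpy as np

def delay_embed(x, m, L):
    """Build m-dimensional delay vectors (x[i], x[i+L], ..., x[i+(m-1)L])."""
    n_vectors = len(x) - (m - 1) * L
    return np.array([x[i : i + (m - 1) * L + 1 : L] for i in range(n_vectors)])

x = np.sin(0.2 * np.arange(1000))   # toy scalar sequence
V = delay_embed(x, m=3, L=7)        # one embedding choice among many
print(V.shape)                      # (986, 3)
```

The whole difficulty addressed in this work lies in deciding which (m, L) pairs yield a faithful reconstruction.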
Unlike most conventional approaches, which seek a single optimal value of m and L to embed an input sequence, our approach provides a set of embedding choices that are equally suitable to reconstruct the dynamics. The suitability of each embedding choice (m, L) is assessed by relying on statistical testing, thus providing a criterion that does not require a subjective evaluation of outcomes. The starting point of our method is the embedding-dependent correlation integral, i.e. the cumulative distribution of embedding-vector distances, built out of an input scalar sequence. In the case of Gaussian white noise, an analytical expression for correlation integrals is available, and, by exploiting this expression, a gauge transformation of distances is introduced to provide a more convenient representation of correlation integrals. Under this new gauge, it is possible to test, in a computationally undemanding way, whether an input sequence is compatible with Gaussian white noise and, subsequently, whether the sequence is compatible with the hypothesis of an underlying chaotic system. These two statistical tests allow ruling out embedding choices that are unsuitable to reconstruct the dynamics. The estimation of the correlation dimension, carried out by means of a newly devised estimator, makes up the third stage of the method: sets of embedding choices that provide uniform estimates of this dynamical invariant are deemed suitable to embed the sequence. The method was successfully applied to synthetic and experimental sequences, providing new insight into the longstanding issue of optimal embedding. For example, the relevance of the embedding window (m-1)L, i.e. the time span covered by each embedding vector, is naturally highlighted by our approach. In addition, our method provides some information on the adequacy of the sampling period used to record the input sequence. The method also correctly distinguishes a chaotic sequence from surrogate sequences generated out of it that have the same power spectrum. The technique of surrogate generation, which I also addressed during my PhD work by developing new dedicated algorithms and analyzing brain signals, allows significance levels to be estimated in situations where standard analytical algorithms are inapplicable. That the novel embedding approach can tell an original sequence apart from its surrogates shows its capability to distinguish signals beyond their spectral, or autocorrelation, similarities. One possible application of the new approach concerns another longstanding issue, namely that of distinguishing noise from chaos. To this purpose, complementary information is provided by analyzing the asymptotic (long-time) behaviour of the so-called time-dependent divergence exponent. This embedding-dependent metric is commonly used to estimate, by processing its short-time linearly growing region, the maximum Lyapunov exponent out of a scalar sequence. However, insights on the kind of source generating the sequence can be extracted from the usually overlooked asymptotic behaviour of the divergence exponent. Moreover, in the case of chaotic sources, this analysis also provides a precise estimate of the system's correlation dimension. Besides describing the results concerning the discrimination of chaotic systems from noise sources, I will also discuss the possibility of using the related correlation dimension estimates to improve the third stage of the method introduced above for the identification of suitable embedding parameters.
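A correlation integral of the kind used as the method's starting point can be sketched as follows (a plain Grassberger-Procaccia-style estimate on stand-in data; the gauge transformation and the statistical tests developed in the thesis are not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(vectors, radii):
    """C(r): fraction of pairs of embedding vectors closer than r."""
    d = pdist(vectors)                           # all pairwise distances
    return np.array([np.mean(d < r) for r in radii])

rng = np.random.default_rng(0)
vectors = rng.normal(size=(2000, 3))             # stand-in for embedded vectors
radii = np.logspace(-1, 0.5, 20)
C = correlation_integral(vectors, radii)
# The slope of log C(r) vs log r in the scaling region estimates the correlation
# dimension: close to 3 for this 3-D Gaussian cloud.
print(np.polyfit(np.log(radii[:10]), np.log(C[:10]), 1)[0])
```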
The discovery of chaos as a possible dynamical regime for nonlinear systems led to the search for chaotic behaviour in experimental recordings. In some fields, this search gave plenty of positive results: for example, chaotic dynamics was successfully identified and tamed in electronic circuits and laser-based optical setups. These two families of experimental chaotic systems eventually became versatile tools to study chaos and its possible applications. On the other hand, chaotic behaviour is also sought in climate science, biology, neuroscience, and even economics. In these fields, nonlinearity is widespread: many smaller units interact nonlinearly, yielding a collective motion that can be described by means of a few, nonlinearly coupled, effective degrees of freedom. The corresponding recorded signals exhibit, in many cases, an irregular and complex evolution. A possible underlying chaotic evolution (as opposed to a stochastic one) would be of interest both to reveal the presence of determinism and to predict the system's future states. While some claims concerning the existence of chaos in these fields have been made, most results are debated or inconclusive. Nonstationarity, low signal-to-noise ratio, external perturbations and poor reproducibility are just a few of the issues that hinder the search for chaos in natural systems. In the final part of this work, I will briefly discuss the problem of chasing chaos in experimental recordings by considering two example sequences, the first one generated by an electronic circuit and the second one corresponding to recordings of brain activity. The present thesis is organized as follows. The core concepts of time series analysis, including the key features of chaotic dynamics, are presented in Chapter 1, together with a brief review of the search for chaos in experimental systems and of the difficulties this quest faces in some research fields. Chapter 2 describes the embedding procedure and the issue of optimally choosing the related parameters; existing methods to carry out the embedding choice are reviewed and their limitations pointed out. In addition, two embedding-dependent nonlinear techniques that are ordinarily used to characterize chaos, namely the estimation of the correlation dimension by means of correlation integrals and the assessment of the maximum Lyapunov exponent, are presented. The new approach for the identification of suitable embedding parameters, which makes up the core topic of the present thesis work, is the subject of Chapters 3 and 4. While Chapter 3 contains the theoretical outline of the approach, as well as its implementation details, Chapter 4 discusses the application of the approach to benchmark synthetic and experimental sequences, thus illustrating its strengths and limitations. The study of the asymptotic behaviour of the time-dependent divergence exponent is presented in Chapter 5, where the alternative estimator of correlation dimension relying on this asymptotic metric is discussed as a possible improvement to the approach described in Chapters 3 and 4. The search for chaos in experimental data is discussed in Chapter 6 by means of two examples of real-world recordings. Concluding remarks are finally drawn in Chapter 7.
84

DISTANCE MATTERS: THREE ESSAYS IN SPATIAL ECONOMIC ANALYSIS

CALEGARI, ELENA 27 May 2016
Waldo Tobler, with his first law of geography, stated that "Everything is related to everything else, but near things are more related than distant things" (Tobler, 1970). While this was certainly true in 1970, the belief has been called into question in the era of Information and Communication Technologies (ICTs). In the debate over globalization, several scholars and journalists indeed argue that, with the increasing speed of telecommunications, physical distance is losing its explanatory power as a determinant of socio-economic relationships (Cairncross, 2001; Friedman, 2005). This dissertation aims to contribute to this debate by partially answering the broad question "Does distance still matter?" and by drawing possible policy implications. The purpose is to show the role of geographical distance in three different economic environments, characterized by different sizes of the unit of analysis. The results suggest that, even if at a global scale improvements in ICTs have changed the individual perception of distance as a deterrent to interaction, geographical space still maintains its relevance in defining local socio-economic relationships, strengthening the role of cities and regions as the core of most economic activities.
85

THE EMPLOYER'S SAFETY OBLIGATION, BETWEEN REGULATORY REQUIREMENTS AND COMPANY ORGANIZATION

CHAPELLU, DANIELE 15 April 2014
The dissertation addresses the important topic of occupational safety. The first two chapters revisit, with a substantial bibliography, classical themes of the subject: the significance of Article 2087 of the Civil Code, also within the contractual relationship, and the growing importance of the company safety system, from Decree n. 626/1994 to Decree n. 81/2008. The third and fourth chapters discuss the organizational aspects of workplace safety: after examining the roles of all the subjects involved in the fulfilment of the safety obligation (third chapter), the dissertation analyses the models of organization and management, a topic often neglected by labour law scholars. These models have acquired renewed importance after Law n. 123/2007 and Decree n. 81/2008, owing to the enforcement of the corporate criminal liability provided by Decree n. 231/2001. The final chapter takes an empirical approach, analysing first-hand data gathered through interviews concerning the challenges and concrete problems encountered by companies in implementing company safety systems.
86

Three essays on Labour Mobility and Unemployment

MUSSIDA, CHIARA 13 November 2009
Structured in three essays, this thesis focuses on unemployment and labour mobility in Italy, with a particular focus on Lombardy, Italy's largest region. An introductory chapter offers a picture of the main theoretical and empirical issues related to these complex phenomena. Its purpose is twofold: on the one hand, to offer an exhaustive picture of the theoretical and empirical developments on these themes; on the other, to introduce the empirical investigations of the subsequent essays as evolutions of those proposed in the literature, emphasizing their original contributions and underlying logic. The first essay investigates the determinants of unemployment duration and of the related competing risks (CRM hereafter) for Lombardy. The choice to concentrate the initial part of the dissertation on Lombardy is driven by two factors. First, there is interest in applying the relevant techniques to a regional context characterized by a certain degree of homogeneity of economic indicators. Further, Lombardy is one of the most important Italian regions (as confirmed by many economic indicators) and is quite homogeneous in terms of labour market indicators (only small differences between provinces, with the north-east showing the fewest unemployment problems). This makes it possible to verify the effectiveness of these investigations of the determinants of unemployment duration and the related CRM without dealing with the typical dualism between north and south, a structural feature of the Italian labour market, and to investigate in depth the characteristics of unemployment for a significant partition of Italy, representative both of the richest European regions and of other relevant Italian regions (such as Tuscany or Emilia-Romagna). The second essay extends the attention to Italy as a whole, employing unemployment duration and competing risks techniques to analyse overall Italian unemployment and its main exit routes, thereby providing an exhaustive picture of, and relevant insights into, the evolution of Italian unemployment duration. The techniques employed for the whole country necessarily differ from those used for Lombardy, and these differences also offer scope for interesting considerations. The third essay deals with the relevant issue of labour market mobility, a theme closely linked to unemployment, since it allows its causes to be understood and explored. We carry out two different kinds of analysis. At the macro level, we estimate the gross flows between the relevant labour market states of employment, unemployment and inactivity (a three-state representation of the labour market) to quantify overall labour market mobility. The second part, instead, offers microeconometric estimates of the determinants of such labour market transitions, investigating the causes of the mobility observed at the macro level.
87

Multinational companies and international bribery: empirical analysis, fighting tools and prevention programs

DELL'OSSO, VINCENZO 25 March 2013
Starting from an extensive empirical analysis of the active bribery of foreign public officials, which reveals the individual, organizational and environmental dynamics characterizing its commission, the thesis conducts a critical review of the criminal offence set forth in Art. 322-bis, par. 2, n. 2 of the Italian criminal code, and of the consequent liability of legal entities under Legislative Decree 231/2001, in order to test their effectiveness in terms of crime prevention. The legal analysis draws not only on the still limited Italian case law on the matter, but also on six hypothetical cases of international bribery perpetrated by multinational enterprises based in Italy, each modelled on a real case reconstructed from the enforcement of the U.S. Foreign Corrupt Practices Act. The study illustrates the need for a series of normative adjustments, or for an interpretative adaptation of the legislation currently in force, concerning criminal law (especially the identification of the protected interest, the concept of foreign public official, the scope of the trading-in-influence offence, and the definition of corporate officers' duties to prevent bribery by others, particularly within corporate groups), criminal procedure (specifically, the risk of international double jeopardy) and the liability regime of legal entities (above all, the successor's liability for international bribery committed within a transferred business, and thus pre-acquisition due diligence duties). The demand for substantial change also emerges in relation to the methodology and contents of multinational enterprises' anti-bribery compliance programs, and therefore in the criteria for their judicial assessment.
88

LINKING CGE AND MICROSIMULATION MODELS: METHODOLOGICAL AND APPLIED ISSUES

COLOMBO, GIULIA 07 April 2008
This thesis provides an assessment and a detailed description of how Computable General Equilibrium (CGE) and microsimulation models can be linked together, drawing on the most recent literature on the subject, with a special focus on developing countries. The main reason why these models are linked is that the modeller wants to account for full agent heterogeneity and the complexity of the income distribution while, at the same time, analysing the macroeconomic effects of policy reforms. In the last chapter, we build a CGE-microsimulation model for the economy of Nicaragua. The model is particularly suited to the policy reform we wish to simulate: the Free Trade Agreement between the Central American countries and the USA is mainly a macroeconomic reform, but one that can have important effects on the distribution of income and on poverty. With this model we study the possible changes in the distribution of income in Nicaragua deriving from the Free Trade Agreement with the USA. Our analysis finds only small changes both in the main macroeconomic variables and in the distribution of income and poverty indices.
89

NANOFOOD: THE EUROPEAN LEGAL FRAMEWORK FOR THE FUTURE OF FOOD

LEONE, LUCA 19 February 2014
In recent decades technoscientific innovation has pushed the boundaries of food towards the new frontier of nanofood, a term referring to the array of food products whose growing, production and packaging processes involve nanoscale (nanotechnology and nanoscience) knowledge and applications. This research focuses on the European (EU) regulatory framework in the field of agrofood nanotechnology, with reference to its ethical, legal and social aspects and to the relationship between the regulatory framework and the complexity and uncertainty intrinsic to scientific and technological knowledge. The theoretical perspective adopted is the co-production of the languages of science and law proposed by Science & Technology Studies (STS). The analysis considers the salient features of emerging applications of nanotechnologies in the agrofood sector, from risk management procedures (ranging from the more reductionist perspectives of the so-called "science of risk" to more complex forms of integrated risk assessment) to the forms regulation is taking as it intertwines with nanotechnological knowledge, and compares the EU legal framework on nanofood with the US regulatory approach. It also develops an interpretation of the normative evolution in the EU, seeking to understand the role of science in governing technological risks to nanofood safety and assessing how adequate the regulatory instruments are in achieving the goal of responsible research and innovation, as proposed within the process of rethinking European governance.
90

CRISIS, INSOLVENCY AND RESTRUCTURING. AN AMERICAN MODEL IN EUROPE: THE Z-SCORE. A NEW APPROACH AND POSSIBLE EVOLUTIONS

CERRI, ANDREA 31 March 2014
After one of the worst world economic and financial crises, insolvency prediction has become one of the most debated topics among scholars and researchers. In order to satisfy both professional investors' needs and internal evaluation processes, the thesis rediscovers Altman's original "Z-score" model, known for its simplicity. The model is still widely used in the US equity markets but, partly because of its origin, has rarely been applied to European companies. The thesis investigates and describes the operating characteristics of Altman's Z-score, evaluating its performance as a tool for insolvency prediction in today's European market. The capability of the base model is tested by examining 568 companies, listed in the main stock indexes of 7 European markets, between 2000 and 2010. The analysis reveals large variability in results across industrial sectors. The model proves simple and user-friendly, but substantially unable to predict the risk of default in Europe when used in its original form. The second part of the research therefore studies how the model's results can be evaluated from a new perspective for the European markets, focusing on individual industrial sectors: the Z-score is tested on a sample of healthy firms and one of insolvent firms, across 3 different industrial groups. The research also tries to evaluate qualitative elements alongside the quantitative ones, in order to give a harmonized and comprehensive estimation of insolvency risk.
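For reference, the original Altman (1968) formulation for listed manufacturing firms combines five accounting ratios into a single score; a minimal sketch follows (the sector-by-sector re-evaluation and the qualitative elements studied in the thesis are not reproduced):

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    """Original Altman (1968) Z-score for listed manufacturing companies."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating profitability
    x4 = market_equity / total_liabilities   # market leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Made-up balance-sheet figures; conventional zones are z > 2.99 "safe",
# 1.81-2.99 "grey" and z < 1.81 "distress".
print(altman_z(25, 120, 40, 300, 220, 450, 500))   # ~2.38, grey zone
```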
