81 |
Knowledge discovery for stochastic models of biological systems. Forlin, Michele, January 2010
Biology is the science of life and living organisms. Empowered by the deployment of several automated experimental frameworks, this discipline has seen tremendous growth over the last decades. Recently, the focus on studying biological systems holistically has led biology to converge with other disciplines. In particular, computer science is playing an increasingly important role in biology because of its ability to disentangle complex system-level issues. This increasing interplay between computer science and biology has led to great progress in both fields and to the opening of important new areas of research.
In this thesis we present methods and approaches to tackle the problem of knowledge discovery in computational biology from a stochastic perspective. Major bottlenecks in adopting a stochastic representation can be overcome with proper methodologies that integrate statistics and computer science. In particular, we focus on parameter inference for stochastic models and on efficient model analysis. We show the application of these approaches to real biological case studies, aiming to infer new knowledge even when a priori (and/or experimental) information is limited.
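As an illustration of the kind of stochastic representation such inference methods target, the sketch below simulates a minimal birth-death process with Gillespie's stochastic simulation algorithm; the reaction scheme and rate constants (k_prod, k_deg) are hypothetical placeholders, not models or data from the thesis.

```python
import numpy as np

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_max=100.0, seed=0):
    """Simulate a minimal birth-death process (e.g., mRNA production/degradation)
    with Gillespie's stochastic simulation algorithm."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        rates = np.array([k_prod, k_deg * x])   # propensities of the two reactions
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)        # waiting time to the next reaction
        # choose which reaction fires, proportionally to its propensity
        if rng.random() < rates[0] / total:
            x += 1                                # production event
        else:
            x -= 1                                # degradation event
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

t, x = gillespie_birth_death()
print(f"final copy number after {t[-1]:.1f} time units: {x[-1]}")
```

Parameter inference for such models typically compares summary statistics of simulated trajectories like these against experimental observations.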
|
82 |
Deep neural network models for image classification and regression. Malek, Salim, January 2018
Deep learning, a branch of machine learning, has been gaining ground in many research fields as well as in practical applications. This ongoing boom can be traced back mainly to the availability and affordability of powerful processing facilities, which were far less accessible just a decade ago. Although it has demonstrated cutting-edge performance in computer vision, particularly in object recognition and detection, deep learning has yet to find its way into other research areas. Furthermore, the performance of deep learning models depends strongly on how they are designed and tailored to the problem at hand, which raises not only accuracy concerns but also processing overheads. The success and applicability of a deep learning system rely jointly on both components. In this dissertation, we present innovative deep learning schemes applied to interesting though less-addressed topics. The first topic is rough scene description for visually impaired individuals, the idea being to list the objects that likely exist in an image grabbed by a visually impaired person. To this end, we extract several features from the query image in order to capture its textural as well as chromatic cues. To improve the representativeness of the extracted features, we reinforce them with a feature learning stage by means of an autoencoder model, which is topped with a logistic regression layer to detect the presence of objects, if any. In a second topic, we exploit the same model, i.e., the autoencoder, in the context of cloud removal in remote sensing images. Briefly, the model is learned on a cloud-free image of a certain geographical area and applied afterwards to another, cloud-contaminated image of the same area acquired at a different time. Two reconstruction strategies are proposed, namely pixel-based and patch-based reconstruction.
From these first two topics, we quantitatively demonstrate that autoencoders can play a pivotal role in both (i) feature learning and (ii) the reconstruction and mapping of sequential data.
The Convolutional Neural Network (CNN) is arguably the model most widely used by the computer vision community, which is reasonable given its remarkable performance in object and scene recognition compared with traditional hand-crafted features. Nevertheless, the CNN is naturally formulated in its two-dimensional version, which raises questions about its applicability to one-dimensional data. Thus, a third contribution of this thesis is devoted to the design of a unidimensional CNN architecture, which is applied to spectroscopic data. In other terms, the CNN is tailored for feature extraction from one-dimensional chemometric data, while the extracted features are fed into advanced regression methods to estimate the underlying chemical component concentrations. Experimental findings suggest that, similarly to 2D CNNs, unidimensional CNNs are also able to outperform traditional methods. The last contribution of this dissertation is a new method to estimate the connection weights of CNNs, based on training an SVM for each kernel of the CNN. This method has the advantage of being fast and well suited to applications characterized by small datasets.
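As a rough sketch of what a unidimensional CNN for chemometric regression can look like (not the architecture used in the thesis, whose extracted features are fed into separate regression methods), the following PyTorch model maps a one-channel spectrum to a single concentration estimate; all layer sizes and the simple linear regression head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectralCNN1D(nn.Module):
    """Minimal 1D CNN mapping a spectrum (1 channel, n_wavelengths points)
    to a single concentration estimate."""
    def __init__(self, n_wavelengths=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),           # fixed-length summary of the spectrum
        )
        self.regressor = nn.Linear(32 * 8, 1)  # predicted component concentration

    def forward(self, x):                      # x: (batch, 1, n_wavelengths)
        f = self.features(x).flatten(1)
        return self.regressor(f)

model = SpectralCNN1D()
dummy_batch = torch.randn(4, 1, 256)           # 4 synthetic spectra
print(model(dummy_batch).shape)                # torch.Size([4, 1])
```

In a chemometric setting, the `features` output could equally be detached and passed to an external regressor such as support vector regression, closer in spirit to the pipeline described above.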
|
83 |
When 1 in 200 is higher than 5 in 1000: the "1 in X effect" on the perceived probability of having a Down syndrome-affected child. Barilli, Elisa, January 2010
Among the numerical formats available to express probability, ratios (i.e., frequencies) are extensively employed in risk communication, due perhaps to an intuitive sense of their clarity and simplicity. The present thesis was designed to investigate how the use of superficially different but mathematically equivalent ratio formats affects the perceived magnitude of the probability that is conveyed. In particular, the research focused on the influence that those expressions, when employed in communicating prenatal screening test results, have on prospective parents' perceptions of the chance of having a Down syndrome-affected child. No clear evidence was found in the literature on whether the choice among the equivalent ratio formats that can be used to state a given probability matters for the subjective perception of that chance. Indeed, existing studies deliver contrasting results, and the theories elaborated on that basis point in diverging directions. These can be summarised, on the one hand, in the suggestion that people tend to neglect denominators in ratios (hence they judge 10 in 100 as larger than 1 in 10: the "ratio bias" or "denominator neglect") and, on the other hand, in the claim that people neglect numerators rather than denominators (hence they rate 1 in 10 as larger than 10 in 100: the "group-diffusion" or "reference group" effect). Nevertheless, the implications of either group of theories could not be transferred entirely to the specific issue under study, mainly because of problems of ecological validity (type of scenario and stimuli, experimental design). Hence, after the necessary adjustments to both the original experimental designs and materials, we empirically tested the applicability of those predictions to the specific case under examination. Subjective evaluations of equivalent ratios presented between subjects in a scenario paradigm were analysed by means of the magnitude assessments given by a total of 1673 participants on Likert scales. Overall, the results of a series of 12 main studies pointed to a new bias, which we dubbed the "1 in X effect" given the triangulation of its source to that specific ratio format. Indeed, the findings indicated that laypeople's subjective estimation of the same probability presented in a "1 in X" format (e.g., 1 in 200) and in an "N in X*N" format (e.g., 5 in 1000) varied significantly and in a consistent way. In particular, a given probability was systematically perceived as larger and more alarming when expressed in the first rather than the second format, an effect clearly inconsistent with the idea of denominator neglect. This effect was replicated across different populations and probability magnitudes. Practical implications of these findings for health communication are addressed in a dedicated section, all the more necessary considering that in one study on health-care professionals we found that they appeared de-sensitized to the "1 in X effect" (seemingly because of their daily use of probabilistic ratios). While the effect was not attenuated in laypeople by a classic communicative intervention (i.e., a verbal analogy), it disappeared with one of the most widely employed visual aids, namely an icon array. Furthermore, in a first attempt to pinpoint the cognitive processes responsible for the bias, the affective account stemming from the literature on dual-process theories did not receive support, contrary to our expectations.
Hence, the most likely origin of the bias seems to lie either, as suggested by some analyses, in a specific motivation to process the information, and/or in an increased ability to see oneself or others as the person affected when a "1 in X" format is processed. Clearly, further empirical research is needed to reach this cognitive level of explanation.
|
84 |
The influence of the population contact network on the dynamics of epidemics transmission. Ottaviano, Stefania, January 2016
In this thesis we analyze the relationship between epidemiology and network theory, starting from the observation that viral propagation between interacting agents is determined by intrinsic characteristics of the population contact network. We aim to investigate how a particular network structure impacts the long-term behavior of epidemics. This field is far too large to be discussed fully; we limit ourselves to networks that are partitioned into local communities, in order to incorporate realistic contact structures into the model. The gross structure of hierarchical networks of this kind can be described by a quotient graph. The rationale of this approach is that individuals infect those belonging to the same community with higher probability than individuals in other communities. We describe the epidemic process as a continuous-time individual-based susceptible-infected-susceptible (SIS) model using a first-order mean-field approximation, in both homogeneous and heterogeneous settings. For this mean-field model we show that the spectral radius of the smaller quotient graph, together with the infection and curing rates, determines the epidemic threshold and gives conditions for deciding whether the overall healthy state is a globally asymptotically stable or an unstable equilibrium. Moreover, we show that above the threshold another steady state exists, which can be computed using a lower-dimensional dynamical system associated with the evolution of the process on the quotient graph. Our investigations are based on the graph-theoretical notion of equitable partition and on its recent and rather flexible generalization, the almost equitable partition. We also consider the important issue of controlling the infectious disease. Taking into account the connectivity of the network, we provide a cost-optimal distribution of resources to prevent the disease from persisting indefinitely in the population; for a particular case of the two-level immunization problem we report on the construction of a polynomial-time algorithm. In the second part of the thesis we include stochasticity in the model, considering infection rates in the form of independent stochastic processes. This allows us to obtain stochastic differential equations for the probability of infection in each node. We report on the existence of the solution for all times. Moreover, we show that there exist two regions, given in terms of the coefficients of the model, one where the system goes to extinction almost surely, and the other where it is stochastically permanent.
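A minimal sketch of the first-order (N-intertwined) mean-field SIS dynamics on a toy two-community network is given below; here the threshold is written as one over the spectral radius of the full adjacency matrix rather than of the quotient graph used in the thesis, and the rates beta and delta are made-up values.

```python
import numpy as np

def sis_mean_field_step(p, A, beta, delta, dt=0.01):
    """One Euler step of the first-order mean-field SIS model:
    dp_i/dt = beta * (1 - p_i) * sum_j A_ij p_j - delta * p_i."""
    return p + dt * (beta * (1.0 - p) * (A @ p) - delta * p)

# Toy contact network: two triangle communities linked by a single bridge edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

beta, delta = 0.6, 1.0                       # hypothetical infection/curing rates
tau_c = 1.0 / np.max(np.linalg.eigvalsh(A))  # mean-field threshold: 1 / spectral radius
print(f"effective rate {beta/delta:.2f} vs threshold {tau_c:.2f}")

p = np.full(6, 0.1)                          # initial infection probabilities
for _ in range(5000):
    p = sis_mean_field_step(p, A, beta, delta)
print("steady-state infection probabilities:", p.round(3))
```

When the effective rate exceeds the threshold, the iteration settles on a non-trivial endemic steady state; below it, the infection probabilities decay toward the healthy state.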
|
85 |
Socially aware motion planning of assistive robots in crowded environments. Colombo, Alessio, January 2015
People with impaired physical or mental ability often find it challenging to negotiate crowded or unfamiliar environments, leading to a vicious cycle of deteriorating mobility and sociability. Crowded environments in particular pose a challenge to the comfort and safety of those people. To address this issue we present a novel two-level motion planning framework that can be embedded efficiently in portable devices. At the top level, the long-term planner deals with crowded areas, permanent or temporary anomalies in the environment (e.g., road blocks, wet floors), and hard and soft constraints (e.g., "keep a toilet within reach of 10 meters during the journey", "always avoid stairs"). A priority tailored to the user's needs can also be assigned to the constraints. At the bottom level, the short-term planner anticipates undesirable circumstances in real time by verifying simulation traces of local crowd dynamics against temporal logic formulae. The model takes into account the objectives of the user, pre-existing knowledge of the environment and real-time sensor data. The algorithm is thus able to suggest a course of action to achieve the user's changing goals, while minimising the probability of problems for the user and other people in the environment. An accurate model of human behaviour is crucial when planning the motion of a robotic platform in human environments. The Social Force Model (SFM) is such a model, with parameters that control both deterministic and stochastic elements. The short-term planner embeds the SFM in a control loop that determines higher-level objectives and reacts to environmental changes. Low-level predictive modelling is provided by the SFM fed by sensors; high-level logic is provided by statistical model checking. To parametrise and improve the short-term planner, we have conducted experiments examining typical human interactions in crowded environments. We have identified a number of behavioural patterns that can be explicitly incorporated into the SFM to enhance its predictive power. To validate our hierarchical motion planner we have run simulations and experiments with elderly people within the context of the DALi European project. The performance of our implementation demonstrates that our technology can be successfully embedded in a portable device or robot.
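The following sketch shows the deterministic core of a Social Force Model step: a goal-directed driving term plus exponential repulsion from nearby pedestrians. It omits the stochastic element mentioned above, and the parameter values (v0, tau, A, B) are illustrative assumptions, not those calibrated in the DALi experiments.

```python
import numpy as np

def social_force(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One pedestrian's acceleration under a simplified Social Force Model:
    relaxation toward the desired velocity plus exponential repulsion from others."""
    desired_dir = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    driving = (v0 * desired_dir - vel) / tau          # steer toward the desired velocity
    repulsion = np.zeros(2)
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        repulsion += A * np.exp(-dist / B) * (diff / dist)  # push away from neighbours
    return driving + repulsion

pos, vel = np.array([0.0, 0.0]), np.array([0.5, 0.0])
goal = np.array([10.0, 0.0])
others = [np.array([2.0, 0.2]), np.array([3.0, -0.5])]
dt = 0.1
for _ in range(5):                                    # a few explicit Euler steps
    acc = social_force(pos, vel, goal, others)
    vel = vel + dt * acc
    pos = pos + dt * vel
print("position after 0.5 s:", pos.round(2))
```

A short-term planner of the kind described above would roll out many such simulated trajectories and check them against temporal logic properties before suggesting an action.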
|
86 |
The influence of the inclusion of biological knowledge in statistical methods to integrate multi-omics data. Tini, Giulia, January 2018
Understanding the relationships among biomolecules and how these relationships change between healthy and disease states is an important question in modern biology and medicine. Advances in high-throughput techniques have led to an explosion of biological data available for analysis, allowing researchers to investigate multiple molecular layers (i.e., omics data) together. Classical statistical methods could not address the challenges of combining multiple data types, leading to the development of ad hoc methodologies, which however depend on several factors. Among those, it is important to consider whether "prior knowledge" on the inter-omics relationships is available for integration. To address this issue, we focused on different approaches to perform three-omics integration: supervised (prior knowledge is available), unsupervised and semi-supervised. With the supervised integration of DNA methylation, gene expression and protein levels from adipocytes, we observed coordinated significant changes across the three omics in the last phase of adipogenesis. However, in most cases the interactions between different molecular layers are complex and unknown: we explored unsupervised integration methods, showing that their results are influenced by method choice, pre-processing, the number of integrated data types and the experimental design. The strength of the inter-omics signal and the presence of noise also prove to be relevant factors. Since the inclusion of prior knowledge can highlight the former while decreasing the influence of the latter, we proposed a semi-supervised approach, showing that including knowledge about inter-omics interactions increases the accuracy of unsupervised methods when solving the problem of sample classification.
|
87 |
Novel data-driven analysis methods for real-time fMRI and simultaneous EEG-fMRI neuroimaging. Soldati, Nicola, January 2012
Real-time neuroscience can be described as the use of neuroimaging techniques to extract and evaluate brain activations during their ongoing development. The possibility of tracking these activations opens the door to new research modalities as well as practical applications in both clinical and everyday life. Moreover, the combination of different neuroimaging techniques, i.e. multimodality, may reduce several limitations present in each single technique. Due to the intrinsic difficulties of real-time experiments, advanced signal processing algorithms are needed in order to fully exploit their potential. In particular, since brain activations are free to evolve in an unpredictable way, data-driven algorithms have the potential to be more suitable than model-driven ones. For example, in neurofeedback experiments brain activation tends to change its properties due to training or task effects, thus evidencing the need for adaptive algorithms. Blind Source Separation (BSS) methods, and in particular Independent Component Analysis (ICA) algorithms, are naturally suited to such conditions. Nonetheless, their applicability in this framework needs further investigation. The goals of the present thesis are: i) to develop a working real-time set-up for performing experiments; ii) to investigate different state-of-the-art ICA algorithms with the aim of identifying the most suitable ones (along with their optimal parameters) to be adopted in a real-time MRI environment; iii) to investigate novel ICA-based methods for performing real-time MRI neuroimaging; iv) to investigate novel methods to perform data fusion between EEG and fMRI data acquired simultaneously. The core of this thesis is organized around four "experiments", each one addressing one of these specific aims. The main results can be summarized as follows. Experiment 1: a data analysis software was implemented, along with the hardware acquisition set-up, for performing real-time fMRI. The set-up was developed to provide a framework in which the novel methods proposed for real-time fMRI could be tested and run. Experiment 2: to select the most suitable ICA algorithm to be implemented in the system, we investigated theoretically and compared empirically the performance of 14 different ICA algorithms, systematically sampling different growing window lengths, model orders and a priori conditions (none, spatial or temporal). Performance was evaluated by computing the spatial and temporal correlation to a target component of brain activation, as well as the computation time. Four algorithms were identified as best performing without prior information (constrained ICA, fastICA, jade-opac and evd), with their corresponding parameter choices. Both spatial and temporal priors were found to almost double the similarity to the target, at no computational cost, for the constrained ICA method. Experiment 3: the results and suggested parameter choices from Experiment 2 were implemented to monitor ongoing activity in a sliding-window approach, investigating different ways in which ICA-derived a priori information could be used to monitor a target independent component: i) back-projection of constant spatial information derived from a functional localizer; ii) dynamic use of temporal, iii) spatial, or iv) both spatial and temporal ICA-constrained data.
The methods were evaluated based on spatial and/or temporal correlation with the monitored target IC component, computation time and the intrinsic stochastic variability of the algorithms. The results show that the back-projection method offers the highest performance both in terms of time-course reconstruction and speed. This method is very fast and effective as long as the monitored IC has a strong and well-defined behavior, since it relies on an accurate description of its spatial behavior. The dynamic methods offer comparable performance at the cost of higher computation time. In particular, the spatio-temporal method is comparable to back-projection in terms of computation time, while offering more variable performance in the reconstruction of spatial maps and time courses. Experiment 4: finally, a Higher Order Partial Least Squares based method combined with ICA is proposed and investigated to integrate EEG-fMRI data acquired simultaneously. This method showed promise, although more experiments are needed.
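A schematic sliding-window monitoring loop of the kind investigated in Experiments 2 and 3 can be sketched with scikit-learn's FastICA as follows; the window length, model order, target map and synthetic data are all placeholders, and the thesis' constrained-ICA and back-projection variants are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def monitor_target_ic(data, target_map, window=60, n_components=5, seed=0):
    """Sliding-window FastICA: at each step, decompose the latest `window`
    volumes and report how well the best component matches a target spatial map."""
    similarities = []
    for end in range(window, data.shape[0] + 1):
        ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
        ica.fit(data[end - window:end])              # rows: time points, cols: voxels
        maps = ica.components_                       # spatial maps (components x voxels)
        corr = np.abs(np.corrcoef(maps, target_map[None, :])[-1, :-1])
        similarities.append(corr.max())              # best match to the target map
    return np.array(similarities)

rng = np.random.default_rng(0)
fake_fmri = rng.standard_normal((120, 500))          # 120 volumes, 500 voxels (synthetic)
target = rng.standard_normal(500)                    # hypothetical localizer-derived map
print(monitor_target_ic(fake_fmri, target)[:5].round(2))
```

In a real-time setting the same loop would run as new volumes arrive, trading window length and model order against the per-window computation time discussed above.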
|
88 |
Economics of Privacy: Users’ Attitudes and Economic Impact of Information Privacy Protection. Frik, Alisa, January 2017
This doctoral thesis consists of three essays within the field of the economics of information privacy, examined through the lens of behavioral and experimental economics. The rapid development and expansion of Internet, mobile and network technologies in the last decades have provided multitudinous opportunities and benefits to both business and society, offering customized services and personalized offers at relatively low price and high speed. However, such innovations and progress have also created complex and hazardous issues. One of the main problems is related to the management of extensive flows of information containing terabytes of personal data. The collection, storage, analysis, and sharing of this information imply risks and trigger users' concerns that range from nearly harmless to significantly pernicious, including tracking of online behavior and location, intrusive or unsolicited marketing, price discrimination, surveillance, hacking attacks, fraud, and identity theft. Some users ignore these issues or at least do not take action to protect their online privacy. Others try to limit their activity on the Internet, which in turn may inhibit the acceptance of online shopping. Yet another group of users adopts personal information protection, for example by deploying privacy-enhancing technologies such as ad-blockers and e-mail encryption. Ad-blockers can reduce the revenue of online publishers, which provide content to their users for free and do not receive income from advertisers when the user has blocked ads. The economics of privacy studies the trade-offs related to the positive and negative economic consequences of personal information use by data subjects and its protection by data holders, and aims at balancing the interests of both parties by optimising the expected utilities of the various stakeholders. As technology penetrates every aspect of human life, raising numerous privacy issues and affecting a large number of interested parties, including business, policy-makers, and legislative regulators, the outcome of this research is expected to have a great impact on individual economic markets, consumers, and society as a whole. The first essay provides an extensive literature review and combines the theoretical and empirical evidence on the impact of advertising in both traditional and digital media, in order to gain insights into the effects of ad-blocking privacy-enhancing technologies on consumers' welfare. It first studies the views of the main schools of advertising, informative and persuasive. The informative school of advertising emphasizes the positive effects of advertising on sales, competition, product quality, and consumers' utility and satisfaction, by matching buyers to sellers, informing potential customers about available goods and enhancing their informed purchasing decisions. In contrast, the advocates of the persuasive school view advertising as a generator of irrational brand loyalty that distorts consumers' preferences, inflates product prices, and creates entry barriers. I pay special attention to targeted advertising, which is typically assumed to have a positive impact on consumers' welfare as long as it does not cause a decrease in product quality and does not involve the extraction of consumers' surplus through the exploitation of reservation prices for discriminating activities.
Moreover, the utility of personalized advertising appears to be a function of its accuracy: the more relevant a targeted offer is, the more valuable it is for the customer. I then review the effects of online advertising on the main stakeholders and users, and show that the low cost of online advertising leads to excessive advertising volumes, causing information overload, psychological discomfort and reactance, privacy concerns, decreased exploration activities and opinion diversity, and market inefficiency. Finally, as ad-blocking technologies filter advertising content and limit advertising exposure, I analyze the consequences of ad-blocking deployment through the lens of models of advertising restrictions. The control of advertising volume and its partial restriction would benefit both consumers and businesses more than a complete ban on advertising. For example, advertising exposure caps, which limit the number of times the same ad is shown to a particular user, a general reduction of advertising slots, control of advertising quality standards, and limitation of tracking would result in a better market equilibrium than an arms race between ad-blockers and anti-ad-blockers can offer. Finally, I review solutions alternative to the blocking of advertising content, which include self-regulation, non-intrusive ads programs, paywalls, the intention economy approach that promotes business models in which the user, not the marketer, initiates the trade, and active social movements aimed at increasing social awareness and consumer education. The second essay describes a model of factors affecting Internet users' perceptions of websites' trustworthiness with respect to their privacy, and of the intentions to purchase from such websites. Using a focus group method I calibrate a list of websites' attributes that represent those factors. I then run an online survey with 117 adult participants to validate the research model. I find that privacy (including awareness, information collection and control practices), security, and reputation (including background and feedback) have a strong effect on trust and willingness to buy, while website quality plays a marginal role. Although trustworthiness perceptions and purchase intentions are generally positively correlated, in some cases participants are likely to purchase from websites that they have judged as untrustworthy. I discuss how behavioral biases and decision-making heuristics may explain this discrepancy between perceptions and behavioral intentions. Finally, I analyze and suggest which factors, particular website attributes, and individual characteristics have the strongest effect on hindering or advancing customers' trust and willingness to buy. In the third essay I investigate the decision of experimental subjects to incur the risk of revealing personal information to other participants. I do so by using a novel method to generate personal information that reliably induces privacy concerns in the laboratory. I show that individual decisions to incur privacy risk are correlated with decisions to incur monetary risk. I find that partially depriving subjects of control over the revelation of their personal information does not lead them to lose interest in protecting it. I also find that making subjects think of privacy decisions after financial decisions reduces their aversion to privacy risk.
Finally, surveyed attitudes to privacy and explicit willingness to pay or to accept payments for personal information correlate with willingness to incur privacy risk. Having shown that privacy loss can be assimilated to a monetary loss, I compare decisions to incur risk in privacy lotteries with risk attitudes in monetary lotteries to derive estimates of the implicit monetary value of privacy. The average implicit monetary value of privacy is about equal to the average willingness to pay to protect private information, but the two measures do not correlate at the individual level. I conclude by underlining the need to know individual attitudes to risk in order to properly evaluate individual attitudes to privacy as such.
|
89 |
Theoretical and Algorithmic Solutions for Null Models in Network Theory. Gobbi, Andrea, January 2013
The graph-theoretical formulation of the data-driven structure and dynamics of complex systems is rapidly imposing itself as the paramount paradigm [1] across a variety of disciplines, from economics to neuroscience, with the biological -omics as a major example. In this framework, the concept of a null model, borrowed from the statistical sciences, identifies the elective strategy for obtaining a baseline point of comparison for modelling [2]. Hereafter, a null model is a graph which matches one specific graph in terms of some structural features, but which is otherwise taken to be generated as an instance of a random network. In this view, the network model introduced by Erdos & Renyi [3], where random edges are generated as independent and identically distributed Bernoulli trials, can be considered the simplest possible null model. In the following years, other null models have been developed in the framework of graph theory, with the detection of community structure as one of the most important targets [4]. In particular, the model described in [5] introduces the concept of a randomized version of the original graph: edges are rewired at random, with each expected vertex degree matching the degree of the vertex in the original graph. Although aimed at building a reference for community detection, this approach plays a key role in one of the models considered in this thesis. Note that, although it was the first problem to be considered, designing null models for community structure detection is still an open problem [6, 7]. Real-world applications of null models in graph theory have also gained popularity in many different scientific areas, with ecology as the first example: see [8] for a comprehensive overview. More recently, interest in network null models arose also in computational biology [9, 10], geosciences [11] and economics [12, 13], just to name a few.
In the present work the theoretical design and the practical implementation of a series of algorithms for the construction of null models are introduced, with applications ranging from functional genomics to game theory for social studies. The four chapters devoted to the presentation of the examples of null models are preceded by an introductory chapter including a quick overview of graph theory, together with all the required notation.
The first null model is the topic of the second chapter, where a suite of novel algorithms is presented, aimed at the efficient generation of complex networks under different constraints on the node degrees. Although not the most important example in the thesis, the prominent position dedicated to this topic is due to its close relationship with the aforementioned classical null models for random graph construction. Together with the definition of the algorithms and examples, a thorough theoretical analysis of the proposed solutions is given, highlighting the improvements with respect to the state of the art and the remaining limitations. Apart from its intrinsic mathematical value, the interest in these algorithms from the systems biology community lies in the need for benchmark graphs resembling real biological networks. These are in fact of utmost importance when testing novel inference methods, and as testbeds for network reconstruction challenges such as the DREAM series [14, 15, 16].
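In the same spirit (though not the thesis' own algorithms, which generate networks directly under degree constraints), a standard degree-preserving null model can be sketched with networkx double edge swaps; the Barabasi-Albert graph below is only a stand-in for a real biological network.

```python
import networkx as nx

def degree_preserving_null(G, n_swaps_per_edge=10, seed=0):
    """Build a null-model graph with the same degree sequence as G by applying
    random double edge swaps to a copy of the original network."""
    H = G.copy()
    n_swaps = n_swaps_per_edge * H.number_of_edges()
    nx.double_edge_swap(H, nswap=n_swaps, max_tries=100 * n_swaps, seed=seed)
    return H

G = nx.barabasi_albert_graph(200, 3, seed=42)         # stand-in for a biological network
H = degree_preserving_null(G)
# degrees are preserved, higher-order structure (e.g., clustering) is randomized
assert sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree())
print("clustering, original vs null:",
      round(nx.average_clustering(G), 3), round(nx.average_clustering(H), 3))
```

Benchmarks of this kind let one ask whether a feature of the observed network (here, clustering) exceeds what the degree sequence alone would produce.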
Chapter three includes the most complex application of null models presented in this thesis. The scientific field is again functional genomics, namely the combinatorial approach to modelling patterns of mutations in cancer as detected by Next Generation Sequencing exome data. This problem has a natural mathematical representation in terms of the rewiring of bipartite networks and mutually exclusively mutated modules [17, 18], to which Markov chain updates (switching steps) are applied through a Switching Algorithm (SA). Here we show some crucial improvements to the SA, we analytically derive an approximate lower bound for the number of steps required, we introduce BiRewire, an R package implementing the improved SA, and we demonstrate the effectiveness of the novel solution on a breast cancer dataset.
A novel threshold-selection method for the construction of co-expression networks based on the Pearson coefficient is the third and last biological example of a null model, and it is outlined in Chapter four. Gene co-expression networks inferred by correlation from high-throughput profiling such as microarray data represent a simple but effective technique for discovering and interpreting linear gene relationships. In recent years several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard-thresholding solution based on the assumption that a co-expression network inferred from randomly generated data is expected to be empty. The theoretical derivation of the new bound by geometrical methods is shown, together with two applications in oncogenomics.
The last two chapters of the thesis are devoted to the presentation of null models in non-biological contexts. In Chapter 5 a novel dynamic simulation model is introduced, mimicking a random market in which sellers and buyers follow different price distributions and matching functions. The random market is mathematically formulated as a dynamic bipartite graph, and the analytical formula for the evolution over time of the mean exchange price is derived, together with the global likelihood function for retrieving the initial parameters under different assumptions. Finally, in Chapter 6 we describe how graph tools can be used to model abstraction and strategy (see [19, 20, 21]) for a class of games, in particular the TTT solitaire. We show that in this solitaire it is not possible to build an optimal strategy (in the sense of the minimum number of moves) by dividing the big problem into smaller subproblems. Nevertheless, we find some subproblems and strategies for solving the TTT solitaire with a negligible increment in the number of moves. Although quite simple and far from simulating highly complex real-world situations of decision making, the TTT solitaire is an important tool for starting the exploration of the social analysis of the trajectories of the implementation of winning strategies through different learning procedures [22].
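Returning to the switching algorithm of Chapter three, its basic building block, a checkerboard swap on a binary genes-by-samples matrix that preserves all row and column sums, can be sketched as follows; BiRewire's optimized implementation and the derived lower bound on the number of steps are not reproduced, and the synthetic mutation matrix is a placeholder.

```python
import numpy as np

def switching_algorithm(M, n_steps=None, seed=0):
    """Randomize a binary mutation matrix (genes x samples) with checkerboard
    swaps that preserve every row and column sum (bipartite edge switching)."""
    rng = np.random.default_rng(seed)
    M = M.copy()
    rows, cols = M.shape
    if n_steps is None:
        n_steps = 10 * int(M.sum())          # heuristic: ~10 switches per edge
    for _ in range(n_steps):
        r1, r2 = rng.integers(rows, size=2)
        c1, c2 = rng.integers(cols, size=2)
        # a valid switching step needs a 2x2 "checkerboard" submatrix
        if M[r1, c1] == 1 and M[r2, c2] == 1 and M[r1, c2] == 0 and M[r2, c1] == 0:
            M[r1, c1] = M[r2, c2] = 0
            M[r1, c2] = M[r2, c1] = 1
    return M

rng = np.random.default_rng(1)
mutations = (rng.random((30, 50)) < 0.1).astype(int)   # synthetic gene x sample matrix
null = switching_algorithm(mutations)
assert (mutations.sum(0) == null.sum(0)).all() and (mutations.sum(1) == null.sum(1)).all()
print("total mutations preserved:", mutations.sum(), null.sum())
```

Comparing mutual-exclusivity statistics on the observed matrix against many such randomized matrices yields the empirical p-values used in this kind of analysis.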
|
90 |
Measuring Productivity and Technological Progress: Development of a Constructive Method Based on Classical Economics and Input-Output Tables. Degasperi, Matteo, January 2010
The present work is organized in five chapters and it proposes and applies alternative measures of productivity constructed using input-output tables and based mainly on the Sraffian scheme.
The first three chapters of the thesis are devoted to the development and the empirical application of new productivity measures. These chapters form the main part of the work. The last two chapters are devoted to sensitivity analysis.
In the first chapter, entitled 'Productivity accounting based on production prices', an alternative method of productivity accounting is proposed. Using input-output tables from four major OECD countries between 1970 and 2000, we compute the associated wage-profit frontiers and net national product curves, and from these we derive two measures of productivity growth based on production prices and a chosen numeraire. The findings support the general conclusions in the existing literature on the productivity slowdown and later rebound, and supply important new insights into the extent and timing of these events.
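As a sketch of the underlying computation, the wage-profit frontier implied by Sraffian production prices p = (1+r) A'p + w*l with numeraire bundle d (so that d.p = 1) can be evaluated as w(r) = 1 / (d'(I - (1+r)A')^{-1} l); the two-sector coefficients below are made up for illustration and are not the OECD data used in the chapter.

```python
import numpy as np

def wage_profit_frontier(A, labor, numeraire, r_values):
    """Wage rate w(r) implied by production prices p = (1+r) A^T p + w*l,
    normalised with the numeraire bundle d so that d @ p = 1."""
    n = A.shape[0]
    wages = []
    for r in r_values:
        M = np.linalg.inv(np.eye(n) - (1.0 + r) * A.T)   # Leontief-type inverse
        wages.append(1.0 / (numeraire @ (M @ labor)))    # w(r) = 1 / d'(I-(1+r)A')^-1 l
    return np.array(wages)

# Tiny two-sector illustration (coefficients are hypothetical, not from the thesis data).
A = np.array([[0.2, 0.3],
              [0.1, 0.2]])       # A[i, j]: input of good i per unit of output of sector j
labor = np.array([0.5, 0.4])     # direct labour per unit of output
numeraire = np.array([1.0, 1.0]) # numeraire bundle: one unit of each good
r_grid = np.linspace(0.0, 0.5, 6)
for r, w in zip(r_grid, wage_profit_frontier(A, labor, numeraire, r_grid)):
    print(f"r = {r:.2f}  ->  w = {w:.3f}")
```

Comparing such frontiers for the same country in different years, under a fixed numeraire, is the kind of operation from which the productivity measures of this chapter are built.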
The second chapter is entitled 'New measures of sectoral productivity'. The objective of this chapter is to propose alternative methods of sectoral productivity accounting based on the theoretical work of Goodwin (1976), Gossling (1972), Pasinetti (1973), and Sraffa (1960). The indexes developed in this study differ from standard productivity indexes because they are designed around some of the following desirable features: they take into account the interconnections among economic sectors, aggregate heterogeneous goods by using production prices, and compute productivity by using quantities of goods instead of their values. These indexes are then tested empirically by computing the productivity of four major OECD countries.
The third chapter is entitled 'Productivity in the Italian regions: development of alternative indicators based on input-output tables'. This chapter calculates indices of aggregate productivity, sectoral productivity, and technological progress for a selected sample of Italian regions. Besides these indices, two different versions of the so-called technological frontier are calculated: the contemporary frontiers, constructed from all the production techniques extracted from the regional input-output tables in a given year, and the intertemporal frontier, computed from the full set of techniques available over time and across regions. The availability of the technological frontiers allows the calculation of the recently developed Velupillai-Fredholm-Zambelli indices of convergence (Fredholm and Zambelli, 2009), which are based on the distance between the region-specific wage-profit frontiers and the technological frontiers. Given the important role played by production prices, this chapter also examines the price curves for each region and industry and identifies remarkable regularities.
Not surprisingly, analysis of the findings reveals a productivity gap between the regions of the North and the South. However, the analysis of sectoral productivity reveals two important facts. The first is that the techniques of some industries are more productive in the South than in the North. The second, which follows from the first, is that all regions could therefore improve productivity through greater integration.
Chapter four is entitled 'An Inquiry into the choice of Numeraire'. This chapter has several objectives. The main aim is to examine the robustness of the results obtained by applying the new approach to measuring productivity when the chosen numeraire is changed. It should be mentioned, however, that the problem of the choice of numeraire is a general one, and for this reason the chapter also proposes universal guidelines to be followed in choosing the numeraire and in testing the robustness of the results to changes in the numeraire.
Finally, chapter five is entitled 'An Inquiry into the effect of aggregation of input-output tables'. The aim of this chapter is to test the robustness of the results under a progressive aggregation of the input-output tables.
|