281

Elförbrukningen i svenska hushåll : En analys inom projektet ”Förbättrad energistatistik i bebyggelsen” för Energimyndigheten / Electricity consumption in Swedish households : An analysis in the project “Improved energy statistics for settlements” for the Swedish Energy Agency

Nilsson, Josefine, Xie, Jing January 2012
The Swedish Energy Agency ran a project called "Improved energy statistics for settlements" ("Förbättrad energistatistik i bebyggelsen") to gain better knowledge of energy use in buildings. This report focuses on one of its subprojects, the measurement of household electricity use at the appliance level. Various regression models are used to examine the relationship between electricity use and explanatory variables such as household background variables, type of dwelling, geographical location, the electricity consumption of different appliances, and the number of appliances. The data material comprises 389 households, most of them spread around the Mälardalen region, with a few measurements from households in Kiruna and Malmö. The conclusion of the thesis is that a household's background variables, house type, geographical location, and the number and type of its electrical appliances all contribute to its electricity consumption. / Förbättrad energistatistik i bebyggelsen
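A hedged illustration of the kind of analysis this abstract describes: the minimal sketch below fits an ordinary least squares regression of household electricity use on a few explanatory variables. All variable names, coefficients and data are hypothetical placeholders; only the sample size of 389 households is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 389  # number of households measured in the study

# Hypothetical explanatory variables (the report's actual covariates differ):
floor_area = rng.normal(120, 30, n)      # dwelling size, m^2
occupants = rng.integers(1, 6, n)        # persons per household
appliances = rng.integers(5, 40, n)      # number of electrical devices
detached = rng.integers(0, 2, n)         # 1 = detached house, 0 = apartment

# Simulated annual electricity use (kWh): additive effects plus noise
y = (1500 + 8 * floor_area + 600 * occupants
     + 45 * appliances + 900 * detached + rng.normal(0, 500, n))

# Ordinary least squares fit (design matrix with intercept column)
X = np.column_stack([np.ones(n), floor_area, occupants, appliances, detached])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept", "floor_area", "occupants",
                    "appliances", "detached"], beta):
    print(f"{name:>10}: {b:8.1f}")
```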
282

Risken för kolorektal cancer i förhållande till kostmönster, fysisk aktivitet och BMI i sydöstra Sverige : Analys av data från en fall-kontrollstudie / The risk of colorectal cancer in relation to dietary patterns, physical activity and BMI in southeastern Sweden

Wilzén, Josef, Lee, Emma January 2011
Background: Previous studies have identified several risk factors for colorectal cancer, such as diet, physical activity and BMI. Analyzing diet in terms of dietary patterns, rather than individual food items, has proven effective for investigating colorectal cancer risk. Data were collected in a case-control study with 257 cases and 805 controls. Aim: To identify factors that increase or decrease the risk of colorectal cancer with respect to diet, physical activity and BMI. Methods: Factor analysis was used to identify dietary patterns. Logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals. Results: The factor analysis yielded ten dietary patterns. Three of them showed an increased risk: "Soft drinks, juice and milk products" (OR=1.288; OR_Q4=2.159), "Tea, but not coffee" (OR=1.228; OR_Q3=1.891; OR_Q4=1.668) and "Poultry, red meat and fish" (OR_Q4=1.724). In contrast, the pattern "Food based on grain and cheese" (OR_Q2=0.546; OR_Q4=0.592) showed a decreased risk. BMI ten years ago (OR=1.079; OR_overweight=1.491; OR_obese=2.260) was identified as a risk factor. Working in sedentary (OR=0.975; OR_>15 years=0.517) or moderately active (OR=0.977; OR_6-10 years=0.497; OR_>15 years=0.565) occupations was associated with a decreased risk. Conclusions: Several dietary patterns, as well as BMI ten years ago, were identified as risk factors. The pattern "Food based on grain and cheese" and work in physically light to moderately heavy occupations proved to be protective factors.
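A hedged sketch of the quartile-based odds-ratio computation the abstract describes: logistic regression on quartile indicators of a dietary-pattern score, with Q1 as the reference category. The data are simulated and the effect size is an assumption; only the sample sizes (257 cases, 805 controls) are taken from the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 257 + 805  # cases + controls, sizes taken from the abstract

# Hypothetical dietary-pattern score (e.g. a factor score) and case status
score = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-1.2 + 0.25 * score)))  # assumed true effect
case = rng.binomial(1, p)

# Quartile indicators with Q1 as the reference category
q = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))  # 0..3
X = sm.add_constant(np.column_stack([(q == k).astype(float) for k in (1, 2, 3)]))

res = sm.Logit(case, X).fit(disp=0)
or_est = np.exp(res.params)      # odds ratios per quartile vs. Q1
ci = np.exp(res.conf_int())      # 95% confidence intervals
for k, name in enumerate(["baseline", "Q2", "Q3", "Q4"]):
    print(f"{name}: OR={or_est[k]:.3f} (95% CI {ci[k, 0]:.3f}-{ci[k, 1]:.3f})")
```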
283

Optimal Control Problems In Communication Networks With Information Delays And Quality Of Service Constraints

Kuri, Joy 02 1900
In this thesis, we consider optimal control problems arising in high-speed integrated communication networks with Quality of Service (QoS) constraints. Integrated networks are expected to carry a large variety of traffic sources with widely varying traffic characteristics and performance requirements. Broadly, the traffic sources fall into two categories: (a) real-time sources with specified performance criteria, like small end-to-end delay and loss probability (sources of this type are referred to as Type 1 sources below), and (b) sources that do not have stringent performance criteria and do not demand performance guarantees from the network, the so-called Best Effort sources (referred to as Type 2 sources below). From the network's point of view, Type 2 sources are much more "controllable" than Type 1 sources, in the sense that Type 2 sources can be dynamically slowed down, stopped or sped up depending on traffic congestion in the network, while for Type 1 sources the only control action available in case of congestion is packet dropping. Carrying sources of both types in the same network concurrently while meeting the performance objectives of Type 1 sources is a challenge and raises the question of equitable sharing of resources. The objective is to carry as much Type 2 traffic as possible without sacrificing the performance requirements of Type 1 traffic. We consider simple models that capture this situation. Consider a network node through which two connections pass, one each of Types 1 and 2. One would like to maximize the throughput of the Type 2 connection while ensuring that the Type 1 connection's performance objectives are met. This can be set up as a constrained optimization problem that, however, is very hard to solve. We introduce a parameter b that represents the "cost" of buffer occupancy by Type 2 traffic. Since buffer space is limited and shared, a queued Type 2 packet means that a buffer position is not available for storing a Type 1 packet; to discourage the Type 2 connection from hogging the buffer, the cost parameter b is introduced, while a reward for each Type 2 packet coming into the buffer encourages the Type 2 connection to transmit at a high rate. Using standard on-off models for the Type 1 sources, we show how values can be assigned to the parameter b; the value depends on the characteristics of the Type 1 connection passing through the node, i.e., whether it is a Variable Bit Rate (VBR) video connection or a Constant Bit Rate (CBR) connection, etc. Our approach gives concrete networking significance to the parameter b, which has long been treated as an abstract parameter in reward-penalty formulations of flow control problems (for example, [Stidham '85]). Having seen how to assign values to b, we focus next on the Type 2 connection. Since Type 2 connections do not have strict performance requirements, it is possible to defer transmitting a Type 2 packet if conditions downstream so warrant. This leads to the question: what is the "best" transmission policy for Type 2 packets? Decisions to transmit or not must be based on congestion conditions downstream; however, the network state available at any instant is old information, since feedback latency is an inherent feature of high-speed networks. Thus the problem is to identify the best transmission policy under delayed feedback information. We study this problem in the framework of Markov decision theory.
With appropriate assumptions on the arrivals, service times and scheduling discipline at a network node, we formulate our problem as a Partially Observable Controlled Markov Chain (PO-CMC). We then give an equivalent formulation of the problem in terms of a Completely Observable Controlled Markov Chain (CO-CMC) that is easier to deal with. Using dynamic programming and value iteration, we identify structural properties of an optimal transmission policy when the delay in obtaining feedback information is one time slot. For both discounted and average cost criteria, we show that the optimal policy has a two-threshold structure, with the threshold on the observed queue length depending on whether a Type 2 packet was transmitted in the last slot or not. For an observation delay k ≥ 2, the value iteration technique does not yield results. We use the structure of the problem to provide computable upper and lower bounds on the optimal value function. A study of these bounds yields information about the structure of the optimal policy for this problem. We show that for appropriate values of the problem parameters, depending on the number of transmissions in the last k steps, there is an "upper cut-off" number such that if the observed queue length is greater than or equal to this number, the optimal action is to not transmit. Since the number of transmissions in the last k steps is between 0 and k, both inclusive, we have a stack of (k + 1) upper cut-off values. We conjecture that these (k + 1) values are thresholds and that the optimal policy for this problem has a (k + 1)-threshold structure. So far it has been assumed that the parameters of the problem are known at the transmission control point. In reality, they are usually not known and change over time. Thus, one needs an adaptive transmission policy that keeps track of and adjusts to changing network conditions. We show that the information structure in our problem admits a simple adaptive policy that performs reasonably well in a quasi-static traffic environment. Up to this point, the models we have studied correspond to a single hop in a virtual connection. We consider the multiple-hop problem next. A basic matter of interest here is whether one should have end-to-end or hop-by-hop controls. We develop a sample path approach to answer this question. It turns out that, depending on the relative values of the b parameter at the transmitting node and its downstream neighbour, sometimes end-to-end controls are preferable while at other times hop-by-hop controls are preferable. Finally, we consider a routing problem in a high-speed network where feedback information is, as usual, delayed. As before, we formulate the problem in the framework of Markov decision theory and apply value iteration to deduce structural properties of an optimal control policy. We show that for both discounted and average cost criteria, the optimal policy for an observation delay of one slot is Join the Shortest Expected Queue (JSEQ), a natural and intuitively satisfying extension of the well-known Join the Shortest Queue (JSQ) policy that is optimal when there is no feedback delay (see, for example, [Weber '78]). However, for an observation delay of more than one slot, we show that the JSEQ policy is not optimal. Determining the structure of the optimal policy for a delay k ≥ 2 appears to be very difficult using the value iteration approach; we explore some likely policies by simulation.
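A hedged sketch of the value-iteration machinery the abstract relies on: discounted value iteration for a heavily simplified, undelayed version of the single-node admission problem (the delayed-feedback case in the thesis enlarges the state to the observed queue length plus recent actions). All numerical parameters below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

B = 10          # buffer size
lam1 = 0.3      # P(a Type 1 packet arrives in a slot)
mu = 0.6        # P(one packet is served in a slot)
b = 0.4         # cost per queued packet (buffer-occupancy price)
r = 1.0         # reward for admitting a Type 2 packet
gamma = 0.95    # discount factor

V = np.zeros(B + 1)
for _ in range(2000):
    Q = np.zeros((B + 1, 2))            # actions: 0 = hold, 1 = transmit
    for q in range(B + 1):
        for a in (0, 1):
            reward = r * a - b * q
            ev = 0.0                     # expected next-state value
            for arr1, p1 in ((0, 1 - lam1), (1, lam1)):
                for dep, pd in ((0, 1 - mu), (1, mu)):
                    q2 = min(B, max(0, q + arr1 + a - dep))
                    ev += p1 * pd * V[q2]
            Q[q, a] = reward + gamma * ev
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal action per queue length:", policy)
# With a linear holding cost and an admission reward, the computed policy
# typically has a threshold form: transmit while the queue is short, hold
# once it exceeds some level.
```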
284

Statistical potentials for evolutionary studies

Kleinman, Claudia L. 06 1900
Protein sequences are the net result of the interplay of mutation, natural selection and stochastic variation. Probabilistic models of molecular evolution accounting for these processes have been substantially improved over recent years. In particular, models that explicitly incorporate protein structure and site interdependencies have been developed, as well as statistical tools for assessing their performance. Despite major advances in this direction, only simple representations of protein structure have been used so far. In this context, the main theme of this dissertation is the modeling of three-dimensional protein structure for evolutionary studies, taking into account the limitations imposed by computationally demanding phylogenetic methods. First, a general statistical framework for optimizing the parameters of a statistical potential (an energy-like scoring system for sequence-structure compatibility) is presented. The functional form of the potential is then refined, increasing the detail of the structural description without inflating computational costs. Several residue-level structural elements are investigated: pairwise distance interactions, solvent accessibility, backbone conformation and residue flexibility. The potentials are then included in an evolutionary model and their performance is assessed in terms of model fit, compared to standard evolutionary models.
Finally, this new structurally constrained phylogenetic model is used to better understand the selective forces behind the differences in conservation found in genes of very different expression levels.
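A hedged sketch of the classical "inverse Boltzmann" construction of a pairwise statistical potential, the kind of sequence-structure pseudo-energy the abstract refers to. The thesis instead optimizes the potential's parameters inside a phylogenetic model, which is not reproduced here; the contact counts below are random placeholders standing in for a real structure database.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
rng = np.random.default_rng(2)

# Hypothetical residue-residue contact counts pooled from a structure
# database (placeholder data, symmetrized).
counts = rng.integers(1, 500, size=(20, 20))
counts = counts + counts.T

f_obs = counts / counts.sum()          # observed contact frequencies
marg = f_obs.sum(axis=1)
f_exp = np.outer(marg, marg)           # independence reference state

potential = -np.log(f_obs / f_exp)     # pseudo-energy, in kT-like units
i, j = AA.index("L"), AA.index("V")
print(f"E(L,V) = {potential[i, j]:+.3f}")  # negative = favorable contact
```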
285

Research, design and development of SW tools for process management in the area of e-health : projection of future number of end-stage renal disease patients in Greece / Έρευνα, σχεδιασμός και ανάπτυξη εργαλείων λογισμικού για τη διαχείριση διαδικασιών στον τομέα της ηλεκτρονικής υγείας : πρόβλεψη μελλοντικού αριθμού ασθενών με τελικού σταδίου χρόνια νεφρική ανεπάρκεια στην Ελλάδα

Ροδινά-Θεοχαράκη, Αναστασία 03 July 2013
End Stage Renal Disease (ESRD) is the irreversible loss of kidney function, which can be due to various causes. Its treatment is one of the most costly chronic disease treatments. Approximately one million people worldwide are now living with ESRD, and this number is predicted to increase. The main reasons for the increasing incidence of ESRD worldwide are population ageing, the rapid increase of diabetes mellitus, which is reaching epidemic proportions, and changes in the age limits for treatment initiation. In Greece, during the period 2005-2009, 74% of ESRD patients were on hemodialysis (HD), 7% on peritoneal dialysis (PD) and 19% were living with a functioning graft. The latter percentage puts Greece in 26th place out of 36 countries in the prevalence of functioning grafts worldwide. Cost-effectiveness analyses of these treatments have shown that renal transplantation (RTx) is overall the least expensive, followed by PD, while in-centre HD is the most expensive. The treatments rank in exactly the same order with respect to the quality of life they provide to patients. The main reasons for the low RTx rate in Greece are the lack of organ donation, largely due to inadequate information, the inefficient organ distribution system, a high number of private HD centres not interested in RTx, and social factors. The objective of the present work was to implement a model for projecting the number of ESRD patients in Greece by 2020 and to investigate the impact of different scenarios of an increase in RTx. In addition, a cost-effectiveness analysis of the increase in RTx was performed. The projection was based on a Markov chain model. Markov models are distinguished by their simplicity and their ability to accurately represent many clinical problems. A deterministic Markov chain model was implemented to predict the future number of prevalent ESRD patients in Greece, and Monte Carlo techniques were applied to add robustness to the model. Thus two prediction models were implemented: a Markov chain and a Markov Chain Monte Carlo (MCMC) model. Age-specific data (<45, 45-65, >65 age groups) on the numbers of incident and prevalent ESRD patients in Greece, available from the European Renal Association - European Dialysis and Transplant Association reports for the period 1998-2009, were used for the implementation. The basic component of the Markov chain is the transition matrix, which defines the probability of a patient moving between the four states: HD, PD, RTx and death. An iterative error-minimization technique based on the data from 1998 to 2006 was used to define the transition probabilities of the Markov chain. Both the Markov chain and MCMC models were successfully validated against data for the period 2007-2009. Each model predicted the number of incident ESRD patients in Greece in a different way. For the Markov chain model, three incidence-rate scenarios were applied: low, medium and high. Additionally, two different approaches to the increase in RTx were proposed, one for each model. In the Markov chain model, two scenarios of RTx increase were applied to the number of prevalent patients. The first assumed that the average number of transplants performed in Greece during the period 2005-2009 will double by 2020. The second assumed that by 2020 Greece will reach the 2009 transplantation rate of Norway, the highest rate reported worldwide that year.
In the MCMC model, the increase in RTx was accomplished by increasing the number of incident patients receiving RTx by 1% annually and reducing the number of patients starting HD accordingly. The Markov chain model projected an increase in the number of prevalent patients in Greece of 19.3%, 24.4% or 42.2% in 2020 compared with 2009, depending on the incidence scenario applied. Similarly, the MCMC model projected a 25.0% increase in prevalence. In the Markov chain model, the results of the increase in RTx indicated that in 2020 there would be a 64.6% (first scenario) or 107.2% (second scenario) increase in the number of RTx patients compared with 2009, resulting in total savings of €50.2 million and €112.37 million, respectively, for the period 2010-2020. Finally, the increase in RTx in the MCMC model indicated a 57.9% increase in patients living with a transplanted kidney, resulting in total savings of €68.2 million. The results of both models suggest that performing more kidney transplantations instead of HD would reduce treatment costs for the country's healthcare system, while at the same time improving the quality of life for a significant number of ESRD patients.
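A hedged sketch of the deterministic projection step: prevalent patients move yearly between the four states named in the abstract (HD, PD, RTx, death) while an incident cohort enters each year. The transition matrix, starting mix and incidence numbers below are illustrative placeholders, not the thesis's estimates.

```python
import numpy as np

states = ["HD", "PD", "RTx", "dead"]
# Hypothetical annual transition probabilities (rows sum to 1)
P = np.array([
    [0.82, 0.02, 0.04, 0.12],   # from HD
    [0.10, 0.74, 0.05, 0.11],   # from PD
    [0.02, 0.00, 0.95, 0.03],   # from RTx
    [0.00, 0.00, 0.00, 1.00],   # death is absorbing
])
assert np.allclose(P.sum(axis=1), 1.0)

# Illustrative 2009 prevalent mix (roughly the 74% / 7% / 19% split above)
x = np.array([7400.0, 700.0, 1900.0, 0.0])
incident = np.array([850.0, 100.0, 50.0, 0.0])  # assumed new patients/year

for year in range(2010, 2021):   # project year by year to 2020
    x = x @ P + incident
print({s: round(v) for s, v in zip(states, x)})
```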
286

Chaînes de Markov cachées et séparation non supervisée de sources / Hidden Markov chains and unsupervised source separation

Rafi, Selwa 11 June 2012
The restoration problem is encountered in a wide variety of domains, notably signal and image processing. It consists of recovering original data from observed data. For multidimensional data, the problem can be solved using different approaches depending on the nature of the data, the transformation operator and the presence or absence of noise. In this work, we first treated the problem in the case of discrete data in the presence of noise. In that case, the restoration problem is analogous to a segmentation problem. We exploited pairwise and triplet Markov chain models, which generalize hidden Markov chains. The interest of these models lies in the possibility of generalizing the computation of the posterior probability, which allows Bayesian segmentation. We considered these methods for two-dimensional observations and applied the algorithms to separate the two sides of scanned manuscript documents in which the texts of the two faces of a page are mixed (show-through). Second, we attacked the restoration problem in a blind source separation context. A classical method in blind source separation, known as Independent Component Analysis (ICA), requires the assumption that the sources are statistically independent. In real situations, this assumption is not always verified. Consequently, we studied an extension of the ICA model to the case where the sources may be statistically dependent. To this end, we introduced a latent process that governs the dependence and/or independence of the sources. The proposed model combines a linear instantaneous mixing model, as in ICA, with a probabilistic model on the sources with hidden variables. In this framework, we show how the technique of Iterative Conditional Estimation allows the usual independence assumption to be weakened to a conditional independence assumption.
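A hedged sketch of the forward-backward recursions and the MPM (maximum posterior mode) rule for a plain two-state hidden Markov chain with Gaussian noise, the baseline model that pairwise and triplet Markov chains generalize. Parameters are assumed known here, whereas the thesis estimates them unsupervised (e.g. by Iterative Conditional Estimation); all numbers are illustrative.

```python
import numpy as np

A = np.array([[0.95, 0.05],
              [0.05, 0.95]])           # hidden-state transition matrix
means, sd = np.array([0.0, 2.0]), 1.0  # Gaussian emission parameters
pi = np.array([0.5, 0.5])              # initial distribution
T = 200

rng = np.random.default_rng(3)
z = [0]
for _ in range(T - 1):                 # simulate the hidden chain
    z.append(rng.choice(2, p=A[z[-1]]))
z = np.array(z)
y = rng.normal(means[z], sd)           # noisy observations

lik = np.exp(-0.5 * ((y[:, None] - means) / sd) ** 2)  # emission likelihoods
alpha = np.zeros((T, 2))
beta = np.ones((T, 2))
alpha[0] = pi * lik[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):                  # normalized forward pass
    alpha[t] = (alpha[t - 1] @ A) * lik[t]
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):         # normalized backward pass
    beta[t] = A @ (lik[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

post = alpha * beta                    # marginal posterior of each state
post /= post.sum(axis=1, keepdims=True)
z_hat = post.argmax(axis=1)            # MPM segmentation
print("restoration error rate:", np.mean(z_hat != z))
```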
287

Modèles statistiques avancés pour la segmentation non supervisée des images dégradées de l'iris / Advanced statistical models for unsupervised segmentation of degraded iris images

Yahiaoui, Meriem 11 July 2017
The iris is considered one of the most robust and accurate modalities in biometrics because of its low error rates. These performances have been observed in controlled situations that impose constraints during acquisition in order to obtain good-quality images. Relaxing these constraints, at least partially, degrades the quality of the acquired images and consequently reduces the performance of these systems. One of the main solutions proposed in the literature to address these limits is to improve the iris segmentation step. The main objective of this thesis is to propose original methods for the segmentation of degraded iris images. Markov chains have been widely used to solve image segmentation problems. In this context, a feasibility study of unsupervised region segmentation of degraded iris images by Markov chains was carried out, with a view to future real-time application. Different image transformations and different coarse segmentation methods for parameter initialization were studied and compared. The best-performing models were inserted into an iris recognition system (with grayscale images) for comparison with existing methods. Finally, an extension of the modeling based on hidden Markov chains was developed for unsupervised segmentation of iris images acquired in visible light.
288

Bayesian Neural Networks for Financial Asset Forecasting / Bayesianska neurala nätverk för prediktion av finansiella tillgångar

Back, Alexander, Keith, William January 2019
Neural networks are powerful tools for modelling complex non-linear mappings, but they often suffer from overfitting and provide no measure of uncertainty in their predictions. Bayesian techniques are proposed as a remedy to these problems, as they both regularize and provide an inherent measure of uncertainty through their posterior predictive distributions. By quantifying predictive uncertainty, we attempt to improve a systematic trading strategy by scaling positions with uncertainty. Exact Bayesian inference is often impossible, and approximate techniques must be used. For this task, this thesis compares dropout, variational inference and Markov chain Monte Carlo. We find that dropout and variational inference provide powerful regularization techniques, but their predictive uncertainties cannot improve a systematic trading strategy. Markov chain Monte Carlo provides powerful regularization as well as promising estimates of predictive uncertainty that are able to improve a systematic trading strategy. However, Markov chain Monte Carlo suffers from an extreme computational cost in the high-dimensional setting of neural networks.
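A hedged sketch of one way to turn predictive samples into uncertainty-scaled positions, the core idea described above. The samples stand in for draws from a Bayesian network's posterior predictive distribution (via dropout, variational inference or MCMC, none of which is reproduced here); the sizing rule and all numbers are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np

rng = np.random.default_rng(4)

def position(pred_samples, cap=1.0):
    """Scale the trade by predictive mean over predictive std (capped)."""
    mu = pred_samples.mean()
    sigma = pred_samples.std() + 1e-12   # guard against zero uncertainty
    return float(np.clip(mu / sigma, -cap, cap))

# Hypothetical posterior predictive samples of next-period returns for two
# assets: same mean forecast, different predictive uncertainty.
confident = rng.normal(0.002, 0.005, size=1000)
uncertain = rng.normal(0.002, 0.020, size=1000)

print("low-uncertainty forecast  ->", round(position(confident), 3))
print("high-uncertainty forecast ->", round(position(uncertain), 3))
# The more uncertain forecast receives a proportionally smaller position.
```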
289

The use of supercapacitors in conjunction with batteries in industrial auxiliary DC power systems / Ruan Pekelharing

Pekelharing, Ruan January 2015
Control and monitoring networks often operate on AC/DC power systems. DC batteries and chargers are commonly used on industrial plants as auxiliary DC power systems for these control and monitoring networks. The energy demand and load profiles for these control networks differ from application to application. Proper design, sizing and maintenance of the components that form part of the DC control power system are therefore required. Throughout the load profile of a control and monitoring system there are various peak currents. These peak currents are classified as inrush and momentary loads, and they play a large role when calculating the required battery size for an application. This study investigates the feasibility of using supercapacitors in conjunction with batteries in order to reduce the required battery capacity. A reduction in the required battery capacity influences not only the cost of the battery itself, but also the hydrogen emissions, the physical space requirements, and the required rectifiers and chargers. When calculating the required battery size for an auxiliary power system, a defined load profile is required. Control and monitoring systems are used to control dynamic processes, which entails a continuous starting and stopping of equipment as the process demands. This starting and stopping of devices causes fluctuations in the load profile. Ideally, data should be obtained from a live plant for the purpose of defining load profiles. Unfortunately, due to the economic risks involved, installing data-logging equipment on a live industrial plant for research purposes is not allowed, and there are no historical data available from which load profiles could be generated. In order to evaluate the influence of supercapacitors, complex load profiles are required. This study therefore investigates an alternative method of defining the load profile for a dynamic process: load profiles for various applications are approximated using a probabilistic approach. The approximation methodology makes use of plant operating philosophies as input to Markov Chain Monte Carlo simulation. The required battery sizes for the approximated profiles are calculated using the IEEE recommended practice for sizing batteries. The approximated load profile, as well as the calculated battery size, are used to simulate the auxiliary power system. A supercapacitor is then introduced into the circuit and the simulations are repeated. The supercapacitor relieves the battery of the inrush and momentary loads in the load profile. The battery sizing calculations are repeated to test the influence of the supercapacitor on the required battery capacity. To investigate the full influence of adding a supercapacitor to the design, its impact on various factors is considered, including the battery size, charger size, H2 extraction system, maintenance requirements and the life of the battery. No major cost savings were evident from the results obtained. The primary reasons for this low cost saving are the fixed ranges in which battery sizes are available, as well as conservative battery data obtained from battery suppliers. It is believed that applications other than control and monitoring systems will show larger savings. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2015
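A hedged sketch of the probabilistic load-profile idea: each device is modelled as a two-state (off/on) Markov chain whose switching probabilities would come from the plant operating philosophy, and Monte Carlo sampling over devices and time yields candidate profiles whose peaks would feed a battery-sizing calculation. The device list and every parameter below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# (steady current A, inrush multiple, P(off->on) per min, P(on->off) per min)
devices = [
    (2.0, 6.0, 0.02, 0.05),
    (0.5, 1.0, 0.10, 0.10),
    (5.0, 4.0, 0.01, 0.02),
]
T, runs = 24 * 60, 200            # one day at 1-minute steps, 200 MC runs

peaks = []
for _ in range(runs):
    load = np.zeros(T)
    for amps, inrush, p_on, p_off in devices:
        state = 0                 # 0 = off, 1 = on
        for t in range(T):
            if state == 0 and rng.random() < p_on:
                state = 1
                load[t] += amps * inrush   # momentary inrush current
            elif state == 1 and rng.random() < p_off:
                state = 0
            elif state == 1:
                load[t] += amps            # steady running load
    peaks.append(load.max())

print(f"mean peak {np.mean(peaks):.1f} A, "
      f"95th percentile {np.percentile(peaks, 95):.1f} A")
```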
