571 |
Test Modeling of Dynamic Variable Systems using Feature Petri Nets. Püschel, Georg; Seidl, Christoph; Neufert, Mathias; Gorzel, André; Aßmann, Uwe. 08 November 2013 (has links)
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions resulting in static variability. Furthermore, due to the dependence on sensors and connectivity, mobile software has to adapt its behavior accordingly at runtime resulting in dynamic variability. However, software engineers need to assure quality of a mobile application even with this large amount of variability—in our approach by the use of model-based testing (i.e., the generation of test cases from models). Recent concepts of test metamodels cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach by introducing an example translator application.
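As a rough sketch of what a feature-guarded Petri net looks like in practice, the Python fragment below attaches a feature guard and an optional reconfiguration step to each transition, so that firing depends on the current configuration and can change it at runtime. The class design, the feature names, and the translator example are our own illustration, not the authors' metamodel.

    # Illustrative feature-guarded Petri net: a transition fires only when
    # its feature guard holds, and firing may reconfigure the feature set
    # (dynamic variability). Not the authors' implementation.
    class FeaturePetriNet:
        def __init__(self, marking, features):
            self.marking = dict(marking)    # place -> token count
            self.features = set(features)   # currently active features
            self.transitions = []           # (name, pre, post, guard, reconfig)

        def add_transition(self, name, pre, post, guard=None, reconfig=None):
            self.transitions.append((name, pre, post, guard, reconfig))

        def enabled(self, t):
            name, pre, post, guard, _ = t
            tokens_ok = all(self.marking.get(p, 0) >= n for p, n in pre.items())
            return tokens_ok and (guard is None or guard(self.features))

        def fire(self, t):
            name, pre, post, guard, reconfig = t
            assert self.enabled(t), f"{name} is not enabled"
            for p, n in pre.items():
                self.marking[p] -= n
            for p, n in post.items():
                self.marking[p] = self.marking.get(p, 0) + n
            if reconfig:                    # runtime reconfiguration step
                self.features = reconfig(self.features)

    # Hypothetical translator app: local lookup is only available offline.
    net = FeaturePetriNet({"idle": 1}, {"Offline"})
    net.add_transition("lookup_local", {"idle": 1}, {"done": 1},
                       guard=lambda f: "Offline" in f)
    net.add_transition("go_online", {"idle": 1}, {"idle": 1},
                       reconfig=lambda f: (f - {"Offline"}) | {"Online"})
    net.fire(net.transitions[1])            # reconfigure to Online
    print(net.enabled(net.transitions[0]))  # False: guard now fails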
|
572 |
Dynamic system safety analysis in HiP-HOPS with Petri Nets and Bayesian Networks. Kabir, Sohag; Walker, M.; Papadopoulos, Y. 18 October 2019 (has links)
Dynamic systems exhibit time-dependent behaviours and complex functional dependencies amongst their components. Therefore, to capture the full system failure behaviour, it is not enough to simply determine the consequences of different combinations of failure events: it is also necessary to understand the order in which they fail. Pandora temporal fault trees (TFTs) increase the expressive power of fault trees and allow modelling of sequence-dependent failure behaviour of systems. However, like classical fault tree analysis, TFT analysis requires a lot of manual effort, which makes it time-consuming and expensive. This in turn makes it less viable for use in modern, iterated system design processes, which require a quicker turnaround and consistency across evolutions. In this paper, we propose a model-based analysis of temporal fault trees via HiP-HOPS, a state-of-the-art model-based dependability analysis method supported by tools that largely automate the analysis and optimisation of systems. The proposal extends HiP-HOPS with Pandora, Petri Nets and Bayesian Networks, resulting in a dynamic dependability analysis that is more readily integrated into modern design processes. The effectiveness is demonstrated via application to an aircraft fuel distribution system. / Partly funded by the DEIS H2020 project (Grant Agreement 732242).
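Pandora's distinguishing feature is its temporal gates, notably the Priority-AND (PAND) and Priority-OR (POR), whose outputs depend on the order in which input events occur. The Python sketch below illustrates one common reading of their semantics over event occurrence times (None meaning the event never occurs); it is an illustration of the idea, not the HiP-HOPS implementation.

    # PAND: all inputs occur, in strict left-to-right order; the gate's
    # output occurs when the last input does. POR: the first input occurs
    # before every other input that occurs at all.
    def pand(*times):
        if any(t is None for t in times):
            return None
        return times[-1] if all(a < b for a, b in zip(times, times[1:])) else None

    def por(first, *others):
        if first is None:
            return None
        return first if all(t is None or first < t for t in others) else None

    # Order matters: pump A fails at t=3, backup B at t=7 -> top event at 7.
    print(pand(3, 7))  # 7
    print(pand(7, 3))  # None (wrong order, no top event)
    print(por(3, 9))   # 3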
|
573 |
Software test case generation from system models and specification. Use of the UML diagrams and High Level Petri Nets models for developing software test cases. Alhroob, Aysh M. January 2010 (has links)
The main part of software testing is the generation of test cases suitable for software system testing. The quality of the test cases plays a major role in reducing software system testing time and, consequently, its cost. At the model design stage, test cases are used to detect faults before the system is implemented; this early detection offers more flexibility to correct the faults in early stages rather than later ones. Deriving tests that cover both the static and the dynamic specifications of a software system model is one of the challenges in software testing. The static and dynamic specifications can be represented efficiently by the Unified Modelling Language (UML) class diagram and sequence diagram. The work in this thesis shows that High Level Petri Nets (HLPN) can represent both of them in one model. Using a proper model to represent the software specifications is essential to generate proper test cases.

The research presented in this thesis introduces novel and automated test case generation techniques that can be used within software system design testing. Furthermore, it introduces an efficient automated technique to generate a formal software system model (HLPN) from semi-formal models (UML diagrams). The work consists of four stages: (1) generating test cases from the class diagram and Object Constraint Language (OCL), which can be used for testing the static specifications (the structure) of the software system; (2) combining the class diagram, sequence diagram and OCL to generate test cases able to cover both static and dynamic specifications; (3) generating HLPN automatically from single or multiple sequence diagrams; and (4) generating test cases from HLPN.

The test cases generated in this work cover both the structural and the behavioural aspects of the software system model. In the first two phases, the class diagram and sequence diagram are decomposed into nodes (edges), which are linked by a Classes Hierarchy Table (CHu) and an Edges Relationships Table (ERT); the linking process is based on the relationships among classes and edges. The relationships of the software system components are controlled by a consistency checking technique, and the detection of these relationships has been automated. Test cases are generated from these interrelationships, reduced to a minimum number, and the best test case is selected at every stage; the degree of similarity between test cases is used to discard similar test cases and avoid redundancy. The transformation from UML sequence diagram(s) to HLPN simplifies the software system model and introduces a formal model rather than a semi-formal one. After decomposing the sequence diagram into Combined Fragments, the proposed technique converts each Combined Fragment into the corresponding block in HLPN; these blocks are connected in a Combined Fragments Net (CFN) to construct the HLPN model. Experimentation with the proposed techniques shows their effectiveness in covering most of the software system specifications.
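As one concrete illustration of the similarity-based reduction step described above, the sketch below drops a test case when its step set is too close to one already kept. Jaccard similarity and the 0.7 threshold are assumptions made for the example; the thesis's own similarity measure may differ.

    # Greedy similarity-based test-suite reduction (illustrative only).
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def reduce_suite(test_cases, threshold=0.7):
        kept = []
        for tc in test_cases:
            # keep a test case only if it is sufficiently different
            if all(jaccard(tc, k) < threshold for k in kept):
                kept.append(tc)
        return kept

    suite = [["login", "addItem", "checkout"],
             ["login", "addItem", "checkout", "logout"],  # near-duplicate
             ["login", "browse", "logout"]]
    print(reduce_suite(suite))  # the near-duplicate second case is dropped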
|
574 |
IN VIVO STUDIES OF CELL-FREE DNA AND DNASE IN A MURINE MODEL OF POLYMICROBIAL SEPSIS. Mai, Safiah Hwai Chuen. January 2016 (has links)
Sepsis is a clinical syndrome characterized by the systemic activation of inflammatory and coagulation pathways in response to microbial infection of normally sterile parts of the body. Despite considerable advances in our understanding of sepsis pathophysiology, sepsis remains the leading cause of death in non-coronary intensive care units (ICU), with a global disease burden between 15 and 19 million cases per year (Dellinger et al., 2008). Severe sepsis, defined as sepsis associated with organ dysfunction, is associated with mortality rates of 33% to 45%. The incidence of severe sepsis continues to increase by 1.5% per annum due to the aging population, a rise in the prevalence of comorbidities, and the wider use of immunosuppressive agents and invasive procedures (Angus et al., 2001). Over the past several decades, many potential treatments for sepsis have shown early promise, yet have failed to improve survival in over 100 Phase II and Phase III clinical trials (Marshall, 2014), suggesting that some fundamental knowledge is lacking in our understanding of sepsis pathophysiology.
Emerging studies on cell-free DNA (cfDNA), DNA released extracellularly into the circulation, demonstrate that cfDNA is a crucial link between inflammation and coagulation. In various conditions characterized by excessive inflammatory responses or aberrant prothrombotic responses, cfDNA has been implicated in exacerbating disease pathology (Atamaniuk, Kopecky, Skoupy, Säemann, & Weichhart, 2012; Fuchs, Brill, & Wagner, 2012; Swystun, Mukherjee, & Liaw, 2011). In clinical sepsis, levels of cfDNA upon admission into the ICU have strong prognostic value in predicting mortality (Dwivedi et al., 2012; Saukkonen et al., 2008). However, it is unclear whether these increases in cfDNA are an epiphenomenon during sepsis progression, or whether cfDNA actively plays a role in sepsis pathophysiology. In this work, in vivo studies were conducted to characterize the role of cfDNA in sepsis, the effects of DNase administration, and the potential mechanism by which cfDNA is released during experimental sepsis. In addition, mortality studies were conducted to identify surrogate markers of death to promote the design of humane and ethical animal studies in conducting sepsis research.
Polymicrobial sepsis was induced via a surgical procedure whereby the cecum is exteriorized, ligated and punctured twice to introduce a continuous source of microorganisms, a model termed cecal ligation and puncture (CLP). In our CLP sepsis model, levels of cfDNA increased in a time-dependent manner. These increases accompanied an early pro-inflammatory response marked by increased pro-inflammatory IL-6, a transient increase in anti-inflammatory IL-10, and elevated lung myeloperoxidase (MPO) activity. Septic mice with elevated cfDNA levels also had high bacterial loads in the lungs, blood, and peritoneal cavity fluid. Organ damage was also observed in mice following CLP surgery versus mice subjected to the non-septic sham control surgery, marked by increased levels of creatinine and alanine aminotransferase (ALT) indicative of kidney and liver injury, respectively. Histological analyses further confirmed lung and kidney damage following CLP surgery. Changes in coagulation were also observed in septic mice, as mice subjected to CLP had sustained increases in thrombin-antithrombin (TAT) complexes. In addition, plasma from CLP-operated mice had increased thrombin generation (i.e. increased endogenous thrombin potential, increased peak thrombin, decreased time to peak, and decreased lag time) mediated by FXIIa and enhanced by platelets. Following CLP-induced sepsis, elevations in cfDNA levels accompanied pro-inflammatory and pro-coagulant responses.
The effects of in vivo DNase treatment in septic mice were time-dependent. Early DNase treatment when cfDNA levels were low resulted in an exaggerated pro-inflammatory response marked by increased plasma IL-6 levels and increased lung damage. In contrast, delayed DNase treatment at time-points when cfDNA levels were elevated suppressed inflammation, characterized by an increase in anti-inflammatory IL-10 and reductions in cfDNA, IL-6, lung MPO, and ALT activity. Furthermore, delayed DNase administration resulted in decreased bacterial load in the lungs, blood, and peritoneal cavity fluid. Delayed DNase treatment also resulted in blunted pro-coagulant responses, as levels of TAT complexes were suppressed and thrombin generation from septic mouse plasma was normalized. Moreover, DNase treatment when cfDNA levels were elevated increased survival in CLP-operated mice by 80% and reduced lung and liver damage. These findings suggest that administration of DNase when cfDNA levels are elevated may reduce pro-inflammatory and pro-coagulant responses and that delayed DNase treatment may confer protection in the CLP model of sepsis.
One mechanism by which cfDNA is released is via the formation of neutrophil extracellular traps (NETs). Upon inflammatory stimulation, some neutrophils release chromatin material and antimicrobial proteins (i.e. neutrophil elastase, MPO, and histones) in an active process termed NETosis. Although NETs ensnare bacteria and exert antimicrobial properties, NETs may also exert harmful effects on the host by activating inflammation and coagulation. While some in vitro evidence suggests that neutrophils are the main source of cfDNA released following inflammatory stimulation, others have reported that neutrophils are not the main source of circulating cfDNA following septic challenge. To determine whether NETs contribute to cfDNA released during CLP sepsis, genetically modified mice that are incapable of forming NETs, PAD4-/- mice, were used. Levels of cfDNA in PAD4-/- mice were significantly lower than cfDNA levels in C57Bl/6 mice following CLP surgery, suggesting that NETs were a source of cfDNA in our model. Levels of IL-6, MPO, and bacterial load in the lungs, blood, and peritoneal cavity were significantly reduced, indicating that NETs exert pro-inflammatory effects in CLP sepsis. Thrombin generation was also suppressed in PAD4-/- mice, which suggests that NETs contribute to thrombin generation following CLP sepsis. NETs contribute to increases in circulating cfDNA and may exacerbate pathology by driving pro-inflammatory and pro-coagulant responses in CLP-induced sepsis.
Appreciating the implications of conducting research using animals, it is pertinent that researchers ensure the highest ethical standards and design animal studies in the most humane, yet scientifically rigorous manner. Using mortality studies, we validated the utility of physiological and phenotypic markers to assess disease severity and predict death in murine sepsis. Temperature measured via a rectal probe and sepsis scoring systems, which assess components such as orbital tightening, level of consciousness, and activity, were effective surrogate markers of death. These tools offer a non-invasive assessment of disease progression that does not artificially exacerbate sepsis pathology, and they provide immediate information regarding any changes in health status. Surrogate markers of death also provide reliable monitoring to meet increasing standards of ethical, humane animal research and a feasible, cost-efficient means to obtain vital signs in small rodents. We have proposed a scoring system which can be used for assessing disease severity, endpoint monitoring, and predicting death, to obviate inhumane methods of using death as an endpoint in sepsis studies.
In summary, cfDNA levels are elevated in CLP-induced sepsis and these elevations accompany pro-inflammatory and pro-coagulant responses. NETosis may be a mechanism by which cfDNA is released and NETs may drive inflammation and coagulation in CLP sepsis. Delayed DNase administration may suppress inflammation and coagulation and may be protective in polymicrobial sepsis. In future animal sepsis studies, surrogate markers of death and a sepsis scoring system can be used in place of death as an endpoint to raise the standards in conducting ethical, humane sepsis research. / Thesis / Doctor of Philosophy (PhD)
|
575 |
Probabilistic guarantees in model-checking with Time Petri Nets. Lecart, Manon. January 2023 (has links)
With the prevalence of technology and computer systems in today's society, it is crucial to ensure that the systems we use are secure. The fields that study these issues, cybersecurity and cybersafety, use the formal verification technique of model-checking. This paper tackles one aspect of the work needed to develop model-checking methods, as we try to improve the efficiency and the reliability of model-checking techniques using the Time Petri Net model. Formal methods based on Time Petri Nets are not exempt from the state-explosion problem, and we study here different approaches to circumvent this problem. In particular, we show that limiting the exploration of such a model to runs with integer dates maintains the integrity of the model-checking result. We also show that it is possible to set a limit on the number of runs that can be explored while maintaining the probability that the observation is correct above a certain threshold.
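A standard way to fix such a limit on the number of explored runs is the Chernoff-Hoeffding bound used in statistical model checking; the sketch below shows that general recipe and is not necessarily the exact bound derived in the thesis.

    # Number of independent runs needed so that the estimated probability
    # is within +/- epsilon of the true value with confidence 1 - delta.
    import math

    def runs_needed(epsilon, delta):
        return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

    print(runs_needed(0.01, 0.05))  # 18445 runs for +/-1% at 95% confidence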
|
576 |
[en] DEALING WITH DECISION POINTS IN PROCESS MINING / [pt] TRATANDO PONTOS DE DECISÃO EM MINERAÇÃO DE PROCESSOS. DANIEL DUQUE GUIMARAES SARAIVA. 26 April 2019 (has links)
[en] Due to the increasing competitiveness and demand for higher performance, many companies realized that it is necessary to rethink and enhance their business processes. In order to achieve this goal, companies have been turning to computational techniques that are capable of extracting new information and insights from their ever-increasing datasets. Business processes normally have many places where a decision has to be made, and it is reasonable to expect that similar cases have similar decisions made for them during the process. The goal of this dissertation is to create a decision miner that automates the decision-making inside a process. First, we identify decision points in a Petri net model. Then, we transform the decision-making problem into a classification one, where each of the possible decisions becomes a class. In order to automate the decision-making, a decision tree is trained using data attributes from the event logs. A real-world case study is used to validate that the decision miner is reliable when using real-world data.
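The classification step described above can be prototyped directly with an off-the-shelf decision tree. In the hedged sketch below (using scikit-learn as one possible library), each branch leaving a decision point becomes a class and the tree is trained on case attributes from the event log; the attribute names and values are invented for the example.

    # Train a decision tree on event-log attributes at one decision point.
    from sklearn.tree import DecisionTreeClassifier

    # One row per case reaching the decision point: (amount, premium_customer)
    X = [[120, 0], [950, 1], [40, 0], [700, 0], [880, 1], [60, 1]]
    y = ["auto_approve", "manual_review", "auto_approve",
         "manual_review", "manual_review", "auto_approve"]

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[500, 0]]))  # predicted branch for a new case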
|
577 |
Slot-Exchange Mechanisms and Weather-Based Rerouting within an Airspace Planning and Collaborative Decision-Making Model. McCrea, Michael Victor. 18 April 2006 (has links)
We develop and evaluate two significant modeling concepts within the context of a large-scale Airspace Planning and Collaborative Decision-Making Model (APCDM) and, thereby, enhance its current functionality in support of both strategic and tactical-level flight assessments. The first major concept is a new severe weather-modeling paradigm that can be used to assess existing tactical en route flight plan strategies such as the Flight Management System (FMS) as well as to provide rerouting strategies. The second major concept concerns modeling the mediated bartering of slot exchanges involving airline trade offers for arrival/departure slots at an arrival airport that is affected by the Ground Delay Program (GDP), while simultaneously considering issues related to sector workloads, airspace conflicts, as well as overall equity concerns among the airlines. This research effort is part of an $11.5B, 10-year, Federal Aviation Administration (FAA)-sponsored program to increase the capacity of the U.S. National Airspace System (NAS) by 30 percent by the year 2010.
The innovative contributions of this research with respect to severe weather rerouting include (a) the concept of "Probability-Nets" and the development of discretized representations of various weather phenomena that affect aviation operations; (b) the integration of readily accessible severe weather probabilities from existing weather forecast data provided by the National Weather Service (NWS); (c) the generation of flight plans that circumvent severe weather phenomena with specified probability levels; and (d) a probabilistic delay assessment methodology for evaluating planned flight routes that might encounter potentially disruptive weather along their trajectories. Given a fixed set of reporting stations from the CONUS Model Output Statistics (MOS), we begin by constructing weather-specific probability-nets that are dynamic with respect to time and space. Essential to the construction of the probability-nets are the point-by-point forecast probabilities associated with MOS reporting sites throughout the United States. Connections between the MOS reporting sites form the strands within the probability-nets and are constructed based upon a user-defined adjacency threshold, defined as the maximum allowable great circle distance between any such pair of sites. When a flight plan traverses a probability-net, we extract probability data corresponding to the points where the flight plan and the probability-net strand(s) intersect. The ability to quickly extract this trajectory-related probability data is critical to our weather-based rerouting concepts and the derived expected delay and related cost computations in support of the decision-making process.
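As a concrete illustration of the strand-construction rule above, the sketch below links two reporting sites whenever their great-circle distance falls under the adjacency threshold; the coordinates and the 150 nautical-mile threshold are invented for the example.

    # Build probability-net strands from pairwise great-circle distances.
    import math
    from itertools import combinations

    def great_circle_nm(lat1, lon1, lat2, lon2, r_nm=3440.1):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r_nm * math.asin(math.sqrt(a))

    sites = {"A": (38.85, -77.04), "B": (39.17, -76.67), "C": (33.64, -84.43)}
    strands = [(u, v) for u, v in combinations(sites, 2)
               if great_circle_nm(*sites[u], *sites[v]) <= 150.0]
    print(strands)  # [('A', 'B')]: only the nearby pair is linked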
Next, we consider the superimposition of a flight-trajectory-grid network upon the probability-nets. Using the U.S. Navigational Aids (Navaids) as the network nodes, we develop an approach to generate flight plans that can circumvent severe weather phenomena with specified probability levels based on determining restricted, time-dependent shortest paths between the origin and destination airports. By generating alternative flight plans pertaining to specified threshold strand probabilities, we prescribe a methodology for computing appropriate expected weather delays and related disruption factors for inclusion within the APCDM model.
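A minimal way to picture the threshold-based route generation just described: discard every strand whose severe-weather probability exceeds the chosen level, then search for a shortest path over the surviving network. The sketch below does exactly that with plain Dijkstra; the graph data are invented, and the actual APCDM routing is time-dependent and far more constrained.

    import heapq

    def thresholded_shortest_path(edges, src, dst, p_max):
        # edges: (u, v, miles, storm_probability); keep acceptable ones only
        adj = {}
        for u, v, d, p in edges:
            if p <= p_max:
                adj.setdefault(u, []).append((v, d))
                adj.setdefault(v, []).append((u, d))
        dist, heap = {src: 0.0}, [(0.0, src, [src])]
        while heap:
            d, node, path = heapq.heappop(heap)
            if node == dst:
                return d, path
            if d > dist.get(node, float("inf")):
                continue
            for nxt, w in adj.get(node, []):
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt, path + [nxt]))
        return None  # no route clears the probability threshold

    edges = [("ORIG", "NAV1", 100, 0.10), ("NAV1", "DEST", 120, 0.70),
             ("NAV1", "NAV2", 90, 0.20), ("NAV2", "DEST", 110, 0.15)]
    print(thresholded_shortest_path(edges, "ORIG", "DEST", p_max=0.3))
    # (300.0, ['ORIG', 'NAV1', 'NAV2', 'DEST']): detours around the 0.70 strand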
We conclude our severe weather-modeling research by conducting an economic benefit analysis using a k-means clustering mechanism in concert with our delay assessment methodology in order to evaluate delay costs and system disruptions associated with variations in probability-net refinement-based information. As a flight passes through the probability-net(s), we can generate a probability-footprint that acts as a record of the strand intersections and the associated probabilities from origin to destination. A flight plan's probability-footprint will differ for each level of data refinement, from which we construct route-dependent scenarios and, subsequently, compute expected weather delay costs for each scenario for comparative purposes.
Our second major contribution is the development of a novel slot-exchange modeling concept within the APCDM model that incorporates various practical issues pertaining to the Ground Delay Program (GDP), a principal feature in the FAA's adoption of the Collaborative Decision-Making (CDM) paradigm. The key ideas introduced here include innovative model formulations and several new equity concepts that examine the impact of "at-least, at-most" trade offers on the entire mix of resulting flight plans from respective origins to destinations, while focusing on achieving defined measures of "fairness" with respect to the selected slot exchanges. The idea is to permit airlines to barter assigned slots at airports affected by the Ground Delay Program to their mutual advantage, with the FAA acting as a mediator, while being cognizant of the overall effect of the resulting mix of flight plans on air traffic control sector workloads, collision risk and safety, and equity considerations.
We start by developing two separate slot-exchange approaches. The first consists of an external approach in which we formulate a model for generating a set of package-deals, where each package-deal represents a potential slot-exchange solution. These package-deals are then embedded within the APCDM model. We further tighten the model representation using maximal clique cover-based cuts that relate to the joint compatibility among the individual package-deals. The second approach significantly improves the overall model efficiency by automatically generating package-deals as required within the APCDM model itself. The model output prescribes a set of equitable flight plans based on admissible trades and exchanges of assigned slots, which are, in addition, conformant with sector workload capabilities and conflict risk restrictions. The net reduction in passenger-minutes of delay for each airline is the primary metric used to assess and compare model solutions. Appropriate constraints are included in the model to ensure that the generated slot exchanges induce nonnegative values of this realized net reduction for each airline.
In keeping with the spirit of the FAA's CDM initiative, we next propose four alternative equity methods that are predicated on different specified performance ratios and related efficiency functions. These four methods respectively address equity with respect to slot-exchange-related measures such as total average delay, net delay savings, proportion of acceptable moves, and suitable value function realizations.
For our computational experiments, we constructed several scenarios using real data obtained from the FAA based on the Enhanced Traffic Management System (ETMS) flight information pertaining to the Miami and Jacksonville Air Route Traffic Control Centers (ARTCC). Through our experimentation, we provide insights into the effect of the different proposed modeling concepts and study the sensitivity with respect to certain key parameters. In particular, we compare the alternative proposed equity formulations by evaluating their corresponding slot-exchange solutions with respect to the net reduction in passenger-minutes of delay for each airline. Additionally, we evaluate and compare the computational-effort performance, under both time limits and optimality thresholds, for each equity method in order to assess the efficiency of the model. The four slot-exchange-based equity formulations, in conjunction with the internal slot-exchange mechanisms, demonstrate significant net savings in computational effort ranging from 25% to 86% over the original APCDM model equity formulation.
The model has been implemented using Microsoft Visual C++ and evaluated using a C++ interface with CPLEX 9.0. The overall results indicate that the proposed modeling concepts offer viable tools that can be used by the FAA in a timely fashion for both tactical purposes, as well as for exploring various strategic issues such as air traffic control policy evaluations; dynamic airspace resectorization strategies as a function of severe weather probabilities; and flight plan generation in response to various disruption scenarios. / Ph. D.
|
578 |
Changements climatiques, quel avenir pour le risque du paludisme en Ouganda ? Sadoine, Margaux. 11 1900 (has links)
Malaria, which is the most widespread vector-borne disease, has in recent years caused more and more epidemics linked to climatic anomalies. In several malaria-endemic countries such as Uganda, climate change is a major public health concern. Debates exist, however, about the future evolution of malaria in relation to climate, as the majority of prediction studies do not consider the effects of certain anthropogenic factors that influence transmission (e.g. vector control interventions).
Therefore, the objectives of this thesis were 1) to estimate the associations between malaria risk, environmental variables (such as precipitation, temperature, humidity, and vegetation) and vector control interventions (long-lasting insecticide-treated bed nets, LLINs, and indoor residual spraying, IRS), and 2) to predict malaria distribution under future climate scenarios.
The associations were studied with (i) data from a cohort of children from three sub-regions of Uganda using generalized linear mixed models based on a log-binomial distribution; (ii) data from passive surveillance of malaria in the general population of six sub-regions, using generalized linear mixed models based on a negative binomial distribution. The associations studied in the general population were then used to predict future risk under 14 climate simulations and two greenhouse gas emission scenarios (RCP4.5 and RCP8.5).
For the first objective, the results of the analysis of the infant cohort data highlighted a sub-regional variability in the form (linear and nonlinear), the direction and the magnitude of the associations between the environmental variables and the risk of malaria. Adjusting the regression model for IRS changed the magnitude and/or direction of environment-malaria associations, suggesting an interaction effect.
Using data from the general population, the pooled analysis of the six sub-regions showed that the interventions reduced malaria risk by approximately 35% with LLINs and by 63% with IRS; significant interactions were observed between some environmental variables and vector control interventions. At the sub-regional scale, variability in the form of environment-malaria relationships (linearity, non-linearity, direction) and in the influence of interventions was also observed.
Predictions of malaria risk with climate change suggest upward trends in malaria cases in the absence of interventions by 2050, although great variability in the predictions exists depending on the climate model considered (medians and min-max of the historical period vs RCP 4.5: 16,785, 9,902 - 74,382 vs 21,289, 11,796 - 70,606). Considering the effect of interventions, a reduction in the median number of annual cases of 35%, 63% and 76% is predicted for LLINs alone, IRS alone and the combination of LLINs and IRS, respectively.
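The headline numbers above combine into a simple back-of-the-envelope calculation: applying each estimated intervention effect to the projected median annual case count under RCP4.5. The figures are taken from the abstract; the rounding is ours.

    # Apply the estimated intervention reductions to the RCP4.5 projection.
    projected_median = 21289  # median annual cases, no interventions
    reductions = {"LLINs": 0.35, "IRS": 0.63, "LLINs + IRS": 0.76}
    for name, r in reductions.items():
        print(f"{name}: ~{projected_median * (1 - r):,.0f} cases/year")
    # LLINs: ~13,838   IRS: ~7,877   LLINs + IRS: ~5,109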
This thesis examined the influence of LLINs and IRS on the relationships between environmental variables and malaria, and demonstrated the importance of considering vector control measures in analyses of the epidemiological risk of malaria and in its prediction with climate change.
|
579 |
ADVANCING CARBON NEUTRALITY: Techno-economic analysis of Direct Air Capture at commercial scale. Nilsson, Martin. January 2024 (has links)
In light of escalating concerns over climate change and the imperative to mitigate greenhouse gas emissions, particularly carbon emissions, the pursuit of negative emissions technologies (NETs) has gained significant attention. Direct air capture (DAC) stands out as a promising avenue, offering the potential to actively remove carbon dioxide from the atmosphere. This degree project provides a thorough examination of two leading DAC projects, Mammoth and Stratos, which exemplify innovative approaches to achieving negative emissions at scale. By employing low-temperature DAC (LT DAC) and high-temperature DAC (HT DAC) respectively, Mammoth and Stratos confront the challenge of carbon capture with distinct technological strategies. This degree project employs a Techno-Economic Analysis (TEA) to estimate the Levelized Cost of CO2 Capture through DAC (LCOD), revealing Mammoth's LCOD at $260/tCO2 and Stratos's at $608/tCO2, excluding costs for carbon transport and storage. The TEA is followed by a sensitivity analysis to assess how the LCOD is affected by variations in input parameters, such as capital costs and electricity demand and prices, among other parameters. Furthermore, this degree project identifies that uncertainties remain regarding the carbon storage solution, including its efficiency, long-term environmental implications, and associated costs. Given the Stratos project's dependence on Enhanced Oil Recovery (EOR) as the method of storing the captured carbon, the concern regarding efficiency and environmental implications is particularly relevant, as this method could potentially increase oil production by 5-20%. As the discourse on DAC continues to evolve, this degree project advocates for the integration of Life Cycle Analysis (LCA) to comprehensively evaluate the environmental impacts of both projects. This would guide the path towards sustainable carbon capture solutions, aiding informed decision-making and guiding future DAC endeavors.
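A levelized cost of capture is typically computed by annualizing capital expenditure with a capital recovery factor, adding operating and energy costs, and dividing by the tonnes captured per year. The sketch below shows that generic recipe; every input number is an invented placeholder, not a figure behind the $260 or $608 results.

    # Generic LCOD calculation (illustrative inputs only).
    def lcod(capex, opex_per_t, kwh_per_t, elec_price, tonnes_per_year,
             rate=0.08, lifetime=25):
        crf = rate * (1 + rate) ** lifetime / ((1 + rate) ** lifetime - 1)
        return (capex * crf / tonnes_per_year   # annualized capital cost
                + opex_per_t                    # fixed O&M per tonne
                + kwh_per_t * elec_price)       # energy cost per tonne

    cost = lcod(capex=600e6, opex_per_t=60, kwh_per_t=1500,
                elec_price=0.05, tonnes_per_year=500_000)
    print(f"LCOD ~ ${cost:,.0f}/tCO2")  # ~ $247/tCO2 with these placeholders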
|
580 |
Coverability and expressiveness properties of well-structured transition systems. Geeraerts, Gilles. 20 April 2007 (has links)
Over the last fifty years, computers have come to occupy an ever more important place in our daily lives. They are now present in many applications, in the form of embedded systems. These applications are sometimes critical, insofar as any failure of the computer system can have catastrophic consequences, both human and economic. Consider, for example, the computer systems that control medical devices or certain vital systems (such as the brakes) of automobiles. To ensure the correctness of these computer systems, various computer-aided verification techniques have been proposed, mainly over the last three decades. These techniques rest on a common principle: give a formal description of both the system and the property it must satisfy, and apply an automatic method to prove that the system satisfies the property. Among the main models able to formally describe computer systems, the class of well-structured transition systems [ACJT96, FS01] occupies an important place, for two essential reasons. First, this class generalizes several other well-studied and useful classes of infinite-state models, such as Petri nets [Pet62] (and their monotonic extensions [Cia94, FGRVB06]) or lossy FIFO channel systems [AJ93]. Second, interesting problems can be solved algorithmically on this class. Among these problems is the coverability problem, to which certain interesting safety properties can be reduced.

In the first part of this thesis, we focus on the coverability problem. Until now, the only general algorithm (that is, applicable to any well-structured system) for solving this problem was a so-called backward algorithm [ACJT96], which iteratively computes all potentially unsafe states and checks whether the initial state of the system is among them. We propose Expand, Enlarge and Check, the first forward algorithm for solving the coverability problem, which computes the potentially reachable states of the system and checks whether any of them are unsafe. This approach is more efficient in practice, as our experiments show. We also present techniques to increase the efficiency of our method when analyzing Petri nets (or one of their monotonic extensions) or lossy FIFO channel systems. Finally, we consider the computation of the coverability set of Petri nets, a mathematical object that can be used, in particular, to solve the coverability problem. We study the Karp & Miller algorithm [KM69], a classical solution for computing this set. We show that an optimization of this algorithm presented in [Fin91] is incorrect, and we propose an entirely new solution that is more efficient than the Karp & Miller algorithm.

In the second part of the thesis, we study the expressive power of well-structured transition systems, in terms of both infinite words and finite words. The expressive power of a class of systems is, in a sense, a measure of the diversity of behaviours that the models of that class can represent. Regarding infinite words, we study the expressive powers of Petri nets and of two of their extensions (Petri nets with non-blocking arcs and Petri nets with transfer arcs), and we show that there exists a strict hierarchy among these expressive powers. We also obtain partial results concerning the expressive power of Petri nets with reset arcs. Regarding finite words, we introduce the class of well-structured languages, which are the languages accepted by labelled well-structured transition systems in which the set of accepting states is upward-closed. We prove three pumping lemmas for these languages. These allow us to easily re-derive classical results from the literature, as well as several new ones. In particular, we prove, as in the case of infinite words, that there exists a strict hierarchy among the expressive powers of the Petri net extensions considered. / Doctorat en sciences, Spécialisation Informatique / info:eu-repo/semantics/nonPublished
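For contrast with the forward Expand, Enlarge and Check approach, the sketch below implements the classical backward algorithm [ACJT96] for plain Petri nets: it saturates the set of minimal elements of the target's upward-closed predecessors, then tests whether the initial marking covers one of them. This is a minimal illustration, not the thesis's tool.

    # Backward coverability for Petri nets: markings are tuples, each
    # transition is a (pre, post) pair of consumed/produced token vectors.
    def pre_basis(m, pre, post):
        # minimal marking from which one firing of (pre, post) covers m
        return tuple(max(f, mi + f - g) for mi, f, g in zip(m, pre, post))

    def backward_coverable(init, target, transitions):
        basis = {tuple(target)}
        while True:
            new = {pre_basis(m, f, g) for m in basis for f, g in transitions}
            merged = basis | new
            # keep only minimal elements: they represent the upward closure
            basis_next = {m for m in merged
                          if not any(o != m and all(a <= b for a, b in zip(o, m))
                                     for o in merged)}
            if basis_next == basis:
                return any(all(a <= b for a, b in zip(m, init)) for m in basis)
            basis = basis_next

    # Two places; the single transition consumes (1,0) and produces (0,2).
    print(backward_coverable((1, 0), (0, 2), [((1, 0), (0, 2))]))  # True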
|