431

Identifying Induced Bias in Machine Learning

Chowdhury Mohammad Rakin Haider (18414885) 22 April 2024 (has links)
<p dir="ltr">The last decade has witnessed an unprecedented rise in the application of machine learning in high-stake automated decision-making systems such as hiring, policing, bail sentencing, medical screening, etc. The long-lasting impact of these intelligent systems on human life has drawn attention to their fairness implications. A majority of subsequent studies targeted the existing historically unfair decision labels in the training data as the primary source of bias and strived toward either removing them from the dataset (de-biasing) or avoiding learning discriminatory patterns from them during training. In this thesis, we show label bias is not a necessary condition for unfair outcomes from a machine learning model. We develop theoretical and empirical evidence showing that biased model outcomes can be introduced by a range of different data properties and components of the machine learning development pipeline.</p><p dir="ltr">In this thesis, we first prove that machine learning models are expected to introduce bias even when the training data doesn’t include label bias. We use the proof-by-construction technique in our formal analysis. We demonstrate that machine learning models, trained to optimize for joint accuracy, introduce bias even when the underlying training data is free from label bias but might include other forms of disparity. We identify two data properties that led to the introduction of bias in machine learning. They are the group-wise disparity in the feature predictivity and the group-wise disparity in the rates of missing values. The experimental results suggest that a wide range of classifiers trained on synthetic or real-world datasets are prone to introducing bias under feature disparity and missing value disparity independently from or in conjunction with the label bias. We further analyze the trade-off between fairness and established techniques to improve the generalization of machine learning models such as adversarial training, increasing model complexity, etc. We report that adversarial training sacrifices fairness to achieve robustness against noisy (typically adversarial) samples. We propose a fair re-weighted adversarial training method to improve the fairness of the adversarially trained models while sacrificing minimal adversarial robustness. Finally, we observe that although increasing model complexity typically improves generalization accuracy, it doesn’t linearly improve the disparities in the prediction rates.</p><p dir="ltr">This thesis unveils a vital limitation of machine learning that has yet to receive significant attention in FairML literature. Conventional FairML literature reduces the ML fairness task to as simple as de-biasing or avoiding learning discriminatory patterns. However, the reality is far away from it. Starting from deciding on which features collect up to algorithmic choices such as optimizing robustness can act as a source of bias in model predictions. It calls for detailed investigations on the fairness implications of machine learning development practices. In addition, identifying sources of bias can facilitate pre-deployment fairness audits of machine learning driven automated decision-making systems.</p>
432

Robust Representation Learning for Out-of-Distribution Extrapolation in Relational Data

Yangze Zhou (18369795) 17 April 2024 (has links)
<p dir="ltr">Recent advancements in representation learning have significantly enhanced the analysis of relational data across various domains, including social networks, bioinformatics, and recommendation systems. In general, these methods assume that the training and test datasets come from the same distribution, an assumption that often fails in real-world scenarios due to evolving data, privacy constraints, and limited resources. The task of out-of-distribution (OOD) extrapolation emerges when the distribution of test data differs from that of the training data, presenting a significant, yet unresolved challenge within the field. This dissertation focuses on developing robust representations for effective OOD extrapolation, specifically targeting relational data types like graphs and sets. For successful OOD extrapolation, it's essential to first acquire a representation that is adequately expressive for tasks within the distribution. In the first work, we introduce Set Twister, a permutation-invariant set representation that generalizes and enhances the theoretical expressiveness of DeepSets, a simple and widely used permutation-invariant representation for set data, allowing it to capture higher-order dependencies. We showcase its implementation simplicity and computational efficiency, as well as its competitive performances with more complex state-of-the-art graph representations in several graph node classification tasks. Secondly, we address OOD scenarios in graph classification and link prediction tasks, particularly when faced with varying graph sizes. Under causal model assumptions, we derive approximately invariant graph representations that improve extrapolation in OOD graph classification task. Furthermore, we provide the first theoretical study of the capability of graph neural networks for inductive OOD link prediction and present a novel representation model that produces structural pairwise embeddings, maintaining predictive accuracy for OOD link prediction as the test graph size increases. Finally, we investigate the impact of environmental data as a confounder between input and target variables, proposing a novel approach utilizing an auxiliary dataset to mitigate distribution shifts. This comprehensive study not only advances our understanding of representation learning in OOD contexts but also highlights potential pathways for future research in enhancing model robustness across diverse applications.</p>
433

Robustness and stability in dynamic constraint satisfaction problems

Climent Aunés, Laura Isabel 07 January 2014 (has links)
Constraint programming is a paradigm wherein relations between variables are stated in the form of constraints. It is well known that many real-life problems can be modeled as Constraint Satisfaction Problems (CSPs), and much effort has been spent to increase the efficiency of algorithms for solving them. However, many of these techniques assume that the set of variables, domains, and constraints involved in the CSP is known and fixed when the problem is modeled. This is a strong limitation, because many problems arise in uncertain and dynamic environments, where the original problem may evolve because of the environment, the user, or other agents. In such situations, a solution that holds for the original problem can become invalid after changes. There are two main approaches for dealing with these situations: reactive and proactive. Reactive approaches entail re-solving the CSP after each solution loss, which is time-consuming. That is a clear disadvantage, especially for short-term changes, where solution loss is frequent. In addition, in many applications, such as on-line planning and scheduling, the delivery time of a new solution may be too long for actions to be taken on time, so a solution loss can produce several negative effects in the modeled problem. In a task-assignment production system with several machines, it could cause the shutdown of the production system, the breakage of machines, the loss of the material or object in production, etc. In a transport timetabling problem, a solution loss due to some disruption at one point may produce a delay that propagates through the entire schedule. All of these negative effects will probably also entail an economic loss. In this thesis we develop several proactive approaches. Proactive approaches use knowledge about possible future changes in order to avoid or minimize their effects, and are applied before the changes occur. Thus, our approaches search for robust solutions, which have a high probability of remaining valid after changes. Furthermore, some of our approaches also consider whether solutions can be easily adapted when they do not survive the changes in the original problem; these approaches search for stable solutions, which have an alternative solution that is similar to the previous one and can therefore be used in case of a value breakage. In this context, knowledge about the uncertain and dynamic environment sometimes exists; in many cases, however, this information is unknown or hard to obtain. For this reason, for the majority of our approaches (specifically, 3 of the 4 developed), the only assumptions made about changes are those inherent in the structure of problems with ordered domains. Given this framework, and therefore the existence of a significant order over domain values, it is reasonable to assume that the original bounds of the solution space may undergo restrictive or relaxed modifications. Note that the possibility of solution loss exists only when changes to the original bounds of the solution space are restrictive. Therefore, the main objective when searching for robust solutions in this framework is to find solutions located as far away as possible from the bounds of the solution space. To meet this criterion, we propose several approaches that can be divided into enumeration-based techniques and a search algorithm. / Climent Aunés, L. I. (2013). Robustness and stability in dynamic constraint satisfaction problems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34785
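A minimal sketch of the robustness criterion on a toy CSP with ordered domains: among all feasible assignments, prefer the one whose constraint slacks keep it farthest from the bounds of the solution space. The specific constraints and the max-min margin scoring are illustrative assumptions, not the thesis's actual enumeration techniques.

```python
from itertools import product

# Toy CSP with ordered domains: x + y <= 12 and x - y >= 2.
domains = {"x": range(0, 11), "y": range(0, 11)}
constraints = [lambda a: a["x"] + a["y"] <= 12,
               lambda a: a["x"] - a["y"] >= 2]

def margin(a):
    # Slack of each constraint; a robust solution keeps every
    # constraint as far as possible from the feasible-region boundary,
    # so it survives restrictive modifications of those bounds.
    return min(12 - (a["x"] + a["y"]),
               (a["x"] - a["y"]) - 2)

solutions = [dict(zip(domains, vals))
             for vals in product(*domains.values())
             if all(c(dict(zip(domains, vals))) for c in constraints)]

# Most robust solution under this toy criterion: maximal minimum slack.
print(max(solutions, key=margin))
```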
434

Robust strategies for glucose control in type 1 diabetes

Revert Tomás, Ana 15 October 2015 (has links)
Type 1 diabetes mellitus is a chronic and incurable disease that affects millions of people around the world. Its main characteristic is the total or partial destruction of the beta cells of the pancreas. These cells are in charge of producing insulin, the main hormone involved in the control of blood glucose. Sustained high blood glucose levels have negative health effects, causing complications of various kinds; for that reason, patients with type 1 diabetes mellitus need to receive insulin exogenously. Since 1921, when insulin was first isolated for use in humans and the first glucose monitoring techniques were developed, many advances have been made in clinical treatment with insulin. Currently, two main research lines focused on improving the quality of life of diabetic patients are open. The first concentrates on stem-cell research to replace damaged beta cells; the second has a more technological orientation, focusing on the development of new insulin analogs that emulate endogenous pancreatic secretion with higher fidelity, new noninvasive continuous glucose monitoring systems, insulin pumps capable of administering different insulin profiles, and the use of decision-support tools and telemedicine. The most important challenge the scientific community has to overcome is the development of an artificial pancreas, that is, of algorithms that allow automatic control of blood glucose. The main obstacle to tight glucose control is the high variability found in glucose metabolism, which is especially important during meal compensation. This variability, together with the delay in subcutaneous insulin absorption and action, causes controller overcorrection that leads to late hypoglycemia (the most important acute complication of insulin treatment). The proposals in this work pay special attention to overcoming these difficulties. Interval models are used to represent the patient's physiology and to take parametric uncertainty into account; this strategy is used both in the open-loop proposal for insulin dosing and in the closed-loop algorithm. Moreover, the design of the closed-loop proposal aims to avoid controller overcorrection so as to minimize hypoglycemia while adding robustness against glucose sensor failures and over/under-estimation of meal carbohydrates. The proposed algorithms have been validated both in simulation and in clinical trials. / Revert Tomás, A. (2015). Robust strategies for glucose control in type 1 diabetes [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/56001
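As a loose illustration of the interval-model idea, under strong simplifying assumptions (a toy first-order glucose model that is monotone in its single uncertain parameter, which is not the physiological model used in the thesis), simulating the extreme parameter values bounds every response the uncertain patient could produce:

```python
import numpy as np

def simulate(p_si, g0=180.0, hours=6, dt=0.1):
    """Toy first-order glucose decay toward a basal level of 100 mg/dl.
    p_si plays the role of an uncertain insulin-sensitivity parameter."""
    g, out = g0, []
    for _ in np.arange(0, hours, dt):
        g += dt * (-p_si * (g - 100.0))  # relax toward basal glucose
        out.append(g)
    return np.array(out)

# Interval model: because this toy dynamic is monotone in p_si, running
# the two extreme parameter values bounds all intermediate responses.
lower = simulate(p_si=0.8)   # most insulin-sensitive patient
upper = simulate(p_si=0.2)   # least insulin-sensitive patient
print(upper[-1], lower[-1])  # envelope of predicted glucose at 6 h
```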
435

Impulsive Control and Synchronization of Chaos-Generating-Systems with Applications to Secure Communication

Khadra, Anmar January 2004 (has links)
When two or more chaotic systems are coupled, they may exhibit synchronized chaotic oscillations. The synchronization of chaos is usually understood as the regime of chaotic oscillations in which the corresponding variables of the coupled systems are equal to each other. This kind of synchronized chaos is most frequently observed in systems specifically designed to produce this behaviour. In this thesis, one particular type of synchronization, called impulsive synchronization, is investigated and applied to low-dimensional chaotic, hyperchaotic, and spatiotemporal chaotic systems. This technique drives one chaotic system, called the response system, with samples of the state variables of the other chaotic system, called the drive system, at discrete moments. Equi-Lagrange stability and the equi-attractivity-in-the-large property of the synchronization error become our major concerns when discussing the dynamics of synchronization, in order to guarantee the convergence of the error dynamics to zero. Sufficient conditions for equi-Lagrange stability and equi-attractivity in the large are obtained for the different types of chaos-generating systems used. The robustness of synchronized chaotic oscillations with respect to parameter variations and time delay is also investigated for impulsive synchronization of low-dimensional chaotic and hyperchaotic systems. Because it is impossible to build two identical chaotic systems, and transmission and sampling delays in impulsive synchronization are inevitable, robustness becomes a fundamental issue in the models considered. It is established in this thesis that under relatively large parameter perturbations and bounded delay, impulsive synchronization still shows very desirable behaviour. Criteria for robustness of this type of synchronization are derived for both cases; in particular, for time delay, sufficient conditions for the synchronization error to be equi-attractive in the large are derived, and an upper bound on the delay terms is obtained in terms of the other parameters of the systems involved. These theoretical results are reconfirmed numerically by analyzing the Lyapunov exponents of the error dynamics and by simulations of the different models discussed in each case. The application of the theory of synchronization in general, and impulsive synchronization in particular, to communication security is also presented in this thesis. A new impulsive cryptosystem, called the induced-message cryptosystem, is proposed and its properties are investigated. This cryptosystem does not require the transmission of the encrypted signal; instead, the impulses carry the information needed for synchronization and for retrieving the message signal. Thus the security of transmission is increased, and the time-frame congestion problem discussed in the literature is also solved. Several other impulsive cryptosystems are proposed to address further security issues and to illustrate the different properties of impulsive synchronization. Finally, extending impulsive synchronization to spatiotemporal chaotic systems generated by partial differential equations is addressed: several possible models implementing this approach are suggested, and a few questions are raised toward possible future research in this area.
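A minimal numerical sketch of impulsive synchronization: two identical Lorenz systems evolve uncoupled, and at discrete moments samples of the drive's state correct the response. The Lorenz model, Euler integration, 0.2 s impulse interval, and 0.9 correction gain are illustrative assumptions, not the thesis's systems or stability conditions.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, T, impulse_every = 1e-3, 10.0, 200    # impulses every 0.2 s
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([5.0, -5.0, 20.0])        # different initial condition

for k in range(int(T / dt)):
    drive = drive + dt * lorenz(drive)
    resp = resp + dt * lorenz(resp)       # uncoupled between impulses
    if k % impulse_every == 0:
        # Impulsive correction: jump the response toward the sampled
        # drive state, removing 90% of the error at each impulse.
        resp = resp + 0.9 * (drive - resp)

print(np.linalg.norm(drive - resp))       # error near zero: synchronized
```

The contraction at each impulse (factor 0.1) outweighs the chaotic error growth between impulses, so the synchronization error converges to zero.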
436

Synthèse croisée de régulateurs et d'observateurs pour le contrôle robuste de la machine synchrone / Cross-synthesis of controler and observer parameters for robust control of synchronous drive

Carrière, Sébastien 28 May 2010 (has links)
This thesis studies control-law synthesis for servo drives coupled to a flexible load with uncertain parameters, using only the measurement of the motor position. The control law aims to minimize the effects of parameter variations while meeting an industrial-type specification (response time, overshoot, simplicity of implementation and synthesis). To this end, an observer is designed jointly with a controller: a state-feedback controller obtained from a linear-quadratic minimization that places the dominant pole is associated with a Kalman observer. Both structures use classical synthesis methods, namely pole placement and the choice of the Kalman weighting matrices. For the latter, two strategies are considered. The first uses standard diagonal weighting matrices; many degrees of freedom are available, and the results are good. The second defines the state-noise matrix from the variation of the system's dynamic matrix; the number of degrees of freedom is reduced, the results remain similar to the previous strategy, and the synthesis is simplified. This yields a method requiring little theoretical investment from an engineer, but one that is not robust in the formal sense. For this reason, μ-analysis, which characterizes robust stability, is applied in parallel with an evolutionary algorithm, allowing a synthesis that is faster and more precise than a human operator could achieve. This complete method shows the advantages of a cross-synthesis of the observer and the controller over a separate synthesis: for systems with varying parameters, the optimal placement of the control and observation dynamics no longer follows the classical decoupled strategy. Here the dynamics are coupled, or even inverted (the control dynamics slower than the observer's). Experimental results corroborate the simulations and explain the effects of the observers and controllers on the system's behaviour.
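A minimal sketch of the controller-observer pair described above, on a double-integrator stand-in for the servo drive (the thesis's flexible two-mass model and weight choices are not reproduced here): an LQ state-feedback gain and a Kalman observer gain are obtained from the two Riccati equations, using only the position measurement, and the closed loop is checked for stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator stand-in (states: position, velocity); only the
# motor position is measured, as in the thesis.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# LQ state-feedback gain: u = -K x_hat.
Q, R = np.diag([100.0, 1.0]), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman observer gain from the dual Riccati equation; W and V are the
# state- and measurement-noise weights the designer tunes.
W, V = np.diag([1.0, 10.0]), np.array([[0.01]])
Po = solve_continuous_are(A.T, C.T, W, V)
L = Po @ C.T @ np.linalg.inv(V)

# Closed-loop dynamics of [x; x_hat] must be stable (separation principle).
Acl = np.block([[A, -B @ K], [L @ C, A - B @ K - L @ C]])
print(np.linalg.eigvals(Acl).real.max() < 0)  # True: stable design
```

The cross-synthesis question the thesis addresses is precisely how to pick Q, R, W, and V jointly, rather than tuning controller and observer dynamics separately.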
437

Optimalno i suboptimalno podešavanje parametara robusnih linearnih regulatora necelog reda / Optimal and suboptimal parameter tuning of robust, linear controllers of noninteger order

Jakovljević Boris 14 July 2015 (has links)
The thesis is dedicated to the robust control of systems whose linear controller and/or dynamics are of noninteger order, as well as to control problems where a noninteger-order controller combines linear and nonlinear dynamics and governs processes whose dynamics may be either linear or nonlinear.
438

Développement des bétons autoplaçants à faible teneur en poudre, Éco-BAP: formulation et performance / Development of low-powder self-consolidating concrete, Eco-SCC: design and performance

Esmaeilkhanian, Behrouz January 2016 (has links)
Although concrete is a relatively green material, the astronomical volume of concrete produced worldwide annually places the concrete construction sector among the notable contributors to global warming. The most polluting constituent of concrete is cement, because its production process releases, on average, 0.83 kg of CO2 per kg of cement. Self-consolidating concrete (SCC), a type of concrete that can fill the formwork without external vibration, is a technology that can offer a solution to the sustainability issues of the concrete industry. However, the workability requirements of SCC all stem from a higher powder content (compared to conventional concrete), which can increase both the cost of construction and the environmental impact of SCC for some applications. Ecological SCC (Eco-SCC) is a recent development combining the advantages of SCC with a significantly lower powder content. The maximum powder content of this concrete, intended for building and commercial construction, is limited to 315 kg/m³. Nevertheless, designing Eco-SCC can be challenging, since a delicate balance between the different ingredients is required to secure a satisfactory mixture. The principal objective of this Ph.D. program is to develop a systematic design method for Eco-SCC. Since the particle lattice effect (PLE) is a key parameter for designing stable Eco-SCC mixtures and is not well understood, the first phase of this research studies this phenomenon, focusing on the effect of particle-size distribution (PSD) on the PLE and on the stability of model mixtures as well as SCC. In the second phase, the design protocol is developed, and the fresh and hardened properties of the resulting Eco-SCC mixtures are evaluated. Since the assessment of robustness is crucial for successful large-scale production of concrete, the final phase of this work examines the robustness of one of the best-performing mixtures of Phase II. It was found that increasing the volume fraction of a stable size class increases the stability of that class, which in turn contributes to a higher PLE of the granular skeleton and better stability of the system. A continuous PSD in which the volume fraction of each size class is larger than that of the consecutive coarser class was shown to increase the PLE; using such a PSD allows a substantial increase in the fluidity of an SCC mixture without compromising segregation resistance. An index to predict the segregation potential of a suspension of particles in a yield-stress fluid was proposed. In the second phase of the dissertation, a five-step design method for Eco-SCC was established. The protocol starts with the determination of the powder and water contents, followed by the optimization of the sand and coarse-aggregate volume fractions according to an ideal PSD model (Funk and Dinger). The powder composition is optimized in the third step to minimize the water demand while securing adequate performance in the hardened state. The superplasticizer (SP) content is determined in the next step, and the last step assesses the global-warming potential of the formulated Eco-SCC mixtures. The optimized mixtures met all the self-consolidation requirements in the fresh state, and their 28-day compressive strength complied with the target range of 25 to 35 MPa. In addition, the mixtures showed sufficient performance in terms of drying shrinkage, electrical resistivity, and frost durability for the intended applications, and their eco-performance was satisfactory. The last phase demonstrated that the robustness of Eco-SCC is generally good with regard to variations in water content and in coarse-aggregate characteristics; special attention must, however, be paid to the dosage of SP during batching.
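For reference, the Funk and Dinger ideal PSD model mentioned above is the modified Andreassen curve. Below is a sketch of the target grading it produces, with an assumed exponent q = 0.25 and assumed size limits; the thesis's exact values are not stated here.

```python
import numpy as np

def funk_dinger(d, d_min, d_max, q=0.25):
    """Target cumulative percent finer than size d (modified Andreassen
    model of Funk and Dinger). q ~ 0.25 is a typical exponent for SCC;
    the value used in the thesis is an assumption here."""
    return 100.0 * (d**q - d_min**q) / (d_max**q - d_min**q)

# Ideal grading curve between assumed limits of 0.1 mm fines and 14 mm
# coarse aggregate; mix proportions are chosen to track this target.
sizes = np.array([0.125, 0.25, 0.5, 1, 2, 4, 8, 14])  # mm
print(np.round(funk_dinger(sizes, 0.1, 14.0), 1))
```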
439

L’interprétation mécaniste des communautés et des écosystèmes / The mechanistic interpretation of communities and ecosystems

Degré, Karl 01 1900 (has links)
The concepts of ecosystem and community are central to ecological explanations and to several environmental debates. However, since their introduction, the ontological status of these concepts has been controversial (that is, does an ecosystem have an existence of its own, beyond the existence of its constituent parts?). While some favor an epistemic interpretation, others hold that these ecological units are an integral part of the natural world. Taking the mechanistic explanatory theories in philosophy of science as a starting point, I argue that ecosystems and communities are mechanisms: entities and activities organized in such a way as to regularly exhibit precise phenomena (Machamer, Darden, and Craver 2000). The entities are the components of these mechanisms (e.g., species, ecological niches), while the activities are the causal relations that produce changes (e.g., photosynthesis, predation). Finally, the phenomena these mechanisms produce are the emergent properties specific to the community and ecosystem levels of organization (e.g., pH, biomass). Using the manipulationist approach developed by James Woodward (2003, 2004, 2010), I argue that it is possible to identify the causal components of ecosystems and communities and their relations to one another. Since it is possible to manipulate communities and ecosystems empirically and counterfactually in various ways (Penn 2003; Swenson and Wilson 2000), I conclude that we can robustly assert (Wimsatt 2007) that these entities really exist.
440

Politiques de robustesse en réseaux ad hoc / Robustness policies in mobile ad hoc networks

Bagayoko, Amadou Baba 11 July 2012 (has links)
Due to the unreliable nature of wireless communications and node mobility, mobile ad hoc networks (MANETs) suffer from frequent link failures and reactivations. Consequently, routes change frequently, causing a significant number of routing packets to be spent discovering new routes and leading to increased network congestion and transmission latency. MANETs therefore demand robust protocol design at all layers of the communication stack, particularly the MAC, routing, and transport layers. In this thesis, we adopt a robustness approach to improve communication performance in MANETs. We propose and study two protection architectures (protection by predictive analysis and protection by route redundancy), both coupled with routing-level restoration. The routing protocol is responsible for the failure-detection phase and uses link-level notifications to detect link failures. Our first proposition is based on a unipath reactive routing protocol with a modified route-selection criterion. The idea is to use metrics that can predict the future state of a route in order to improve its lifetime. Two predictive metrics based on node mobility are proposed: route reliability, and a combination of hop count and reliability. To determine these metrics, we propose an analytical formulation that computes the link reliability between adjacent nodes. This formulation takes into account the node mobility model and the characteristics of wireless communication, including inter-packet collisions and signal attenuation. The mobility models studied are Random Walk and Random Waypoint. We show the impact of these predictive metrics on network performance in terms of packet delivery ratio, normalized routing overhead, and number of route failures. The second proposition is a protection mechanism based on route redundancy, built on a multipath routing protocol. In this architecture, the recovery operation either switches traffic to an alternate route or computes a new route. We show that route redundancy improves communication robustness by reducing the failure recovery time. We then propose an analytical comparison between the different recovery policies of a multipath protocol, and deduce that segment recovery gives the best results in terms of recovery time and reliability.
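A toy sketch of why the predicted route reliability can disagree with the classical minimum-hop criterion: modelling a route's reliability as the product of its link reliabilities (assuming independent link failures, a simplification of the thesis's analytical formulation), a longer route over stable links can beat a shorter, fragile one.

```python
# Toy comparison of the two route-selection metrics: fewest hops versus
# highest predicted reliability. Link values are illustrative.
routes = {
    "short": [0.70, 0.75],              # 2 hops over fragile links
    "long":  [0.97, 0.96, 0.95, 0.98],  # 4 hops over stable links
}

def route_reliability(link_reliabilities):
    r = 1.0
    for p in link_reliabilities:
        r *= p          # the route survives only if every link survives
    return r

best_hops = min(routes, key=lambda r: len(routes[r]))
best_rel = max(routes, key=lambda r: route_reliability(routes[r]))
print(best_hops, best_rel)  # 'short' by hop count, 'long' by reliability
```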
