31 |
Uncertainties in Mobile Learning applications: Software Architecture Challenges. Gil de la Iglesia, Didac. January 2012.
The presence of computer technologies in our daily life is growing by leaps and bounds. One of the recent trends is the use of mobile technologies and cloud services for supporting everyday tasks and the sharing of information between users. The field of education is not absent from these developments, and many organizations are adopting Information and Communication Technologies (ICT) in various ways to support teaching and learning. The field of Mobile Learning (M-Learning) offers new opportunities for carrying out collaborative educational activities in a variety of settings and situations. The use of mobile technologies for enhancing collaboration provides new opportunities, but at the same time new challenges emerge. One of those challenges is discussed in this thesis: it concerns uncertainties related to the dynamic aspects that characterize outdoor M-Learning activities. The existence of these uncertainties forces software developers to make assumptions in their designs. However, these uncertainties are the cause of risks that may affect the required outcomes of M-Learning activities. Mitigation mechanisms can be developed and included to reduce the impact of these risks during the different phases of development, but uncertainties that are present at runtime require adaptation mechanisms to mitigate the resulting risks. This thesis analyzes the current state of the art in self-adaptation in Technology-Enhanced Learning (TEL) and M-Learning. The results of an extensive literature survey in the field and the outcomes of the Geometry Mobile (GEM) research project are reported. A list of uncertainties in collaborative M-Learning activities and the associated risks that threaten the critical Quality of Service (QoS) outcomes for collaboration is identified and discussed. Mitigation mechanisms to cope with these problems are elaborated in detail. The results of these efforts provide valuable insights and a basis for the design of a multi-agent self-adaptive architecture for multiple concerns, which is illustrated with a prototype implementation. The proposed conceptual architecture is an initial cornerstone towards the creation of a decentralized, distributed self-adaptive system for multiple concerns that guarantees collaboration in M-Learning.
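As an illustration only, the kind of runtime adaptation such an architecture aims at can be pictured as a small monitor-analyse-plan-execute loop over a single QoS concern such as connectivity; the thresholds, metric and adaptation actions below are invented for the sketch and are not taken from the thesis prototype.

    # Minimal self-adaptation loop for one QoS concern (connectivity).
    # Thresholds, metric names and adaptation actions are illustrative only;
    # the thesis proposes a multi-agent architecture, not this exact loop.
    class ConnectivityAdapter:
        def __init__(self, latency_threshold_ms=500):
            self.latency_threshold_ms = latency_threshold_ms
            self.mode = "cloud-sync"              # current collaboration mechanism

        def monitor(self, measured_latency_ms):
            # Collect a runtime observation of the uncertain property.
            return measured_latency_ms

        def analyse(self, latency_ms):
            # Decide whether the QoS requirement for collaboration is at risk.
            return latency_ms > self.latency_threshold_ms

        def plan_and_execute(self, at_risk):
            # Mitigate the risk by switching to a local peer-to-peer mode.
            if at_risk and self.mode == "cloud-sync":
                self.mode = "peer-to-peer"
            elif not at_risk and self.mode == "peer-to-peer":
                self.mode = "cloud-sync"
            return self.mode

    adapter = ConnectivityAdapter()
    for latency in [120, 180, 900, 1100, 200]:    # simulated outdoor measurements
        at_risk = adapter.analyse(adapter.monitor(latency))
        print(latency, "ms ->", adapter.plan_and_execute(at_risk))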
32 |
Core Acquisition Management in Remanufacturing: Current Status and Modeling Techniques. Wei, Shuoguo. January 2015.
Remanufacturing is an important product recovery option that contributes to sustainable development. Cores, i.e. used products or parts, are essential resources for remanufacturing: without cores, there can be no remanufactured products. Challenges in the core acquisition process are mainly caused by uncertainties in return volume, return timing and core quality. Core Acquisition Management actively attempts to reduce these uncertainties and to achieve a better balance of demand and return for the remanufacturers. The aim of this dissertation is to extend the knowledge of Core Acquisition Management in remanufacturing by investigating the current status of research and industrial practice, and by developing quantitative models that assist decision making in the core acquisition process. In the dissertation, a literature review is first conducted to provide an overview of current research in Core Acquisition Management. Possible directions for further research are suggested, for example more studies based on non-hybrid remanufacturing systems and on the imperfect substitution assumption. Through an industrial survey carried out in a fast-developing remanufacturing market, China, environmental responsibility and ethical concerns, customer orientation and strategic advantage are identified as the most important motives for remanufacturers, while customer recognition is currently their most serious barrier. Suggestions for further improving the Chinese remanufacturing industry from the policy-makers' perspective are provided. Mathematical models are then developed to support acquisition decisions in two respects: dealing with the uncertainties of return volume and timing, and dealing with the uncertainties of core quality. The acquisition decision about volume and timing is first studied from a product life cycle perspective, in which the demand for remanufactured products and the core availability change over time. According to industrial observations, the remanufacturing cost decreases with the core inventory level. Using optimal control theory, core acquisition and remanufacturing decisions are derived that maximize the remanufacturer's profit. It is found that besides a simple bang-bang type control policy (either collecting as much as possible, or nothing), a special form of synchronizing policy (adjusting the core collection rate to the demand rate) also exists. Furthermore, the acquisition decision depends greatly on the valuation of cores, and Real Option Valuation approaches are then used to capture the value of the flexibility provided by owning cores when different aspects of the remanufacturing environment are random. More specifically, the value of disposing of a core earlier is investigated when the price of the remanufactured product is uncertain, and the impact of the correlation between stochastic demand and return is also studied. To deal with the uncertainties of core quality, refund policies with different numbers of quality classes are studied. Under the assumption of uniformly distributed quality, analytical solutions for these refund policies are derived. Numerical examples indicate that the customers' valuation of cores is an important factor influencing the return rates and the remanufacturer's profit. Refund policies with only a small number of quality classes can already bring major advantages. Credit refund policies (without deposits) are included for comparison.
In addition, within a game theory framework, the trade-off between the two types of quality inspection errors in a deposit-refund policy is studied. The salvage values of different cores strongly influence the remanufacturer's policy choices. The value of information transparency about the inspection errors is studied under different conditions. Interestingly, the customer may actually return more low-quality cores when the inspection accuracy is improved.
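As a toy illustration of the refund policies discussed above, the sketch below simulates a two-class deposit-refund policy under uniformly distributed core quality and estimates the return rate and the remanufacturer's expected profit by Monte Carlo; all refund levels, savings and customer valuations are invented, and the thesis derives analytical solutions for such policies rather than simulating them.

    # Two quality classes: a high refund for good cores, a low refund otherwise.
    import random

    def simulate(refund_high=40.0, refund_low=10.0, q_cut=0.6, n=100_000):
        profit, returned = 0.0, 0
        for _ in range(n):
            q = random.random()                # core quality ~ U(0, 1)
            v = random.uniform(0.0, 50.0)      # customer's own valuation of the core
            refund = refund_high if q >= q_cut else refund_low
            if refund >= v:                    # customer chooses to return the core
                savings = 100.0 * q            # remanufacturing savings grow with quality
                profit += savings - refund
                returned += 1
        return returned / n, profit / n

    rate, avg_profit = simulate()
    print(f"return rate = {rate:.2f}, expected profit per customer = {avg_profit:.1f}")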
33 |
Méthodes et outils ensemblistes pour le pré-dimensionnement de systèmes mécatroniques / Set-membership methods and tools for the pre-design of mechatronic systems. Raka, Sid-Ahmed. 21 June 2011.
Pre-sizing takes place upstream of the design process of a system: starting from a set of requirements, it consists in determining a set of possible technical solutions within a search space that is often very large and structured by partial and uncertain knowledge about the future system and its environment. Long before the final choice and precise sizing of the components of the complete system, designers have to rely on an initial specification, models of the main phenomena and various experience feedbacks to formalize constraints, make simplifying assumptions, consider various architectures and make choices based on imprecise data (e.g. intervals or finite sets of possible values). Since the choices made during pre-sizing can have heavy consequences for the subsequent development, it is essential to detect potential inconsistencies as early as possible and to be able to verify the satisfaction of the requirements in an uncertain context. In this work, a requirements verification methodology based on the exchange of set-membership models between principals and suppliers is proposed. It fits within a design paradigm based on the reduction of uncertainties.
After a part dedicated to the modeling of mechatronic systems, particular attention is paid to handling deterministic uncertainties affecting continuous quantities: techniques based on interval analysis, such as interval constraint satisfaction (interval CSP), reachability computations for dynamic knowledge models and the identification of set-membership behavioral models, are used and developed to provide tools for the proposed methodology and to contribute to the goal of a verification with guaranteed coverage.
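A minimal sketch of the interval techniques mentioned above is given below: a forward-backward contractor applied to a single design constraint p = f * v, written with invented variable names and ranges; a real pre-sizing problem would chain many such contractors inside an interval CSP solver and iterate them to a fixed point.

    # Interval arithmetic and a forward-backward contractor on p = f * v.
    def mul(a, b):
        products = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
        return (min(products), max(products))

    def div(a, b):                      # assumes 0 is not contained in b
        return mul(a, (1.0 / b[1], 1.0 / b[0]))

    def intersect(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        if lo > hi:
            raise ValueError("inconsistent requirements: empty interval")
        return (lo, hi)

    # Uncertain design data: force f in [80, 120] N, speed v in [0.5, 2.0] m/s,
    # required power p in [100, 150] W.
    f, v, p = (80.0, 120.0), (0.5, 2.0), (100.0, 150.0)

    p = intersect(p, mul(f, v))         # forward: p must be compatible with f * v
    f = intersect(f, div(p, v))         # backward: contract f using p and v
    v = intersect(v, div(p, f))         # backward: contract v using p and f
    print("contracted domains:", f, v, p)

If the requirements were incompatible, the empty intersection would expose the inconsistency early, which is precisely the role such tools play in the proposed verification methodology.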
34 |
The jet energy scale uncertainty derived from gamma-jet events for small and large radius jets and the calibration and performance of variable R jets with the ATLAS detector. Kogan, Lucy Anne. January 2014.
In this thesis the jet energy scale uncertainty of small and large radius jets at the ATLAS detector is evaluated in situ using gamma-jet events. The well-calibrated photon in the gamma-jet events is used to probe the energy scale of the jets. The studies of the jet energy scale of small radius jets are performed using 4.7 fb^-1 of data collected at √s = 7 TeV in 2011. The gamma-jet methods which were developed are then adapted and applied to large radius jets, using 20.3 fb^-1 of data collected at √s = 8 TeV in 2012. The new jet energy scale uncertainties are found to be ~1% for |η| < 0.8, rising to 2-3% for |η| > 0.8. These uncertainties are significantly lower than the 3-6% precision previously achieved at ATLAS using track jets as a reference object. Due to the increase in precision, uncertainties due to pile-up and to the topology of the jet also had to be evaluated. The total energy scale uncertainties for large radius jets are reduced by ~1-2% (0.5-1%) for |η| < 0.8 (> 0.8). This reduction will benefit analyses using large radius jets, and it is specifically shown to benefit the t-tbar resonance search in the semi-leptonic channel. The t-tbar search looks for events with two top quarks in the final state, where one decays leptonically and the other hadronically. The hadronically decaying top quark is reconstructed using a large radius jet, and the jet energy scale uncertainty is a dominant source of uncertainty in the analysis. In addition to the studies of the jet energy scale of large radius jets, the first derivation of a calibration, and jet energy scale uncertainties derived with gamma-jet events, are shown for Variable R jets. The Variable R jet algorithm is a new type of jet algorithm with a radius that is inversely proportional to the transverse momentum of the jet, making it useful for the study of high-momentum top quarks. It is shown that similar methods can be used to calibrate and assess the uncertainties of Variable R jets as are used for standard, fixed radius jets at the ATLAS detector, although some adaptations will be necessary. The studies provide a basis for the calibration of Variable R jets in the future.
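As a schematic illustration of the balance idea behind the gamma-jet technique, the snippet below estimates the jet response from the ratio of the reconstructed jet pT to the pT of the well-calibrated photon recoiling against it; the event sample is invented, and the real analysis applies event selections, out-of-cone and pile-up corrections and extracts the response in bins of pT and η.

    # Direct-balance estimate of the jet response from toy gamma-jet events.
    import statistics

    events = [  # (photon pT [GeV], reconstructed jet pT [GeV]) - invented values
        (110.0, 104.2), (95.0, 91.5), (130.0, 122.8), (150.0, 146.1),
        (88.0, 82.9), (120.0, 116.5), (105.0, 99.7), (140.0, 133.4),
    ]

    responses = [pt_jet / pt_gamma for pt_gamma, pt_jet in events]
    mean_r = statistics.mean(responses)
    stat_unc = statistics.stdev(responses) / len(responses) ** 0.5

    print(f"mean jet response          = {mean_r:.3f}")
    print(f"energy-scale correction    = {1.0 / mean_r:.3f}")
    print(f"relative stat. uncertainty = {stat_unc / mean_r:.1%}")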
35 |
Be lean to be resilient: Setting capabilities for turbulent times. Birkie, Seyoum Eshetu. January 2015.
In today's turbulent business environment, businesses across the globe are challenged to innovate their operations strategies and practices towards tighter delivery times, better quality and lower prices in order to remain profitable, while also managing unpredictable circumstances well. They often have to deal with the apparent paradox of advancing efficiency-fostering approaches, such as lean production, while enhancing operational resilience against unanticipated disruptions. The purpose of this study is to investigate whether and how practices from these seemingly contradictory paradigms in operations management can be utilised to attain a better competitive position in the face of uncertainties. The thesis comprises 'modules' of studies designed to address the three research questions systematically; this was necessary due to the different maturity levels of the concepts brought together. A predominantly qualitative mixed-method approach was used for the overall research, complemented by some quantitative analysis. The critical incident technique, case studies and Bayesian inference were used in the different studies (papers). Operational resilience is characterised in terms of five core functions: sense, build, reconfigure, re-enhance and sustain (RQ1). Resilience is also operationalised through routine practices that are bundled into internal/external and proactive/reactive dimensions of capabilities, which positively influence performance upon recovery from disruption. The thesis also presents an analysis showing that lean practice bundles lead to better operational performance in high-uncertainty contexts (RQ2). Finally, operational resilience (based on the routine practices that form the core functions) was found to have stronger synergies than trade-offs with lean (based on practice bundles) in times of turbulence, although some practice bundles do show a trade-off relation with certain core functions (RQ3). This thesis extends the resource-based view to high-uncertainty contexts through empirical evidence and shows that resilience (dynamic) capabilities can be built from practices that firms normally employ; these capabilities are sources of better performance and competitive advantage in turbulent business environments. The thesis contributes to the discussion on the paradox of lean and operational-resilience-based approaches in the same context: lean practice bundles lend themselves to synergy with resilience capabilities and leverage competitive gains in turbulent times. Practically, the findings of this thesis suggest that companies need not abandon their lean implementation to become more resilient. In fact, lean implementation should be extended to address value chain processes beyond the shop floor for integrative removal of waste, while remaining able to flexibly mitigate disruptions.
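Purely as an illustration of the kind of Bayesian comparison used in the thesis, the sketch below contrasts recovery rates of high and low lean adopters with a Beta-Binomial model; the counts and the flat Beta(1, 1) prior are invented and do not reproduce the actual analysis.

    # Posterior probability that high lean adopters recover from disruptions
    # more often than low adopters, from invented success/trial counts.
    import random

    def posterior_samples(successes, trials, n=50_000):
        # Beta(1 + s, 1 + f) posterior under a flat Beta(1, 1) prior
        return [random.betavariate(1 + successes, 1 + trials - successes)
                for _ in range(n)]

    high_lean = posterior_samples(successes=14, trials=18)   # invented counts
    low_lean  = posterior_samples(successes=8,  trials=17)

    prob_better = sum(h > l for h, l in zip(high_lean, low_lean)) / len(high_lean)
    print(f"P(recovery rate higher with lean bundles) ~ {prob_better:.2f}")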
This thesis was produced as part of the EMJD programme European Doctorate in Industrial Management (EDIM), funded by the European Commission under Erasmus Mundus Action 1. EDIM is run by a consortium consisting of the industrial management departments of three institutions: KTH Royal Institute of Technology, Stockholm, Sweden; Politecnico di Milano (POLIMI), Milan, Italy; and Universidad Politécnica de Madrid (UPM), Madrid, Spain.
36 |
Identification de champs de propriétés élastiques fondée sur la mesure de champs : application sur un composite tissé 3D / Identification of elastic properties fields based on full-field measurement: application to a 3D woven composite. Gras, Renaud. 18 December 2012.
Over the last decades, composite materials have been increasingly used in aeronautics. In particular, 3D woven composites offer interesting out-of-plane properties compared with laminates. This technology is being developed for the fan blades of aircraft engines. The difficulty lies in identifying and validating the macroscopic elastic orthotropic model of the fan blade root. Indeed, the assumption of scale separation needed to obtain the macroscopic material parameters by homogenization is not clearly satisfied within the blade root, which contains several material zones. The 3D woven composite forming the blade root is a complex, multi-scale material. This thesis therefore proposes an identification of the model parameters based on displacement field measurements by Digital Image Correlation (DIC) and on the Finite Element Model Updating (FEMU) identification method.
The identification takes into account the influence of the CCD sensor noise, present in the images used for DIC, on the identified material parameters. Because of the large number of material parameters to identify and the possible couplings between them, it appears that some of them cannot be identified from the tensile test studied. Therefore, a regularization of the FEMU is proposed, based on a priori knowledge of the nominal parameter values and of their uncertainty. It consists in a weighting, with respect to the experimental data, that drives the parameters that cannot be identified towards their nominal values. Finally, the quality of the identification is quantified through the uncertainties on the identified material parameters and through image-based identification residual maps. These maps reflect the ability of the displacement field computed with the identified finite element model to correct the deformed image so that it matches the reference image, i.e. the images on which the DIC measurement is performed. The residual maps and the parameter uncertainties thus allow the proposed finite element model to be validated and, where necessary, its shortcomings to be highlighted. Challenging the nominal values or the modeling choices (for example the material zoning) to reach a description consistent with the experiment remains the engineer's responsibility; the work presented here helps to inform those choices.
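A minimal numpy sketch of the regularised updating idea is given below: the misfit between measured and simulated displacements is minimised together with a term that pulls poorly identifiable parameters towards their nominal values, weighted by the prior uncertainty. The linear "sensitivity matrix" S and all numbers are invented; in the thesis the simulated displacements come from a finite element model of the woven composite and the problem is solved iteratively.

    import numpy as np

    rng = np.random.default_rng(0)
    n_meas, n_par = 200, 4
    S = rng.normal(size=(n_meas, n_par))
    S[:, 3] *= 1e-3                    # the 4th parameter barely affects the measurements

    p_true = np.array([1.2, 0.8, 1.1, 1.0])
    p_nom  = np.ones(n_par)            # nominal (a priori) parameter values
    sigma_meas  = 0.05                 # DIC measurement noise level
    sigma_prior = np.array([0.3, 0.3, 0.3, 0.1])     # prior parameter uncertainty

    u_meas = S @ p_true + rng.normal(scale=sigma_meas, size=n_meas)

    # Normal equations of the regularised (Tikhonov / Bayesian) least squares
    A = S.T @ S / sigma_meas**2 + np.diag(1.0 / sigma_prior**2)
    b = S.T @ u_meas / sigma_meas**2 + p_nom / sigma_prior**2
    p_hat = np.linalg.solve(A, b)
    p_cov = np.linalg.inv(A)           # a posteriori parameter covariance

    print("identified parameters: ", np.round(p_hat, 3))
    print("standard uncertainties:", np.round(np.sqrt(np.diag(p_cov)), 3))

The weakly observable fourth parameter is returned close to its nominal value with an uncertainty close to the prior one, which is exactly the behaviour the regularisation is meant to provide.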
37 |
Propagation des incertitudes dans un modèle réduit de propagation des infrasons / Uncertainty propagation in a reduced model of infrasound propagation. Bertin, Michaël. 12 June 2014.
The perturbation of a system can give rise to wave propagation. A classical way to understand this phenomenon is to look for the natural modes of vibration of the medium. Mathematically, finding these modes amounts to computing the eigenvalues and eigenfunctions of the propagation operator. From a numerical point of view, however, this operation can be costly because the matrices involved can be very large. Furthermore, in most applications, uncertainties are inevitably associated with the model. The question then arises whether significant computational resources should be devoted to a simulation whose accuracy is not guaranteed. This thesis proposes an approach that both provides a better understanding of the influence of uncertainties on the propagation and considerably reduces the computational cost of simulating infrasound propagation in the atmosphere. The main idea is that not all modes have the same importance: a handful of them is often sufficient to describe the phenomenon without a significant loss of accuracy, and these modes turn out to be the ones that are most sensitive to atmospheric perturbations. More precisely, a sensitivity analysis is used to identify the most influential structures of the atmosphere, the groups of modes associated with them and the corresponding parts of the infrasound signal. These groups of modes can be specifically targeted in a spectrum computation by projecting the operator onto Krylov subspaces, which yields a significant reduction of the computational cost.
This model reduction method can also be applied in a statistical framework, and the expectation and variance of the result are again estimated without a significant loss of accuracy and at a low cost.
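The following sketch shows the computational idea in isolation: only a handful of eigenpairs of a large sparse operator are extracted with a Krylov (shift-invert Arnoldi) method instead of a full eigendecomposition. The operator is just a one-dimensional discrete Laplacian standing in for the far more complex atmospheric propagation operator of the thesis, and the shift sigma is an invented value targeting the group of modes of interest.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigs

    n = 20_000
    main = 2.0 * np.ones(n)
    off  = -1.0 * np.ones(n - 1)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

    # Shift-invert Arnoldi: only the few eigenpairs closest to sigma are computed,
    # at a cost far below a dense eigensolve on a 20000 x 20000 matrix.
    vals, vecs = eigs(A, k=6, sigma=1.0e-4, which="LM")

    print("selected eigenvalues:", np.sort(vals.real))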
38 |
Contrôle actif des vibrations en fraisage / Control for Vibration Phenomena in Mechanical Machining. Kochtbene, Feriel. 21 December 2017.
This thesis begins with a state of the art of the fields relevant to our objective (the usual techniques for reducing machining vibrations and active control methods) before validating the principle of active control of milling in a fixed reference frame. A state-space model of an Euler-Bernoulli beam, excited at one point and controlled at another through a piezoelectric actuator, is then developed. This model is used to design several compensators, following different control strategies. From an experimental point of view, the study is carried out on a device representative of our needs in terms of actuation and orders of magnitude (mechanical amplification, frequency range, etc.).
The robust control strategies developed to attenuate the vibratory displacements of this beam lead to conclusive results, presented first in simulation (which allowed a comparative study), with and without the cutting process, and then experimentally. The robustness of these control strategies is studied in simulation by adding uncertainties to the model in different ways. The model of the experimental system is then identified, the corresponding compensators are computed and tested on the test bench, validating the behaviour of the different control strategies used throughout this thesis. Finally, in order to deploy these strategies in a rotating frame (an active-control tool holder), the same approach is modelled and implemented for the case where the actuation is located in the rotating frame and acts on two axes simultaneously, located in the XY plane of the tool holder. The transverse vibrations of a rotating beam are first studied in the general case before neglecting the inertial and gyroscopic effects; indeed, active control of milling is of particular interest in finishing applications, where long tools of small diameter are used. The new expressions of the two transfer functions of the machining system are determined in order to obtain its state-space representation, the key to active control. Projecting the cutting process onto the rotating frame is essential to perform milling simulations with the active tool holder. This last chapter highlights the perspectives of the thesis, namely the active control of milling for any type of operation and tool diameter with a mechatronic tool holder designed for such operations.
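To make the state-space starting point concrete, the sketch below keeps a single bending mode of the beam and computes a full-state feedback (LQR) gain from the algebraic Riccati equation; the modal parameters are invented, and the thesis works with richer Euler-Bernoulli models and robust syntheses, with the piezoelectric actuator entering through the input matrix B.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    wn, zeta, b_gain = 2 * np.pi * 120.0, 0.01, 5.0   # modal frequency [rad/s], damping, actuator gain

    A = np.array([[0.0, 1.0],
                  [-wn**2, -2 * zeta * wn]])
    B = np.array([[0.0],
                  [b_gain]])

    Q = np.diag([wn**2, 1.0])      # penalise modal displacement and velocity
    R = np.array([[1e-3]])         # penalise actuator effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)            # state feedback u = -K x

    poles = np.linalg.eigvals(A - B @ K)
    print("state-feedback gain:       ", np.round(K, 3))
    print("closed-loop damping ratios:", np.round(-poles.real / np.abs(poles), 3))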
39 |
Fiabilité et optimisation des structures mécaniques à paramètres incertains : application aux cartes électroniques / Reliability and optimization of mechanical structures with uncertain parameters: application to electronic cards. Assif, Safa. 25 October 2013.
The main objective of this thesis is to study the reliability of electronic cards. These cards are used in many fields, such as the automotive industry, aeronautics, telecommunications and the medical sector. They provide all the functions necessary for the correct operation of an electronic system. Electronic cards undergo various stresses (mechanical, electrical and thermal) during handling and service, caused by drops, vibrations and temperature variations. These stresses can cause the failure of the solder joints of the electronic components, which in turn leads to the failure of the complete electronic system. The objectives of this work are: to develop a numerical model for simulating the drop test of an electronic card; to predict the fatigue life of solder joints while accounting for the uncertainties of the various variables; to develop a reliability-based optimization method to determine the optimal geometry that ensures a target reliability level of an electronic card; and to apply a new hybrid optimization method to determine the optimal geometry of an electronic card and of a solder joint. This thesis has led to two publications in an indexed journal, two further publications in preparation, and four communications at international conferences.
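As a schematic illustration of the reliability side of this work, the snippet below estimates a solder-joint failure probability by Monte Carlo from a Coffin-Manson-like life model; the limit state, the distributions and the required number of drops are all invented, whereas the thesis couples such reliability estimates with drop-test finite element simulations inside the optimization loop.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 1_000_000

    strain_range  = rng.lognormal(mean=np.log(0.012), sigma=0.15, size=n)  # plastic strain per drop
    fatigue_coeff = rng.normal(loc=0.35, scale=0.03, size=n)               # fatigue ductility coefficient
    required_drops = 100.0

    # Coffin-Manson-like life model: N = 0.5 * (strain / coeff)^(1/c), with c = -0.6
    drops_to_failure = 0.5 * (strain_range / fatigue_coeff) ** (1.0 / -0.6)
    p_fail = np.mean(drops_to_failure < required_drops)
    beta = -norm.ppf(p_fail)       # corresponding reliability index

    print(f"estimated failure probability: {p_fail:.3f}")
    print(f"reliability index beta:        {beta:.2f}")

A reliability-based optimization would wrap such an estimate in a loop over the geometric design variables until the target reliability index is reached at minimum cost.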
40 |
Phenomenology of the Higgs at the hadron colliders: from the Standard Model to Supersymmetry / Phénoménologie du Higgs auprès des collisionneurs hadroniques : du Modèle Standard à la Supersymétrie. Baglio, Julien. 10 October 2011.
This thesis has been conducted in the context of one of the most important searches at current hadron colliders: the search for the Higgs boson, the remnant of electroweak symmetry breaking. We study the phenomenology of the Higgs boson both in the Standard Model (SM) framework and in its Minimal Supersymmetric extension (MSSM).
After a review of the Standard Model in a first part, and of the key motivations and ingredients of supersymmetry in general and the MSSM in particular in a third part, we present the calculation of the inclusive production cross sections of the Higgs boson in its main channels at the two current hadron colliders, the Fermilab Tevatron and the CERN Large Hadron Collider (LHC). The SM case is treated in the second part; the MSSM results, where there are five Higgs bosons, are presented in the fourth part, focusing on the two main production channels, gluon-gluon fusion and bottom-quark fusion. The main output of this calculation is an extensive study of the various theoretical uncertainties that affect the predictions: the scale uncertainties, which probe our ignorance of the higher-order terms in a fixed-order perturbative calculation; the parton distribution function (PDF) uncertainties and the related uncertainty from the value of the strong coupling constant; and the uncertainties coming from the use of an effective field theory to simplify the hard calculation. We then study the Higgs decay branching ratios, which are also affected by several uncertainties. We present the combination of the production cross sections and decay branching fractions in some specific channels, which has interesting consequences for the total theoretical uncertainty. The results are then confronted with experiment, and we show that the theoretical uncertainties have a significant impact on the inferred limits, either in the SM Higgs search or on the MSSM parameter space, including an assessment of how SM backgrounds to Higgs production are affected by theoretical uncertainties. One significant result also comes out of the MSSM analysis and opens a novel search strategy for the Standard Model Higgs boson at the LHC. We finally present, in the last part, some preliminary results of this study in the case of exclusive production, which is of utmost interest for the experimentalists.
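To illustrate how individual theoretical uncertainties can be assembled into a total error band, the toy calculation below builds a scale uncertainty from an envelope of scale choices and combines it with a PDF+alpha_s uncertainty, both linearly and in quadrature; every number is an invented placeholder, and the appropriate combination procedure is itself one of the points discussed in the thesis.

    # Toy combination of theoretical uncertainties on a production cross section.
    sigma_central = 19.3                        # pb, central prediction (placeholder)

    # Cross sections for different (mu_R, mu_F) choices around the central scale
    scale_variations = [18.1, 18.7, 19.3, 19.9, 20.8, 17.9, 20.3]     # pb, invented
    scale_up   = max(scale_variations) - sigma_central
    scale_down = sigma_central - min(scale_variations)

    pdf_alphas_up, pdf_alphas_down = 1.5, 1.4   # pb, PDF+alpha_s envelope (invented)

    lin_up,   lin_down  = scale_up + pdf_alphas_up, scale_down + pdf_alphas_down
    quad_up   = (scale_up**2   + pdf_alphas_up**2)   ** 0.5
    quad_down = (scale_down**2 + pdf_alphas_down**2) ** 0.5

    print(f"sigma = {sigma_central:.1f} +{lin_up:.1f} -{lin_down:.1f} pb  (linear sum)")
    print(f"sigma = {sigma_central:.1f} +{quad_up:.1f} -{quad_down:.1f} pb  (quadratic sum)")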