41

Precise Analysis of Private And Shared Caches for Tight WCET Estimates

Nagar, Kartik January 2016 (has links) (PDF)
Worst Case Execution Time (WCET) is an important metric for programs running on real-time systems, and finding precise estimates of a program's WCET is crucial to avoid over-allocation and wastage of hardware resources and to improve the schedulability of task sets. Hardware caches have a major impact on a program's execution time, and accurate estimation of a program's cache behavior generally leads to a significant reduction of its estimated WCET. However, the cache behavior of an access cannot be determined in isolation, since it depends on the access history, and in multi-path programs, the sequence of accesses made to the cache is not fixed. Hence, the same access can exhibit different cache behavior in different execution instances. This issue is further exacerbated in shared caches in a multi-core architecture, where interfering accesses from co-running programs on other cores can arrive at any time and modify the cache state. Further, cache analysis aimed at WCET estimation should be provably safe, in that the estimated WCET should always exceed the actual execution time across all execution instances. Faced with such conflicting requirements, previous approaches to cache analysis try to find memory accesses in a program which are guaranteed to hit the cache, irrespective of the program input or the interferences from other co-running programs in the case of a shared cache. To do so, they find the worst-case cache behavior for every individual memory access, analyzing the program (and the interferences to a shared cache) to determine whether there are execution instances where an access can suffer a cache miss. However, this approach loses precision in predicting private cache behavior that can be safely used for WCET estimation, and is significantly imprecise for shared cache analysis, where it is often impossible to guarantee that an access always hits the cache. In this work, we take a fundamentally different approach to cache analysis, by (1) trying to find the worst-case behavior of groups of cache accesses, and (2) trying to find the exact cache behavior in the worst-case program execution instance, which is the execution instance with the maximum execution time. For shared caches, we propose the Worst Case Interference Placement (WCIP) technique, which finds the worst-case timing of interfering accesses that would cause the maximum number of cache misses on the worst-case execution path of the program. We first use Integer Linear Programming (ILP) to find an exact solution to the WCIP problem. However, this approach does not scale well for large programs, so we investigate the WCIP problem in detail and prove that it is NP-hard. In the process, we discover that the hardness of the WCIP problem lies in finding the worst-case execution path, i.e. the path that exhibits the maximum execution time in the presence of interferences. We use this observation to propose an approximate algorithm for performing WCIP, which bypasses the hard problem of finding the worst-case execution path by simply assuming that all cache accesses made by the program occur on a single path. This allows us to use a simple greedy algorithm to distribute the interfering accesses, choosing those cache accesses that could be most affected by interferences. The greedy algorithm also guarantees that the increase in WCET due to interferences is linear in the number of interferences.
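The greedy step can be illustrated with a small sketch (not the thesis's actual implementation; the resilience values, i.e. how many interferences an access can absorb before turning into a miss, are hypothetical inputs that a shared-cache analysis would provide):

```python
def greedy_wcip(resiliences, num_interferences, miss_penalty=10):
    """Greedily place interfering accesses so as to maximize the number
    of extra cache misses, converting the "cheapest" accesses first.

    resiliences[i] is the number of interferences access i can absorb
    before it turns from a hit into a miss.  Returns the resulting WCET
    increase, which is linear in the number of interferences.
    """
    budget = num_interferences
    extra_misses = 0
    for r in sorted(resiliences):
        cost = r + 1          # interferences needed to make this access miss
        if cost > budget:
            break
        budget -= cost
        extra_misses += 1
    return extra_misses * miss_penalty

# Toy example: five accesses, six interfering accesses from the other core.
print(greedy_wcip([0, 2, 1, 4, 0], num_interferences=6))  # -> 30
```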
Experimentally, we show that WCIP provides a substantial precision improvement in the final WCET over previous approaches to shared cache analysis, and the approximate algorithm almost matches the precision of the ILP-based approach while being considerably faster. For private caches, we discover multiple scenarios where the hit-miss predictions made by traditional Abstract Interpretation-based approaches are not sufficient to fully capture cache behavior for WCET estimation. We introduce the concept of cache miss paths, which are abstractions of program paths along which an access can suffer a cache miss. We propose an ILP-based approach which uses cache miss paths to find the exact cache behavior in the worst-case execution instance of the program. However, the ILP-based approach needs information about the worst-case execution path to predict the cache behavior, and hence it is difficult to integrate with other micro-architectural analyses. We then show that most of the precision improvement of the ILP-based approach can be recovered without any knowledge of the worst-case execution path, by a careful analysis of the cache miss paths themselves. In particular, we can use cache miss paths to find the worst-case behavior of groups of cache accesses. Further, we can find upper bounds on the maximum number of times that cache accesses inside loops can exhibit worst-case behavior. This results in a scalable, precise method for private cache analysis which can be easily integrated with other micro-architectural analyses.
42

Utveckling av arbetsprocess för effektivare produktutveckling : Tillämpad på standardisering av helautomatiskt snabbfäste till hjullastare / Development of a work process for more efficient product development: Applied to the standardization of a fully automatic quick coupler for wheel loaders

Clark, Eric, Olsson, David January 2021 (has links)
In order to be competitive in the market, companies are forced to develop new product development strategies that can be adapted to ever-increasing customer requirements. The purpose of this work was to streamline product development for companies in the manufacturing industry. The objective was to develop an efficient work process for product development and to apply the process to a project in which a fully automatic quick coupler for smaller wheel loaders would be standardized for the company OilQuick. The thesis was divided into two parts. The first part deals with the creation of the work process, which is grounded in existing research identified through a literature study. The second part deals with the application of the work process to the standardization of the quick coupler. Data and knowledge were collected through market research, study visits, meetings with customers and Reverse Engineering. Customer requirements were ranked using the Best-Worst method and translated into technical product characteristics using QFD (House of Quality). Three concepts were generated in Autodesk Inventor based on the requirements and product characteristics, together with the data obtained from Reverse Engineering and the market research. According to the customer requirements, the finished quick coupler had to fit at least four different wheel loader models in the weight class of five to eight tons. Consequently, all three concepts were modular, both to facilitate modifications between the wheel loader models and to minimize the number of parts that needed to be modified. The attachment brackets of all three concepts had standardized dimensions that fitted the examined wheel loader models. The concept that best met the customer requirements was selected using the Fuzzy TOPSIS method, and the final concept was adjusted according to OilQuick's wishes before the finished result was presented. The results showed that the work process was efficient: Reverse Engineering and the collaboration with the customer gave insight into the problem and an understanding of the product, the Best-Worst method made the ranking of the requirements fast without reducing its reliability, and the Fuzzy TOPSIS method streamlined the concept selection while the opinions of everyone involved were heard. To streamline product development further, additional methods should be investigated or developed. The concept proposals demonstrate that it is possible to standardize the interface between wheel loaders and attachments: through modular design, the quick coupler could be adapted to four different wheel loader models. To ensure that the quick coupler can be applied to a larger variety of wheel loaders, more wheel loader models should be measured and studied before further development.
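As an illustration of the concept-selection step, the sketch below ranks three concepts with the crisp TOPSIS closeness coefficient; the thesis uses the fuzzy variant (triangular fuzzy ratings), and the scores, criteria and weights here are hypothetical:

```python
import numpy as np

def topsis(scores, weights):
    """Rank alternatives with (crisp) TOPSIS.

    scores: (alternatives x criteria) matrix, higher is better.
    weights: criterion weights, e.g. obtained from the Best-Worst method.
    """
    m = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * w                                  # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)    # ideal / anti-ideal points
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness coefficient

# Three concepts scored against four (hypothetical) customer criteria.
closeness = topsis([[7, 8, 6, 9], [8, 6, 7, 7], [6, 9, 8, 6]],
                   weights=[0.4, 0.3, 0.2, 0.1])
print(closeness.argmax())  # index of the best concept
```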
43

From Theory to Implementation of Embedded Control Applications : A Case Study

Fize, Florian January 2016 (has links)
Control applications are used in almost all scientific domains and are subject to timing constraints. Moreover, different applications can run on the same platform, which leads to even more complex timing behaviors. However, some of the timing issues are not always considered in the implementation of such applications, and this can make the system fail. In this thesis, the timing issues are considered, i.e., the problem of non-constant delay in the control of an inverted pendulum with a real-time kernel running on an ATmega328p micro-controller. The study shows that control performance is affected by this problem. In addition, the thesis reports the adaptation of an existing real-time kernel, based on an EDF (Earliest Deadline First) scheduling policy, to the architecture of the ATmega328p. Moreover, the new approach of a server-based kernel is implemented in this thesis, still on the same Atmel micro-controller.
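A minimal sketch of the EDF policy the adapted kernel implements (the kernel itself runs in C on the ATmega328p; the task names and fields below are hypothetical):

```python
def edf_pick_next(ready_tasks, now):
    """Earliest Deadline First: among released tasks, run the one with
    the nearest absolute deadline."""
    runnable = [t for t in ready_tasks if t["release"] <= now]
    if not runnable:
        return None
    return min(runnable, key=lambda t: t["deadline"])

tasks = [
    {"name": "pendulum_control", "release": 0, "deadline": 10},
    {"name": "logger",           "release": 0, "deadline": 50},
    {"name": "ui",               "release": 5, "deadline": 8},
]
print(edf_pick_next(tasks, now=6)["name"])  # -> ui
```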
44

Analyse pire cas de flux hétérogènes dans un réseau embarqué avion / Heterogeneous flows worst case analysis in avionics embedded networks

Bauer, Henri 04 October 2011 (has links)
The certification process for avionics networks requires guarantees on data transmission delays. However, calculating the worst-case delay can be complex in the case of industrial AFDX (Avionics Full Duplex Switched Ethernet) networks. Tools such as Network Calculus provide a pessimistic upper bound on this worst-case delay. The communication needs of modern commercial aircraft keep expanding, and a growing number of flows with various constraints and characteristics must share the already existing resources. Currently deployed AFDX networks do not differentiate multiple classes of traffic: messages are processed in their arrival order in the output ports of the switches (FIFO servicing policy). The purpose of this thesis is to show that it is possible to provide upper bounds on end-to-end transmission delays in networks that implement more advanced servicing policies, based on static priorities (Priority Queuing) or on fairness (Fair Queuing). We show how the trajectory approach, based on scheduling theory in asynchronous distributed systems, can be applied to current and future AFDX networks (supporting advanced servicing policies with flow differentiation capabilities). We compare the performance of this approach with the reference tools whenever possible and study the pessimism of the computed upper bounds.
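For context, the single-node Network Calculus bound that such tools chain per switch can be sketched as follows (illustrative numbers only; the thesis's trajectory approach computes tighter end-to-end bounds and handles Priority and Fair Queuing):

```python
def rate_latency_delay_bound(sigma, rho, rate, latency):
    """Single-node Network Calculus delay bound for a token-bucket flow
    (burst sigma bits, rate rho bit/s) served by a rate-latency server
    (rate R bit/s, latency T s): D <= T + sigma / R, provided rho <= R.
    AFDX end-to-end bounds chain such per-switch terms."""
    if rho > rate:
        raise ValueError("flow rate exceeds service rate: no finite bound")
    return latency + sigma / rate

# A 4000-bit burst at 1 Mbit/s through a 100 Mbit/s output port with
# 16 us technological latency.
print(rate_latency_delay_bound(sigma=4000, rho=1e6, rate=100e6,
                               latency=16e-6))  # -> 5.6e-05 s
```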
45

Certified Compilation and Worst-Case Execution Time Estimation / Compilation formellement vérifiée et estimation du pire temps d'exécution

Maroneze, André Oliveira 17 June 2014 (has links)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
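A minimal, unverified sketch of the IPET step on a toy control-flow graph (one loop bounded to 10 iterations, hypothetical cycle costs) is shown below; the thesis's development is formally verified in Coq and integrated into CompCert, which this sketch does not reflect:

```python
from scipy.optimize import linprog

# Basic blocks of a toy CFG: entry -> header -> {body -> header | exit},
# with the loop body bounded to 10 iterations (the bound a loop-bound
# analysis would provide).  Execution counts maximize the total cost.
cost = [5, 2, 8, 3]                     # cycles for entry, header, body, exit
A_eq = [[1, 0, 0, 0],                   # entry executes once
        [0, 0, 0, 1],                   # exit executes once
        [-1, 1, -1, 0],                 # header count = entry + body (inflow)
        [0, 1, -1, -1]]                 # header count = body + exit (outflow)
b_eq = [1, 1, 0, 0]
A_ub = [[-10, 0, 1, 0]]                 # loop bound: body <= 10 * entry
b_ub = [0]

# linprog minimizes, so negate the cost vector; for this flow structure the
# LP relaxation is already integral (in general IPET solves an ILP).
res = linprog([-c for c in cost], A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, method="highs")
print("estimated WCET:", -res.fun)      # -> 110.0
```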
46

Estudo do pior caso na validação de limpeza de equipamentos de produção de radiofármacos de reagentes liofilizados. Validação de metodologia de carbono orgânico total / Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents. Methodology validation of total organic carbon.

Porto, Luciana Valeria Ferrari Machado 18 December 2015 (has links)
Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition, are mostly administered intravenously, and therefore compliance with the principles of Good Manufacturing Practices (GMP) is essential and indispensable. Cleaning validation is a GMP requirement and consists of documented evidence demonstrating that the cleaning procedures remove residues to pre-determined acceptance levels, ensuring that no cross contamination occurs. A simplification of cleaning process validation is accepted, and consists in choosing a product, called the "worst case", to represent the cleaning of all equipment of the same production line. One of the steps of cleaning validation is the establishment and validation of the analytical method used to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of the equipment used in the production of lyophilized reagents (LR) for labeling with 99mTc, to evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleaning of the equipment used in LR manufacture, to validate the method for Non-Purgeable Organic Carbon (NPOC) determination, and to perform recovery tests with the product chosen as the worst case. The choice of the worst-case product was based on the calculation of an index called the "Worst Case Index" (WCI), using information on drug solubility, difficulty of cleaning the equipment and occupancy rate of the products in the production line. The product indicated as the worst case among the LR was MIBI-TEC. The method validation assays were performed using a TOC-Vwp carbon analyser coupled to an ASI-V autosampler, both from Shimadzu®, controlled by TOC Control-V software. The direct method was used for NPOC quantification. The parameters evaluated in the method validation were: system suitability, robustness, linearity, detection limit (DL), quantification limit (QL), precision (repeatability and intermediate precision) and accuracy (recovery). They were defined as follows: 4% acidifying reagent, 2.5 mL oxidizing reagent, 4.5 minutes curve integration time, 3.0 minutes sparge time and linearity in the 40-1000 μg/L range, with correlation coefficient (r) and coefficient of determination (r2) greater than 0.99. DL and QL for NPOC were 14.25 ppb and 47.52 ppb, respectively; repeatability was between 0.11 and 4.47%, intermediate precision between 0.59 and 3.80%, and accuracy between 97.05 and 102.90%. The analytical curve for MIBI was linear in the 100-800 μg/L range with r and r2 greater than 0.99, presenting parameters similar to those of the NPOC analytical curves. The results obtained in this study demonstrated that the worst-case approach to cleaning validation is a simple and effective way to reduce the complexity and slowness of the validation process, and that it reduces the costs involved in these activities. All results obtained in the NPOC method validation assays met the requirements and specifications recommended by ANVISA Resolution RE 899/2003 to consider the method validated.
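Linearity and DL/QL figures of this kind can be reproduced on hypothetical calibration data with a short sketch (the 3.3·σ/S and 10·σ/S factors follow the usual ICH-style approach; the actual thesis data and acceptance criteria are those stated above):

```python
import numpy as np

# Hypothetical NPOC calibration data (concentration in ug/L vs. detector
# response); the real curve in the thesis spans 40-1000 ug/L.
conc = np.array([40.0, 100.0, 250.0, 500.0, 750.0, 1000.0])
resp = np.array([0.9, 2.1, 5.2, 10.3, 15.1, 20.2])

slope, intercept = np.polyfit(conc, resp, 1)
r = np.corrcoef(conc, resp)[0, 1]
print(f"r = {r:.4f}, r^2 = {r ** 2:.4f}")      # acceptance criterion: > 0.99

# ICH-style detection and quantification limits from the residual standard
# deviation of the regression and the calibration slope.
residuals = resp - (slope * conc + intercept)
s_res = residuals.std(ddof=2)                  # n - 2 degrees of freedom
print(f"DL = {3.3 * s_res / slope:.1f} ug/L, "
      f"QL = {10 * s_res / slope:.1f} ug/L")
```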
47

Low Complexity Space-Time coding for MIMO systems. / Codes Espace-Temps à Faible Complexité pour Systèmes MIMO

Ismail, Amr 24 November 2011 (has links)
The last few years have witnessed a dramatic increase in the demand for high-rate reliable wireless communications. In order to meet these new requirements, resorting to Multiple-Input Multiple-Output (MIMO) techniques was inevitable, as they can offer high-rate reliable wireless communications without any additional bandwidth. In the case where the transmitter does not have any prior knowledge of the channel state information, space-time coding techniques have proved to efficiently exploit the MIMO channel degrees of freedom while taking advantage of the maximum diversity gain. On the other hand, the ML decoding complexity of Space-Time Codes (STCs) generally increases exponentially with the rate, which imposes an important challenge to their incorporation in recent communications standards. Recognizing the importance of the low-complexity criterion in STC design for practical considerations, this thesis focuses on the design of new low-complexity Space-Time Block Codes (STBCs) where the transmitted code matrix can be expressed as a weighted linear combination of information symbols, and we propose new codes that are decoded with lower complexity than that of their rivals in the literature while providing better or only slightly lower performance.
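The classic example of the low-ML-complexity property targeted here is Alamouti's 2x2 code, whose ML decoding splits into independent per-symbol decisions; the sketch below (hypothetical channel and noise values, not one of the codes proposed in the thesis) illustrates that decoupling:

```python
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# Alamouti's code: two symbols sent over two antennas and two symbol periods.
s1, s2 = qpsk[rng.integers(0, 4, 2)]
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # 2x1 channel
n = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))         # noise

# Received samples at the single receive antenna over the two periods.
r1 = h[0] * s1 + h[1] * s2 + n[0]
r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1) + n[1]

# Linear combining separates the two symbols.
y1 = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
y2 = np.conj(h[1]) * r1 - h[0] * np.conj(r2)

# Symbol-by-symbol ML detection: nearest constellation point.
gain = np.linalg.norm(h) ** 2
detect = lambda y: qpsk[np.argmin(np.abs(qpsk - y / gain))]
print(detect(y1) == s1, detect(y2) == s2)   # should print: True True
```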
48

Possibilistic Interpretation Of Mistuning In Bladed Disks By Fuzzy Algebra

Karatas, Hamit Caglar 01 October 2012 (has links) (PDF)
M.S. thesis, Department of Mechanical Engineering. Supervisor: Prof. Dr. H. Nevzat Özgüven; Co-supervisor: Asst. Prof. Dr. Ender Cigeroglu. September 2012, 103 pages. This study aims to define the possibilistic interpretation of mistuning and to examine how worst-case situations are determined and a reliability value is assigned to that case by using possibilistic methods. Furthermore, the benefits of a possibilistic interpretation of mistuning in comparison to a probabilistic interpretation are investigated. For the possibilistic analysis of mistuned structures, uncertain mistuning parameters are modeled as fuzzy variables possessing possibility distributions. Alpha-cut representations of fuzzy numbers are used, which allows fuzzy variables to be represented by interval numbers at every confidence level. The solution of the fuzzy equations of motion is governed by fuzzy algebra methods. The bounds of the solution of the fuzzy equation of motion, i.e. the fuzzy vibration responses of the mistuned structure, are determined by the extension principle of fuzzy functions. The performance of the method for the possibilistic interpretation of mistuning is investigated by comparing it to probabilistic methods in terms of both computational cost and accuracy. For the comparison study, two different optimization tools (a genetic algorithm as the global optimization tool and a constrained nonlinear minimization method as the gradient-based optimization tool) are utilized in the possibilistic analysis, and they are compared to the solutions of probabilistic methods obtained with the Monte Carlo method. The performance of all of the methods is tested on both a cyclically symmetric lumped-parameter model and a realistic reduced-order finite element model.
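The alpha-cut idea can be sketched on a single-degree-of-freedom stand-in for the mistuned structure: at each confidence level the fuzzy stiffness becomes an interval, and the response bounds are obtained by optimizing over that interval (a plain grid search below stands in for the genetic and gradient-based optimizers used in the thesis; all numbers are hypothetical):

```python
import numpy as np

def response(k, m=1.0, c=0.05, w=2.0):
    """Steady-state amplitude of a 1-DOF oscillator; a stand-in for the
    fuzzy equations of motion of a mistuned bladed disk."""
    return 1.0 / np.sqrt((k - m * w ** 2) ** 2 + (c * w) ** 2)

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (lo, peak, hi) at level alpha."""
    lo, peak, hi = tri
    return lo + alpha * (peak - lo), hi - alpha * (hi - peak)

# Fuzzy stiffness: nominal 4.2 with a +/-5% mistuning spread (hypothetical).
k_fuzzy = (3.99, 4.2, 4.41)

for alpha in (0.0, 0.5, 1.0):
    k_lo, k_hi = alpha_cut(k_fuzzy, alpha)
    ks = np.linspace(k_lo, k_hi, 201)      # grid search stands in for the
    amps = response(ks)                    # GA / gradient optimizers
    print(f"alpha={alpha:.1f}: response in "
          f"[{amps.min():.2f}, {amps.max():.2f}]")
```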
50

Contributions to Imputation Methods Based on Ranks and to Treatment Selection Methods in Personalized Medicine

Matsouaka, Roland Albert January 2012 (has links)
The chapters of this thesis focus on two different issues that arise in clinical trials and propose novel methods to address them. The first issue arises in the analysis of data with non-ignorable missing observations. The second issue concerns the development of methods that provide physicians with better tools to understand and treat diseases efficiently by using each patient's characteristics and personal biomedical profile. Inherent to most clinical trials is the issue of missing data, especially data that arise when patients drop out of the study without further measurements. Proper handling of missing data is crucial in all statistical analyses because disregarding missing observations can lead to biased results. In the first two chapters of this thesis, we deal with the "worst-rank score" missing data imputation technique in pretest-posttest clinical trials. Subjects are randomly assigned to two treatments and the response is recorded at baseline prior to treatment (pretest response) and after a pre-specified follow-up period (posttest response). The treatment effect is then assessed on the change in response from baseline to the end of follow-up. Subjects with a missing response at the end of follow-up are assigned values that are worse than any observed response (worst-rank score). Data analysis is then conducted using the Wilcoxon-Mann-Whitney test. In the first chapter, we derive explicit closed-form formulas for power and sample size calculations using both tied and untied worst-rank score imputation, where the worst-rank scores are either a fixed value (tied score) or depend on the time of withdrawal (untied score). We use simulations to demonstrate the validity of these formulas. In addition, we examine and compare four different simplification approaches to estimate sample sizes. These approaches depend on whether data from the literature or a pilot study are available. In the second chapter, we introduce the weighted Wilcoxon-Mann-Whitney test on the untied worst-rank score (composite) outcome. First, we demonstrate that the weighted test is exactly the ordinary Wilcoxon-Mann-Whitney test when the weights are equal. Then, we derive optimal weights that maximize the power of the corresponding weighted Wilcoxon-Mann-Whitney test. We show, using simulations, that the weighted test is more powerful than the ordinary test. Furthermore, we propose two different step-wise procedures to analyze data using the weighted test and assess their performance through simulation studies. Finally, we illustrate the new approach using data from a recent randomized clinical trial of normobaric oxygen therapy on patients with acute ischemic stroke. The third and last chapter of this thesis concerns the development of robust methods for treatment group identification in personalized medicine. As we know, physicians often have to use a trial-and-error approach to find the most effective medication for their patients. Personalized medicine methods aim at tailoring strategies for disease prevention, detection or treatment by using each individual subject's personal characteristics and medical profile. This would result in (1) better diagnosis and earlier interventions, (2) maximum therapeutic benefits and reduced adverse events, (3) more effective therapy, and (4) more efficient drug development. Novel methods have been proposed to identify subgroups of patients who would benefit from a given treatment.
In the last chapter of this thesis, we develop a robust method for treatment assignment for future patients based on the expected total outcome. In addition, we provide a method to assess the incremental value of new covariate(s) in improving treatment assignment. We evaluate the accuracy of our methods through simulation studies and illustrate them with two examples using data from two HIV/AIDS clinical trials.
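A minimal sketch of tied worst-rank imputation followed by the Wilcoxon-Mann-Whitney test (hypothetical data; the untied variant in the thesis instead assigns dropouts scores ordered by their withdrawal times):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical change-from-baseline responses; NaN marks dropouts.
treated = np.array([1.8, 2.4, 0.9, np.nan, 3.1, 1.2, np.nan])
control = np.array([0.7, 1.1, np.nan, 0.2, 1.9, 0.5, 1.4])

def tied_worst_rank(x):
    """Tied worst-rank imputation: every dropout receives the same score,
    chosen to rank below every observed response."""
    worst = np.nanmin(np.concatenate([treated, control])) - 1.0
    return np.where(np.isnan(x), worst, x)

u, p = mannwhitneyu(tied_worst_rank(treated), tied_worst_rank(control),
                    alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```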
