331

Funciones de fragilidad analíticas mediante análisis dinámico incremental para estimar la vulnerabilidad sísmica del pabellón frontal del Hospital Casimiro Ulloa / Analytical Fragility Functions using Incremental Dynamic Analysis to Evaluate the Seismic Vulnerability of the Frontal Block of Casimiro Ulloa Hospital

Aguilar Gonzales, Ashily Gabriel, Gonzales Mejia, George Hamiltong 25 October 2020
Peru, due to its location on the Pacific Ring of Fire, is a country of high seismicity, and its buildings therefore experience earthquakes very frequently. Over the years, these events have not fully released the accumulated seismic energy, so the country is in a period of seismic quiescence, awaiting a large-magnitude earthquake. In addition, many essential buildings, such as hospitals, were built before the first Seismic-Resistant Design code was issued in 1970 and were likely designed for gravity loads only. For this reason, there is uncertainty about how prepared these essential buildings are for future large-magnitude seismic events. This study presents a set of methodologies covering the processing of seismic records, the nonlinear modeling of a non-engineered masonry structure and its calibration against experimental results, the application of incremental dynamic analysis (IDA), and the statistical processing of the results, all in order to generate analytical fragility functions that estimate the probability of exceeding each damage state for a given seismic demand. The results show that analytical fragility functions are a useful tool for estimating seismic vulnerability, since high collapse probabilities were obtained in both orthogonal directions. The results also show the need to reinforce these facilities. / Tesis
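The fragility-function approach this abstract describes is commonly implemented by fitting a lognormal cumulative distribution to the intensity measures at which IDA runs reach a given damage state. A minimal sketch of that common fit follows; the intensity values and the method-of-moments fit are illustrative assumptions, not data or methods from the thesis:

```python
import math

def fit_lognormal_fragility(collapse_ims):
    """Fit the median theta and log standard deviation beta from the
    intensity measures at which each IDA run reached a damage state
    (simple method-of-moments fit on the log values)."""
    logs = [math.log(x) for x in collapse_ims]
    mu = sum(logs) / len(logs)
    beta = math.sqrt(sum((l - mu) ** 2 for l in logs) / (len(logs) - 1))
    return math.exp(mu), beta

def p_exceed(im, theta, beta):
    """P(damage state exceeded | IM) = Phi(ln(im / theta) / beta)."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))

# hypothetical collapse intensities (in g) from a set of IDA runs
ims = [0.31, 0.42, 0.38, 0.55, 0.47, 0.36, 0.60, 0.44]
theta, beta = fit_lognormal_fragility(ims)
print(theta, beta, p_exceed(theta, theta, beta))  # probability at the median is 0.5
```

By construction, the fitted curve passes through a 50% exceedance probability at the median intensity, and the probability grows monotonically with the seismic demand.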
332

Cost-Effectiveness Analysis of Anastrozole versus Tamoxifen in Adjuvant Therapy for Early-Stage Breast Cancer – a Health-Economic Analysis Based on the 100-Month Analysis of the ATAC Trial and the German Health System

Lux, Michael P., Wöckel, Achim, Benedict, Agnes, Buchholz, Stefan, Kreif, Noémi, Harbeck, Nadia, Kreienberg, Rolf, Kaufmann, Manfred, Beckmann, Matthias W., Jonat, Walter, Hadji, Peyman, Distler, Wolfgang, Raab, Guenther, Tesch, Hans, Weyers, Georg, Possinger, Kurt, Schneeweiss, Andreas January 2010
Background: In the ‘Arimidex’, Tamoxifen Alone or in Combination (ATAC) trial, the aromatase inhibitor (AI) anastrozole had a significantly better efficacy and safety profile than tamoxifen as initial adjuvant therapy for hormone receptor-positive (HR+) early breast cancer (EBC) in postmenopausal patients. To compare the combined long-term clinical and economic benefits, we carried out a cost-effectiveness analysis (CEA) of anastrozole versus tamoxifen based on the data of the 100-month analysis of the ATAC trial from the perspective of the German public health insurance. Patients and Methods: A Markov model with a 25-year time horizon was developed using the 100-month analysis of the ATAC trial as well as data obtained from published literature and expert opinion. Results: Adjuvant treatment of EBC with anastrozole achieved an additional 0.32 quality-adjusted life-years (QALYs) gained per patient compared with tamoxifen, at an additional cost of EUR 6,819 per patient. Thus, the incremental cost-effectiveness of anastrozole versus tamoxifen at 25 years was EUR 21,069 (USD 30,717) per QALY gained. Conclusions: This is the first CEA of an AI that is based on extended follow-up data, taking into account the carryover effect of anastrozole, which maintains the efficacy benefits beyond therapy completion after 5 years. Adjuvant treatment with anastrozole for postmenopausal women with HR+ EBC is a cost-effective alternative to tamoxifen. / This article is freely accessible with the consent of the rights holder under a (DFG-funded) Alliance or National Licence.
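The headline figure in this abstract is an incremental cost-effectiveness ratio (ICER), which is simply the extra cost divided by the extra QALYs. A quick sanity check on the abstract's numbers, sketched below; the raw quotient differs slightly from the published EUR 21,069 per QALY, presumably because the Markov model discounts costs and effects over the 25-year horizon:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# figures from the abstract: EUR 6,819 extra cost, 0.32 QALYs gained
value = icer(6819, 0.32)
print(round(value))  # 21309 (undiscounted quotient, close to the published 21,069)
```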
333

Improving the Single Event Effect Response of Triple Modular Redundancy on SRAM FPGAs Through Placement and Routing

Cannon, Matthew Joel 01 August 2019
Triple modular redundancy (TMR) with repair is commonly used to improve the reliability of systems. TMR is often employed for circuits implemented on field programmable gate arrays (FPGAs) to mitigate the radiation effects of single event upsets (SEUs). This has proven to be an effective technique, improving a circuit's sensitive cross-section by up to 100x. However, testing has shown that the improvement offered by TMR is limited by upsets in single configuration bits that cause TMR to fail. This work proposes a variety of mitigation techniques that improve the effectiveness of TMR on FPGAs. These mitigation techniques can alter the circuit's netlist and how the circuit is placed and routed on the FPGA. TMR with repair showed a neutron cross-section improvement of 100x, while the best mitigation technique proposed in this work showed an improvement of 700x. This work demonstrates some of the causes behind single-bit SEU failures for TMR circuits on FPGAs and mitigation techniques to address these failures. In addition to these findings, this work also shows that the majority of radiation failures in these circuits are caused by multiple cell upsets, laying the path for future work to further enhance the effectiveness of TMR on FPGAs.
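The masking behavior TMR relies on can be illustrated with the standard bitwise majority voter (this is the textbook voter logic, not the dissertation's FPGA placement or routing techniques):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority voter: an output bit is 1 wherever at least
    two of the three module outputs have a 1."""
    return (a & b) | (b & c) | (a & c)

# a single upset flipping one bit in one module is masked by the other two copies
golden = 0b1011
upset = golden ^ 0b0100  # one module suffers a single-bit flip
print(bin(tmr_vote(golden, upset, golden)))  # 0b1011, the correct value
```

The failures the dissertation studies occur when a single configuration-bit upset breaks this voting structure itself, which is why netlist, placement, and routing changes are needed on top of the voter.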
334

An examination of analysis and optimization procedures within a PBSD framework

Cott, Andrew January 1900
Master of Science / Department of Architectural Engineering and Construction Science / Kimberly W. Kramer / The basic tenets of performance based seismic design (PBSD) are introduced. This includes a description of the underlying philosophy of PBSD, the concept of performance objectives, and a description of hazard levels and performance indicators. After establishing the basis of PBSD, analysis procedures that fit well within the PBSD framework are introduced. These procedures are divided into four basic categories: linear static, linear dynamic, nonlinear static, and nonlinear dynamic. Baseline FEMA requirements are introduced for each category. Each analysis category is then expanded to include a detailed description of and variations on the basic procedure. Finally, optimization procedures that mesh well with a PBSD framework are introduced and described. The optimization discussion focuses first on the solution tools needed to effectively execute a PBSD multi-objective optimization procedure, namely genetic algorithms and evolution strategies. Next, multiple options for defining objective functions and constraints are presented to illustrate the versatility of structural optimization. Taken together, this report illustrates the unique aspects of PBSD. As PBSD moves to the forefront of design methodology, the subjects discussed serve to familiarize engineers with the advantages, possibilities, and finer workings of this powerful new design methodology.
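The genetic-algorithm machinery the report discusses can be sketched minimally: tournament selection, crossover, and Gaussian mutation driving a population toward a lower objective value. The objective below is a toy stand-in and every parameter is an illustrative assumption, not the report's actual PBSD formulation:

```python
import random

random.seed(7)

def evolve(fitness, dim=4, pop_size=30, gens=60, mut=0.1):
    """Minimal real-coded genetic algorithm: tournament selection,
    one-point crossover, Gaussian mutation. Minimizes `fitness`."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # two parents, each the best of a random tournament of three
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, dim)          # one-point crossover
            child = [g + random.gauss(0, mut)       # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# toy stand-in for a structural objective (e.g. weight plus a drift penalty)
best = evolve(lambda x: sum(g * g for g in x))
print(best)
```

In a real PBSD setting, the fitness function would run a structural analysis for each candidate design and penalize violations of the chosen performance objectives.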
335

Towards the elicitation of hidden domain factors from clients and users during the design of software systems

Friendrich, Wernher Rudolph 11 1900
This dissertation focuses on how requirements for a new software development system are elicited and what pitfalls could cause a software development project to fail if the said requirements are not captured correctly. A number of existing requirements elicitation methods are covered, namely: JAD (Joint Application Design), RAD (Rapid Application Development), a Formal Specifications Language (Z), Natural Language, UML (Unified Modelling Language) and Prototyping. The aforementioned techniques are then integrated into existing software development life cycle models, such as the Waterfall model, Rapid Prototyping model, Build and Fix model, Spiral model, Incremental model and the V-Process model. Differences between the domains (knowledge and experience of an environment) of a client and that of the software development team are highlighted, and this is done diagrammatically using the language of Venn diagrams. The dissertation also refers to a case study highlighting a number of problems during the requirements elicitation process, amongst others the problem of tacit knowledge not surfacing during elicitation. Two new requirements elicitation methodologies are proposed, namely: the SRE (Solitary Requirements Elicitation) and the DDI (Developer Domain Interaction) methodology. These two methods could be more time consuming than other existing requirements elicitation methods, but the benefits could outweigh the cost of their implementation, since the new proposed methods have the potential to further facilitate the successful completion of a software development project. Following their introduction, the new requirements elicitation methods are applied to the aforementioned case study, highlighting how the hidden domain of the client may become more visible because the software development team has gained a deeper understanding of the client's working environment. The team has thereby increased its understanding of how the final product needs to function in order to fulfil the stated requirements correctly. Towards the end of the dissertation, a summary, a conclusion, and future work that could be undertaken in this area are provided. / Computer Science / M. Sc. (Computer Science)
336

Lexical representations in children who stutter: evidence using a gating paradigm

Hudson, Sarah Ann 26 October 2010
This thesis investigated the lexical representations of children who stutter (CWS) and children who do not stutter (CWNS) using a duration-blocked gating task. It tested the hypothesis that children who stutter have underspecified phonological representations for words, are less sensitive to incremental and segmental information for lexical items, and therefore require more acoustic-phonetic information to activate words in their lexicon. Pilot data from fourteen children (ages 5;6 to 10;1), 7 CWS and 7 age-matched CWNS, were included in this thesis. Results showed that children in both talker groups required relatively equal amounts of acoustic-phonetic information to identify target words. A regression model revealed that age in months predicted performance on the gating task for CWNS but not for CWS, suggesting a difference in the developmental maturity of lexical representations in CWS. Possible conclusions from these pilot data are presented along with recommendations for future research. / text
337

Habitat Suitability Criteria for Fishes of the South Fork of the Shenandoah River and an Investigation into Observer Effects Associated with Two Techniques of Direct Underwater Observation

Ramey, Robert Clayton 29 April 2009
This study constructed habitat suitability criteria for fishes of the South Fork of the Shenandoah River in Virginia. The criteria will be used in an IFIM study to produce estimates of the discharge required by fishes in the South Fork. Chi-square tests were used to evaluate whether the criteria described habitat use to a statistically significant degree, and also to test transferability. The criteria described the habitat use of seven taxa commonly found in the South Fork to a statistically significant degree; criteria for two taxa did not describe their habitat use to a statistically significant degree. One set of criteria from the North Fork of the Shenandoah transferred to the fish observed in the South Fork. In addition, this paper examined the observer effects of underwater observation, exploring how observer effects influenced habitat suitability criteria.
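The chi-square evaluation this abstract describes compares observed habitat use against the use expected from habitat availability. A minimal sketch follows; the counts, habitat classes, and availability proportions are hypothetical, not the study's data:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic comparing observed habitat use
    against the use expected from habitat availability."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical counts of fish observed in three depth classes
observed = [40, 35, 25]          # observed use
availability = [0.5, 0.3, 0.2]   # proportion of each habitat class available
n = sum(observed)
expected = [p * n for p in availability]

stat = chi_square_stat(observed, expected)
# compare against the chi-square critical value with k - 1 = 2 df
# (5.99 at alpha = 0.05); here use does not differ significantly from availability
print(stat)
```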
338

On-demand Development of Statistical Machine Translation Systems / Développement à la demande des systèmes de traduction automatique statistiques

Gong, Li 25 November 2014
Statistical Machine Translation (SMT) produces results that make it a preferred choice in most machine-assisted translation scenarios. However, the development of such high-performance systems involves the costly processing of very large-scale data. New data are constantly made available, while constructed SMT systems are usually static, so that incorporating new data into existing SMT systems forces system developers to re-train systems from scratch. In addition, the adaptation process of SMT systems is typically based on some available held-out development set and is performed once and for all. In this thesis, we propose an on-demand framework that tackles these three problems jointly, enabling SMT systems to be developed on a per-need basis with incremental updates and existing systems to be adapted to each individual input text. The first main contribution of this thesis is a new on-demand word alignment method that aligns training sentence pairs in isolation. This property allows SMT systems to compute information on a per-need basis and to seamlessly incorporate newly available data into an existing SMT system without re-training the whole system. The second main contribution of this thesis is the integration of contextual sampling strategies that select, from large-scale corpora, translation examples similar to the input text, so as to build adapted phrase tables.
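The contextual sampling idea, selecting training examples by their similarity to the text to translate, can be sketched with a simple n-gram overlap score. The bigram Jaccard measure and the toy corpus below are illustrative assumptions, not the thesis's actual sampling strategy:

```python
def ngrams(tokens, n=2):
    """Set of n-grams of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(sent, input_text, n=2):
    """Score a candidate training sentence by bigram overlap (Jaccard)
    with the input text."""
    a, b = ngrams(sent.split(), n), ngrams(input_text.split(), n)
    return len(a & b) / len(a | b) if a | b else 0.0

def sample_relevant(corpus, input_text, k=2):
    """Keep the k training sentences most similar to the text to translate."""
    return sorted(corpus, key=lambda s: -similarity(s, input_text))[:k]

corpus = [
    "the cat sat on the mat",
    "stock prices fell sharply today",
    "the cat chased the mouse",
]
print(sample_relevant(corpus, "the cat sat near the mouse"))
```

The adapted phrase table would then be estimated only from the sampled, on-topic sentence pairs instead of from the full corpus.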
339

Estimation of fatigue life by using a cyclic plasticity model and multiaxial notch correction

Johansson, Nils January 2019
Mechanical components often possess notches. These notches give rise to stress concentrations, which in turn increase the likelihood that the material will undergo yielding. The finite element method (FEM) can be used to calculate transient stress and strain to be used in fatigue analyses. However, since yielding occurs, an elastic-plastic finite element analysis (FEA) must be performed. If the loading sequence to be analysed with respect to fatigue is long, elastic-plastic FEA is often not a viable option because of its high computational requirements. In this thesis, a method that estimates the elastic-plastic stress and strain response from the input elastic stress and strain, using plasticity modelling with the incremental Neuber rule, has been derived and implemented. A numerical methodology to increase the accuracy of the Neuber rule under cyclic loading has been proposed and validated for proportional loading. The results show fair, albeit not ideal, accuracy when compared to elastic-plastic finite element analysis. Different types of loading have been tested, including proportional and non-proportional loading as well as complex loadings with several load reversals. Based on the computed elastic-plastic stresses and strains, fatigue life is predicted by the critical plane method. Such a method has been reviewed, implemented and tested in this thesis. A comparison has been made between a new damage parameter by Ince and an established damage parameter by Fatemi and Socie (FS). The implemented algorithm and damage parameters were evaluated by comparing the program's results, using either damage parameter, to fatigue experiments for several different load cases, including non-proportional loading. The results are fairly accurate for both damage parameters, but the one by Ince tends to be slightly more accurate if no fitted constant for use in the FS damage parameter can be obtained.
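The correction the thesis builds on can be illustrated with the classic (non-incremental, uniaxial) Neuber rule: the product of elastic-plastic stress and strain at the notch is set equal to the elastically computed product, and the stress is recovered from an assumed Ramberg-Osgood law. The material constants below are hypothetical, and this monotonic sketch is not the thesis's incremental multiaxial formulation:

```python
def neuber_stress(sigma_e, E=200e3, K=1000.0, n=0.1):
    """Solve Neuber's rule  sigma * eps = sigma_e**2 / E  with the
    Ramberg-Osgood law  eps = sigma/E + (sigma/K)**(1/n)  by bisection.
    Stresses in MPa; E is Young's modulus, K and n are cyclic constants."""
    target = sigma_e ** 2 / E
    lo, hi = 0.0, sigma_e  # the elastic-plastic stress never exceeds sigma_e
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        eps = mid / E + (mid / K) ** (1 / n)
        if mid * eps < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = neuber_stress(300.0)  # pseudo-elastic notch stress of 300 MPa
print(sigma)                  # elastic-plastic stress, slightly below 300 MPa
```

With little plasticity the solution stays close to the elastic stress; the more the material yields, the further the notch stress drops below the pseudo-elastic value while the strain grows.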
340

Contribution à la modélisation multi-échelle des matériaux composites / Contribution to the multiscale modeling of composite materials

Koutsawa-Tchalla, Adjovi Abueno Kanika C-M. 17 September 2015
We propose in this thesis several approaches for improving the multiscale modeling and simulation of the behavior of composite materials. Accurate and reliable modeling of the mechanical response of composites remains a major challenge. The objective of this work is to develop simplified methodologies, based on existing numerical and analytical homogenization techniques, for efficient prediction of the nonlinear behavior of these materials. We first focused on mean-field homogenization methods to study the elasto-plastic behavior and ductile damage phenomena in composites; although restrictive, these techniques remain the best in terms of computational cost and efficiency. Two methods were investigated for this purpose: the Incremental Micromechanical Scheme (IMS) in one-site modeling and the Mori-Tanaka model in multi-site modeling (MTMS). In the framework of elastoplasticity, we showed and validated by the finite element method that IMS homogenization is more accurate for composites with high volume fractions than the Mori-Tanaka model frequently used in the literature. Furthermore, we extended the Mori-Tanaka model (M-T), generally formulated in one-site, to a multi-site formulation for the study of the elasto-plastic behavior of composites with ordered microstructures; this approach produces results consistent with finite element and experimental solutions. Continuing this work, the Lemaitre-Chaboche ductile damage model was coupled with the elasto-plastic modeling of composites through IMS homogenization. This investigation demonstrates the capability of the IMS model to capture damage effects in the material; however, the issue of the loss of ellipticity was not addressed. Finally, we developed a numerical homogenization tool based on the multilevel finite element method (FE2), working with 2D and 3D structures and fully integrated into the conventional finite element code ABAQUS through its UMAT subroutine. The FE2 method is accurate and can handle nonlinear behavior and evolving microstructures under complex loading conditions; both linear and nonlinear cases were studied. Its combination with ABAQUS allows the use of the major resources provided by this software (a panel of analysis tools and a library of mechanical, thermomechanical and electrical behaviors) for the study of multi-physics problems. This work was validated in the linear case on a two-scale bending analysis and compared to the ANM multiscale method (Nezamabadi et al. (2009)). Extensive work will be needed later, with applications to nonlinear problems, to highlight the value of the developed tool
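The mean-field homogenization family this abstract discusses can be illustrated with the standard one-site Mori-Tanaka estimate of the effective bulk modulus for spherical inclusions in an isotropic matrix. This textbook closed form and the material values are illustrative assumptions, not the thesis's multi-site or incremental formulations:

```python
def mori_tanaka_bulk(K_m, G_m, K_i, f):
    """One-site Mori-Tanaka estimate of the effective bulk modulus for
    spherical inclusions (volume fraction f) in an isotropic matrix with
    bulk modulus K_m and shear modulus G_m."""
    alpha = 3 * K_m / (3 * K_m + 4 * G_m)  # Eshelby factor for spheres
    return K_m + f * (K_i - K_m) / (1 + (1 - f) * alpha * (K_i - K_m) / K_m)

# hypothetical epoxy matrix (K = 4 GPa, G = 1.5 GPa) with glass spheres (K = 40 GPa)
print(mori_tanaka_bulk(4.0, 1.5, 40.0, 0.3))
```

The estimate recovers the matrix modulus at f = 0 and the inclusion modulus at f = 1, and interpolates between them at intermediate volume fractions; the thesis's point is that such one-site estimates degrade at high volume fractions and for ordered microstructures, motivating the IMS and multi-site extensions.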
