221

Development of a Low-Power SRAM Compiler

Jagasivamani, Meenatchi 11 September 2000 (has links)
Considerable attention has been paid to the design of low-power, high-performance SRAMs (Static Random Access Memories), since they are a critical component in both hand-held devices and high-performance processors. A key to improving system performance is to use an optimally sized SRAM. In this thesis, an SRAM compiler has been developed for the automatic layout of memory elements in the ASIC environment. The compiler generates an SRAM layout based on a user-specified SRAM size, with the option of choosing between a fast and a low-power SRAM. Array partitioning is used to divide the SRAM into blocks in order to reduce the total power consumption. Experimental results show that the low-power SRAM is capable of functioning at a minimum operating voltage of 2.1 V and dissipates 17.4 mW of average power at 20 MHz. This report discusses the implementation of the SRAM compiler from the basic components up to the top-level SKILL code functions, together with simulation results and discussion. / Master of Science
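As a rough illustration of the array-partitioning idea, here is a minimal Python sketch that splits a requested memory into blocks and estimates active power when only one block's bitlines switch per access. The block-size limits, capacitance figure, and power model are illustrative assumptions, not the thesis's SKILL implementation.

```python
# Illustrative sketch of array partitioning in an SRAM compiler.
# Not the thesis's SKILL code; the block-count heuristic and the
# capacitance figure are made-up assumptions for demonstration.

import math

def partition(words, bits, max_rows=128, max_cols=128):
    """Split a words x bits SRAM into equal blocks so that each
    block's array stays within max_rows x max_cols."""
    row_blocks = math.ceil(words / max_rows)
    col_blocks = math.ceil(bits / max_cols)
    return row_blocks * col_blocks

def active_power(words, bits, n_blocks, freq_hz, vdd, c_bitline=50e-15):
    """Crude dynamic-power model: only one block's bitlines switch
    per access, so power scales roughly with 1/n_blocks."""
    bitline_cap = (words / n_blocks) * bits * c_bitline  # hypothetical
    return bitline_cap * vdd**2 * freq_hz

blocks = partition(4096, 32)
print(blocks, active_power(4096, 32, blocks, 20e6, 2.1))
```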
222

Towards patient-tailored perimetry: automated perimetry can be improved by seeding procedures with patient-specific structural information

Denniss, Jonathan, McKendrick, A.M., Turpin, A. 31 May 2013 (has links)
The aim was to explore the performance of patient-specific prior information, for example from structural imaging, in improving perimetric procedures. Computer simulation was used to determine the error distribution and presentation count for Structure–Zippy Estimation by Sequential Testing (ZEST), a Bayesian procedure with its prior distribution centered on a threshold prediction from structure. Structure-ZEST (SZEST) was trialled for single locations with combinations of true and predicted thresholds between 1 and 35 dB, and compared with a standard procedure with variability similar to the Swedish Interactive Thresholding Algorithm (SITA) (Full-Threshold, FT). Clinical tests of glaucomatous visual fields (n = 163, median mean deviation −1.8 dB, 90% range +2.1 to −22.6 dB) were also compared between techniques. For single locations, SZEST typically outperformed FT when structural predictions were within ±9 dB of true sensitivity, depending on response errors. In damaged locations, mean absolute error was 0.5 to 1.8 dB lower, the SD of threshold estimates was 1.2 to 1.5 dB lower, and 2 to 4 (29%–41%) fewer presentations were made for SZEST. Gains were smaller across whole visual fields (SZEST, mean absolute error: 0.5 to 1.2 dB lower; threshold estimate SD: 0.3 to 0.8 dB lower; 1 [17%] fewer presentation). The 90% retest limits of SZEST were a median of 1 to 3 dB narrower and more consistent (interquartile range 2–8 dB narrower) across the dynamic range than those for FT. Seeding Bayesian perimetric procedures with structural measurements can reduce the test variability of perimetry in glaucoma, despite imprecise structural predictions of threshold. Structural data can reduce the variability of current perimetric techniques. A strong structure–function relationship is not necessary; however, structure must predict function within ±9 dB for gains to be realized.
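For readers unfamiliar with ZEST, the following Python sketch shows a Structure-ZEST-style Bayesian staircase: the prior is centred on a structural threshold prediction, each stimulus is placed at the pdf mean, and the pdf is updated with a psychometric likelihood. The slope, lapse rates, prior spread, and stopping rule are illustrative assumptions and may differ from the published procedure.

```python
# A minimal sketch of a Structure-ZEST-style Bayesian staircase,
# assuming illustrative psychometric parameters; the published
# procedure's exact prior shape and stopping rule may differ.

import numpy as np
from scipy.stats import norm

domain = np.arange(0, 36)                       # candidate thresholds (dB)

def structure_prior(predicted_db, spread=5.0):
    """Prior centred on the structurally predicted threshold."""
    p = norm.pdf(domain, loc=predicted_db, scale=spread)
    return p / p.sum()

def p_seen(stim_db, threshold_db, slope=1.5, fp=0.03, fn=0.03):
    """Probability of a 'seen' response given a true threshold."""
    return fp + (1 - fp - fn) * norm.cdf(threshold_db - stim_db, scale=slope)

def szest(prior, respond, sd_stop=1.5, max_pres=20):
    pdf = prior.copy()
    for n in range(1, max_pres + 1):
        stim = float(np.dot(domain, pdf))       # present at the pdf mean
        like = p_seen(stim, domain) if respond(stim) else 1 - p_seen(stim, domain)
        pdf = pdf * like
        pdf /= pdf.sum()
        mean = float(np.dot(domain, pdf))
        sd = float(np.sqrt(np.dot((domain - mean) ** 2, pdf)))
        if sd < sd_stop:                        # stop once pdf is narrow
            break
    return mean, n

# Simulated observer with true threshold 24 dB, prior seeded at 22 dB.
rng = np.random.default_rng(0)
est, n = szest(structure_prior(22), lambda s: rng.random() < p_seen(s, 24))
print(f"estimate {est:.1f} dB after {n} presentations")
```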
223

Financial Risk Management of Guaranteed Minimum Income Benefits Embedded in Variable Annuities

Marshall, Claymore January 2011 (has links)
A guaranteed minimum income benefit (GMIB) is a long-dated option that can be embedded in a deferred variable annuity. The GMIB is attractive because, for policyholders who plan to annuitize, it offers protection against poor market performance during the accumulation phase, and against adverse interest rate experience at annuitization. The GMIB also provides an upside equity guarantee that resembles the benefit provided by a lookback option. We price the GMIB and determine the fair fee rate that should be charged. Due to the long-dated nature of the option, conventional hedging methods, such as delta hedging, will only be partially successful. Therefore, we are motivated to find alternative hedging methods that are practicable for long-dated options. First, we measure the effectiveness of static hedging strategies for the GMIB. Static hedging portfolios are constructed by minimizing the Conditional Tail Expectation of the hedging loss distribution, or by minimizing the mean squared hedging loss. Next, we measure the performance of semi-static hedging strategies for the GMIB. We present a practical method for testing semi-static strategies applied to long-term options, which employs nested Monte Carlo simulations and standard optimization methods. The semi-static strategies involve periodically rebalancing the hedging portfolio at certain time intervals during the accumulation phase, such that, at the option maturity date, the hedging portfolio payoff equals or exceeds the option value, subject to an acceptable level of risk. While we focus on the GMIB as a case study, the methods we utilize are extendable to other types of long-dated options with similar features.
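A minimal sketch of the static-hedging idea, assuming placeholder simulated payoffs rather than the thesis's GMIB model: hedge weights are chosen to minimise the empirical Conditional Tail Expectation of the hedging loss. Constraints such as a hedge budget are omitted here for brevity.

```python
# Choosing static-hedge weights by minimising the Conditional Tail
# Expectation (CTE) of the hedging loss. The payoff simulation is a
# placeholder assumption, not the GMIB model itself.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_scen, n_inst = 10_000, 3

# Hypothetical simulated payoffs of three hedge instruments
# (e.g. zero-coupon bond, equity forward, long-dated put).
hedge = rng.lognormal(mean=0.0, sigma=0.2, size=(n_scen, n_inst))
liability = 1.5 * hedge[:, 0] + 0.5 * rng.lognormal(0.0, 0.4, n_scen)

def cte(weights, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of hedging losses."""
    loss = liability - hedge @ weights
    tail = np.sort(loss)[int(alpha * n_scen):]
    return tail.mean()

res = minimize(cte, x0=np.ones(n_inst), method="Nelder-Mead")
print("weights:", np.round(res.x, 3), "CTE95:", round(cte(res.x), 4))
```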
225

Flight test instrumentation for the location of electrostatic discharges on the surface of an aircraft

Garcia Hallo, Ivan Vladimir 31 January 2017 (has links)
The static charging of an aircraft in flight may lead to electrostatic discharges that in turn cause electromagnetic disturbances on aircraft radio and avionic systems. This phenomenon is called Precipitation Static (P-Static). This thesis aims to develop a tool capable of narrowing down the location of the source of disturbances, in order to reduce the cost and duration of specific troubleshooting missions at airlines. The main objectives are: •To understand the electrical charging and discharging dynamics of an aircraft in flight. •To develop instrumentation capable of measuring the electromagnetic emissions of P-Static sources while complying with aircraft installation constraints. •To develop a localization method capable of estimating the position of the source. The electrostatic behaviour of an aircraft in flight has been studied. The challenge of measuring the pulse generated by discharges on an aircraft was met by combining VHF antennas, a high-gain amplifier chain, selective filters, automated triggering, and a large number of acquisitions. The measured delays between the pulses serve as input to the inverse localization method developed here. This method relies on a database of delays, built using propagation models, which is compared against the measurements to determine the source position. Tests performed in the laboratory and on an aircraft on the ground show promising results, as the estimated zones containing the source correspond to a small area on the aircraft surface.
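A minimal sketch of the delay-database localization idea described above, under simplifying assumptions (straight-line propagation at free-space speed, hypothetical antenna positions): expected inter-antenna delays are precomputed for a grid of candidate source positions, and the candidate best matching the measured delays is selected.

```python
# Time-difference-of-arrival matching against a precomputed delay
# database. Antenna positions, grid, and propagation model are
# simplifying assumptions, not the thesis's aircraft geometry.

import numpy as np

C = 3.0e8                                   # propagation speed (m/s)
antennas = np.array([[2.0, 1.0, 0.0],       # hypothetical VHF antenna
                     [18.0, -1.0, 0.5],     # positions on the airframe
                     [10.0, 4.0, 1.0]])

def delays(src):
    """Arrival-time differences relative to antenna 0."""
    d = np.linalg.norm(antennas - src, axis=1) / C
    return d[1:] - d[0]

# Candidate grid over a coarse 2-D patch of the aircraft surface.
xs, ys = np.meshgrid(np.linspace(0, 20, 81), np.linspace(-3, 5, 33))
candidates = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
database = np.array([delays(c) for c in candidates])

measured = delays(np.array([12.0, 2.0, 0.0]))   # simulated discharge
err = np.linalg.norm(database - measured, axis=1)
print("estimated source position:", candidates[err.argmin()])
```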
226

Algorithmic Analysis of Name-Bounded Programs : From Java programs to Petri Nets via π-calculus

Settenvini, Matteo January 2014 (has links)
Context. Name-bounded analysis is a type of static analysis that allows us to take a concurrent program, abstract away from it, and check for some interesting properties, such as deadlock-freedom, or the propagation of variables across different components or layers of the system. Objectives. In this study we investigate the difficulties of giving a representation of computer programs in a name-bounded variation of π-calculus. Methods. A preliminary literature review is conducted to assess the presence (or lack thereof) of other successful translations from real-world programming languages to π-calculus, as well as the presence of relevant prior art in the modelling of concurrent systems. Results. This thesis gives a novel translation going from a relevant subset of the Java programming language to its corresponding name-bounded π-calculus equivalent. In particular, the strengths of our translation are its ability to dispose of names representing inactive objects when there are no circular references, and its transparent handling of polymorphism and dynamic method resolution. The resulting processes can then be further transformed into their Petri net representation, enabling us to check important properties, such as reachability and coverability of program states. Conclusions. We conclude that some important properties that are not, in general, easy to check for concurrent programs can in fact be feasibly determined by first giving a more constrained model in π-calculus, and then as Petri nets.
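As an illustration of the kind of property check the Petri net representation enables, here is a small explicit-state reachability search in Python; the two-process lock net is a toy example, not one produced by the thesis's translation.

```python
# Explicit-state reachability on a bounded Petri net: breadth-first
# search over markings, then a check that no reachable marking puts
# both processes in their critical sections.

from collections import deque

# Transitions as (consume, produce) vectors over the places
# (lock, p1_idle, p1_crit, p2_idle, p2_crit).
TRANSITIONS = [
    ((1, 1, 0, 0, 0), (0, 0, 1, 0, 0)),   # p1 acquires the lock
    ((0, 0, 1, 0, 0), (1, 1, 0, 0, 0)),   # p1 releases it
    ((1, 0, 0, 1, 0), (0, 0, 0, 0, 1)),   # p2 acquires the lock
    ((0, 0, 0, 0, 1), (1, 0, 0, 1, 0)),   # p2 releases it
]

def fire(marking, consume, produce):
    """Fire a transition if enabled, else return None."""
    if all(m >= c for m, c in zip(marking, consume)):
        return tuple(m - c + p for m, c, p in zip(marking, consume, produce))
    return None

def reachable(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        m = frontier.popleft()
        for consume, produce in TRANSITIONS:
            nxt = fire(m, consume, produce)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

states = reachable((1, 1, 0, 1, 0))
# Mutual exclusion holds iff no marking marks both critical places.
print(all(not (m[2] and m[4]) for m in states))   # True
```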
227

A Security Related and Evidence-Based Holistic Ranking and Composition Framework for Distributed Services

Chowdhury, Nahida Sultana 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The number of smart mobile devices has grown at a significant rate in recent years. This growth has resulted in an exponential number of publicly available mobile Apps. To help the selection of suitable Apps from the various offered choices, the App distribution platforms generally rank/recommend Apps based on average star ratings, the number of installs, and associated reviews, all of which are external factors of an App. However, these ranking schemes typically tend to ignore critical internal factors (e.g., bugs, security vulnerabilities, and data leaks) of the Apps. The AppStores need to incorporate a holistic methodology that includes internal and external factors to assign a level of trust to Apps. The inclusion of the internal factors will describe associated potential security risks. This issue is even more crucial with newly available Apps, for which either user reviews are sparse or the number of installs is still insignificant. In such a scenario, users may fail to estimate the potential risks associated with installing Apps that exist in an AppStore. This dissertation proposes a security-related and evidence-based ranking framework, called SERS (Security-related and Evidence-based Ranking Scheme), to compare similar Apps. The trust associated with an App is calculated using both internal and external factors (i.e., security flaws and user reviews) following an evidence-based approach and applying subjective logic principles. The SERS is formalized and further enhanced in the second part of this dissertation, resulting in its enhanced version, called E-SERS (Enhanced SERS). These enhancements include the ability to integrate any number of sources that can generate evidence for an App, and consideration of the temporal aspect and reputation of evidence sources. Both SERS and E-SERS are evaluated using publicly accessible Apps from the Google PlayStore, and the rankings generated by them are compared with prevalent ranking techniques such as the average star ratings and the Google PlayStore rankings. The experimental results indicate that E-SERS provides a comprehensive and holistic view of an App when compared with prevalent alternatives. E-SERS is also successful in identifying malicious Apps where other ranking schemes failed to address such vulnerabilities. In the third part of this dissertation, the E-SERS framework is used to propose a trust-aware composition model at two different granularities. This model uses the trust score computed by E-SERS, along with the probability of an App belonging to the malicious category, as the desired attributes for selecting a composition at the two granularities. Finally, the trust-aware composition model is evaluated against the average star rating parameter and the trust score. A holistic approach to computing a trust score, as proposed by E-SERS, will benefit all kinds of Apps, including newly published Apps that follow proper security measures but initially struggle in the AppStore rankings due to the lack of a large number of reviews and ratings. Hence, E-SERS will be helpful both to developers and users. In addition, a composition model that uses such a holistic trust score will enable system integrators to create trust-aware distributed systems for their specific needs.
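A minimal sketch of the subjective-logic style of evidence fusion that an evidence-based scheme like SERS can build on: positive/negative evidence counts are mapped to a (belief, disbelief, uncertainty) opinion and an expected trust score. The mapping follows standard subjective logic; the evidence counts and base rate below are invented, and the dissertation's actual aggregation may differ.

```python
# Standard subjective-logic opinion from evidence counts; the counts
# and base rate here are illustrative assumptions.

def opinion(r, s, W=2.0, a=0.5):
    """Map r positive and s negative pieces of evidence to an opinion.
    W is the non-informative prior weight, a the base rate."""
    b = r / (r + s + W)          # belief
    d = s / (r + s + W)          # disbelief
    u = W / (r + s + W)          # uncertainty
    return b, d, u, b + a * u    # last term: expected trust score

# App A: many reviews, a few disclosed security flaws.
print(opinion(r=950, s=50))
# App B: new app, sparse evidence -> high uncertainty, trust near base rate.
print(opinion(r=3, s=0))
```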
228

High-Resolution, Non-contact Angular Measurement System for PSA/RSA

Sloat, Ronald D 01 March 2011 (has links) (PDF)
A non-contact angular measurement system for the Pitch Static Attitude (PSA) and Roll Static Attitude (RSA) of hard disk drive sliders is designed and built. Real-time sampling at over 15 kHz is achieved with an accuracy of ±0.05 degrees over a range of approximately 2-3 degrees. Measuring PSA and RSA is critical for hard drive manufacturers to control and improve the quality and reliability of hard drives. Although the hard drive industry is able to measure PSA and RSA at the subassembly level at this time, no system is available that can measure PSA/RSA at the final assembly level. This project has successfully demonstrated a methodology by which PSA/RSA can be reliably measured in situ using a laser and position sensitive detector (PSD) technology. A prototype of the measurement system has been built using simple and inexpensive equipment. The device allows continuous measurement between the parked position on the ramp and the loading position just off the disk surface. The measured data can be used to verify manufacturing processes and reliability data.
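The optical-lever relation behind a laser/PSD angle measurement can be made concrete with a short sketch: tilting the reflecting surface (the slider) by an angle theta deflects the reflected beam by 2*theta, so the spot on a PSD at distance L moves by x = L*tan(2*theta). The distance and displacement values below are assumed, not taken from the thesis.

```python
# Optical-lever conversion from PSD spot displacement to surface tilt.
# L and the PSD reading are hypothetical values for illustration.

import math

def angle_from_psd(x_m, L_m):
    """Surface tilt (degrees) from spot displacement x at distance L:
    x = L * tan(2 * theta)  =>  theta = 0.5 * atan(x / L)."""
    return math.degrees(0.5 * math.atan2(x_m, L_m))

L = 0.10                      # hypothetical PSD distance: 100 mm
x = 175e-6                    # hypothetical spot displacement: 175 um
print(f"{angle_from_psd(x, L):.3f} deg")   # ~0.050 deg
```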
229

Financing decisions in Brazilian companies: a comparison between the static trade-off and pecking order theory in Brazil

Amaral, Paulo Ferreira 11 March 2011 (has links)
The aim of this work is to compare two theories of corporate capital structure in the finance literature. Using tests developed by Shyam-Sunder & Myers (1999) and Rajan & Zingales (1995), data on Brazilian non-financial publicly traded companies between 2000 and 2010 were analyzed to determine whether their behavior followed the predictions of the Static Trade-off Theory or the Pecking Order Theory. How firms finance themselves, and the causes and consequences of these decisions, are important questions that have been debated in numerous scholarly works. This study reviews the literature related to the theme and replicates tests performed abroad, in order to examine the similarities, the differences, and the reasons behind the results. The results indicate a probable preference for the behavior predicted by the Pecking Order Theory: over the period analyzed, the companies studied first used internally generated funds (operating cash flow), then third-party funds through bank loans or debenture issues, and issued shares only as a last resort. Another conclusion is that Brazilian publicly traded companies probably did not seek to reach or maintain an ideal debt target that balances the costs and benefits of borrowing.
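A sketch of the Shyam-Sunder & Myers (1999) test mentioned above: net debt issues are regressed on the financing deficit, and the pecking order predicts an intercept near 0 and a slope near 1. The data here are random placeholders, not the Brazilian sample.

```python
# Pecking-order regression: Delta(D)_it = a + b_PO * DEF_it + e_it.
# Simulated data stand in for the firm panel.

import numpy as np

rng = np.random.default_rng(42)
deficit = rng.normal(size=500)                 # DEF_it (scaled by assets)
net_debt_issue = 0.9 * deficit + rng.normal(scale=0.2, size=500)

X = np.column_stack([np.ones_like(deficit), deficit])
(intercept, slope), *_ = np.linalg.lstsq(X, net_debt_issue, rcond=None)
print(f"a = {intercept:.3f}, b_PO = {slope:.3f}")  # b_PO near 1 supports PO
```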
230

Gradient-based shape optimization in rapid dynamics

Genest, Laurent 19 July 2016 (has links)
In order to face their new industrial challenges, automotive manufacturers wish to apply optimization methods at every step of the design process. By including shape parameters in the design space, increasing their number and widening their variation ranges, new difficulties have appeared. Crashworthiness is one of them: given the high computational time, the nonlinearity, the instability and the numerical dispersion of this rapid dynamics problem, the usual approach of design-of-experiments with response surfaces becomes too expensive for industrial use. This raises the question: how can we carry out shape optimization in rapid dynamics with a high number of parameters? Gradient methods are the most promising answer. Because the number of parameters has a reduced effect on the optimization cost, they allow optimization of problems with many parameters. However, conventional ways of computing gradients are ineffective here: the computational cost and the numerical noise prevent the use of finite differences, and computing the gradient by differentiating the rapid dynamics equations is not currently available and would be highly intrusive with respect to the software. Instead of determining the exact gradient of crash problems, we estimate it. The Equivalent Static Loads Method is a low-cost optimization method based on the construction of a linear static problem equivalent to the rapid dynamics problem. Using the sensitivity of the equivalent problem as an estimate of the gradient, we have optimized rapid dynamics problems with thicknesses as design variables; building the equivalent problem's equations with the secant stiffness matrix further improves the gradient approximation. In the same way, the gradient with respect to the positions of the nodes of the CAE model can be estimated. Since it is more common to work with CAD parameters, the sensitivity of the node positions with respect to these parameters is also needed; it can be obtained analytically by defining the shape with a parametric surface and using its control points as design variables. With this estimated gradient and the link between nodes and shape parameters, shape optimization with a large number of parameters becomes possible at low cost. The method has been developed for two families of crashworthiness criteria. The first is linked to a nodal displacement, an important objective when the integrity of the passenger compartment must be preserved, such as minimizing the intrusion into it. The second is linked to the strain energy absorbed, which is used to ensure a good behavior of the structure during the crash.
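A minimal numeric sketch of the Equivalent Static Loads idea, using a tiny stand-in stiffness matrix and displacement snapshots instead of finite-element solver output: at each selected time t_k, the load f_k = K u(t_k) reproduces the dynamic displacement field in a linear static problem, whose inexpensive sensitivities can then approximate the crash gradient.

```python
# Equivalent static loads from displacement snapshots: f_k = K @ u(t_k).
# K and u below are illustrative stand-ins for FE-solver outputs.

import numpy as np

K = np.array([[ 4.0, -2.0],               # linear (or secant) stiffness
              [-2.0,  3.0]])
u_t = np.array([[0.0, 0.0],               # displacement snapshots u(t_k)
                [0.1, 0.3],
                [0.4, 0.5]])

# One equivalent static load per snapshot.
f_eq = u_t @ K.T

# Check: solving the equivalent static problem K u = f_k recovers
# u(t_k), so gradients of the static response can stand in for the
# dynamic ones.
u_back = np.linalg.solve(K, f_eq.T).T
print(np.allclose(u_back, u_t))           # True
```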
