151 |
Combined Actuarial Neural Networks in Actuarial Rate Making / Kombinerade aktuariska neurala nätverk i aktuarisk tariffanalys. Gustafsson, Axel; Hansen, Jacob. January 2021.
Insurance is built on the principle that a group of people contributes to a common pool of money which will be used to cover the costs for individuals who suffer from the insured event. In a competitive market, an insurance company will only be profitable if its pricing reflects the covered risks as well as possible. This thesis investigates the recently proposed Combined Actuarial Neural Network (CANN), a model nesting the traditional Generalised Linear Model (GLM) used in insurance pricing into a Neural Network (NN). The main idea of utilising NNs for insurance pricing is to model interactions between features that the GLM is unable to capture. The CANN model is analysed in a commercial insurance setting with respect to two research questions. The first research question, RQ 1, seeks to answer if the CANN model can outperform the underlying GLM with respect to error metrics and actuarial model evaluation tools. The second research question, RQ 2, seeks to identify existing interpretability methods that can be applied to the CANN model and also showcase how they can be applied. The results for RQ 1 show that CANN models are able to consistently outperform the GLM with respect to the chosen model evaluation tools. A literature search is conducted to answer RQ 2, identifying interpretability methods that either are applicable or are possibly applicable to the CANN model. One interpretability method is also proposed in this thesis specifically for the CANN model, using model-fitted averages on two-dimensional segments of the data. Three interpretability methods from the literature search and the one proposed in this thesis are demonstrated, illustrating how these may be applied. / Försäkringar bygger på principen att en grupp människor bidrar till en gemensam summa pengar som används för att täcka kostnader för individer som råkar ut för den försäkrade händelsen. I en konkurrensutsatt marknad kommer försäkringsbolag endast vara lönsamma om deras prissättning är så bra som möjligt. Denna uppsats undersöker den nyligen föreslagna Combined Actuarial Neural Network (CANN) modellen som bygger in en Generalised Linear Model (GLM) i ett neuralt nätverk, i en praktisk och kommersiell försäkringskontext med avseende på två forskningsfrågor. Huvudidén för en CANN modell är att fånga interaktioner mellan variabler, vilket en GLM inte automatiskt kan göra. Forskningsfråga 1 ämnar undersöka huruvida en CANN modell kan prestera bättre än en GLM med avseende på utvalda statistiska prestationsmått och modellutvärderingsverktyg som används av aktuarier. Forskningsfråga 2 ämnar identifiera några tolkningsverktyg som kan appliceras på CANN modellen samt demonstrera hur de kan användas. Resultaten för Forskningsfråga 1 visar att CANN modellen kan prestera bättre än en GLM. En litteratursökning genomförs för att svara på Forskningsfråga 2, och ett antal tolkningsverktyg identifieras. Ett tolkningsverktyg föreslås också i denna uppsats specifikt för att tolka CANN modellen. Tre av tolkningsverktygen samt det utvecklade verktyget demonstreras för att visa hur de kan användas för att tolka CANN modellen.
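The CANN construction described above can be pictured as a neural network with a skip connection that carries the GLM's linear predictor. The sketch below is not taken from the thesis; it shows one way such a model might be set up for a Poisson claim-frequency case in Keras, where the feature count, layer sizes and the placeholder GLM coefficients are assumptions for illustration only.

```python
import numpy as np
from tensorflow import keras

n_features = 10  # assumed number of rating features after preprocessing

inputs = keras.Input(shape=(n_features,))

# GLM skip connection: a frozen linear layer that will hold the fitted GLM coefficients,
# so the combined model starts exactly at the GLM's linear predictor.
glm_part = keras.layers.Dense(1, name="glm_skip", trainable=False)(inputs)

# Neural-network correction term, zero-initialised so training starts from the GLM.
hidden = keras.layers.Dense(20, activation="tanh")(inputs)
hidden = keras.layers.Dense(10, activation="tanh")(hidden)
nn_part = keras.layers.Dense(1, use_bias=False, kernel_initializer="zeros")(hidden)

# CANN predictor: mu = exp(GLM linear predictor + NN correction), trained on Poisson deviance.
log_mu = keras.layers.Add()([glm_part, nn_part])
mu = keras.layers.Activation("exponential")(log_mu)

model = keras.Model(inputs, mu)
model.compile(optimizer="adam", loss="poisson")

# Load the coefficients of the separately fitted GLM (placeholder values here).
beta = np.zeros((n_features, 1), dtype="float32")
intercept = np.array([-2.0], dtype="float32")
model.get_layer("glm_skip").set_weights([beta, intercept])
```

Because the correction term is zero at initialisation, the model reproduces the GLM before training and only moves away from it where the data support interactions the GLM cannot express, which is what makes a direct benchmark against the underlying GLM possible.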
|
152 |
Practical and Foundational Aspects of Secure Computation. Ranellucci, Samuel. 02 1900.
Il y a des problèmes qui semblent impossibles à résoudre sans l'utilisation d'un tiers parti honnête. Comment est-ce que deux millionnaires peuvent savoir qui est le plus riche sans dire à l'autre la valeur de ses biens ? Que peut-on faire pour prévenir les collisions de satellites quand les trajectoires sont secrètes ? Comment est-ce que les chercheurs peuvent apprendre les liens entre des médicaments et des maladies sans compromettre les droits privés du patient ? Comment est-ce qu'une organisation peut empêcher le gouvernement d'abuser de l'information dont il dispose en sachant que l'organisation doit n'avoir aucun accès à cette information ? Le calcul multiparti, une branche de la cryptographie, étudie comment créer des protocoles pour réaliser de telles tâches sans l'utilisation d'un tiers parti honnête.
Les protocoles doivent être privés, corrects, efficaces et robustes. Un protocole est privé si un adversaire n'apprend rien de plus que ce que lui donnerait un tiers parti honnête. Un protocole est correct si un joueur honnête reçoit ce que lui donnerait un tiers parti honnête. Un protocole devrait bien sûr être efficace. Être robuste correspond au fait qu'un protocole marche même si un petit ensemble des joueurs triche. On démontre que sous l'hypothèse d'un canal de diffusion simultanée on peut échanger la robustesse pour la validité et le fait d'être privé contre certains ensembles d'adversaires.
Le calcul multiparti a quatre outils de base : le transfert inconscient, la mise en gage, le partage de secret et le brouillage de circuit. Les protocoles du calcul multiparti peuvent être construits avec uniquement ces outils. On peut aussi construire les protocoles à partir d'hypothèses calculatoires. Les protocoles construits à partir de ces outils sont souples et peuvent résister aux changements technologiques et à des améliorations algorithmiques. Nous nous demandons si l'efficacité nécessite des hypothèses de calcul. Nous démontrons que ce n'est pas le cas en construisant des protocoles efficaces à partir de ces outils de base.
Cette thèse est constituée de quatre articles rédigés en collaboration avec d'autres chercheurs. Ceci constitue la partie mature de ma recherche et représente mes contributions principales au cours de cette période de temps. Dans le premier ouvrage présenté dans cette thèse, nous étudions la capacité de mise en gage des canaux bruités. Nous démontrons tout d'abord une limite inférieure stricte qui implique que, contrairement au transfert inconscient, il n'existe aucun protocole de taux constant pour les mises en gage de bit. Nous démontrons ensuite que, en limitant la façon dont les engagements peuvent être ouverts, nous pouvons faire mieux et même atteindre un taux constant dans certains cas. Ceci est fait en exploitant la notion de « cover-free families ». Dans le second article, nous démontrons que pour certains problèmes, il existe un échange entre la robustesse, la validité et le privé. Il s'effectue en utilisant le partage de secret vérifiable, une preuve à divulgation nulle, le concept de fantômes et une technique que nous appelons les balles et les bacs. Dans notre troisième contribution, nous démontrons qu'un grand nombre de protocoles dans la littérature basés sur des hypothèses de calcul peuvent être instanciés à partir d'une primitive appelée Transfert Inconscient Vérifiable, via le concept de Transfert Inconscient Généralisé. Le protocole utilise le partage de secret comme outil de base. Dans la dernière publication, nous construisons un protocole efficace avec un nombre constant de rondes pour le calcul à deux parties. L'efficacité du protocole dérive du fait qu'on remplace le cœur d'un protocole standard par une primitive qui fonctionne plus ou moins bien mais qui est très peu coûteuse. On protège le protocole contre les défauts en utilisant le concept de « privacy amplification ». / There are problems that seem impossible to solve without a trusted third party. How can
two millionaires learn who is the richest when neither is willing to tell the other how rich
he is? How can satellite collisions be prevented when the trajectories are secret? How can
researchers establish correlations between diseases and medication while respecting patient
confidentiality? How can an organization ensure that the government does not abuse the
knowledge that it possesses even though such an organization would be unable to control
that information? Secure computation, a branch of cryptography, is a field that studies how
to generate protocols for realizing such tasks without the use of a trusted third party. There
are certain goals that such protocols should achieve. The first concern is privacy: players
should learn no more information than what a trusted third party would give them. The
second main goal is correctness: players should only receive what a trusted third party would
give them. The protocols should also be efficient. Another important property is robustness:
the protocols should not abort even if a small set of players is cheating.
Secure computation has four basic building blocks: Oblivious Transfer, secret sharing,
commitment schemes, and garbled circuits. Protocols can be built based only on these building
blocks or alternatively, they can be constructed from specific computational assumptions.
Protocols constructed solely from these primitives are
flexible and are not as vulnerable to
technological or algorithmic improvements. Many protocols are nevertheless based on computational
assumptions. It is important to ask if efficiency requires computational assumptions.
We show that this is not the case by building efficient protocols from these primitives. It is
the conclusion of this thesis that building protocols from black-box primitives can also lead
to efficient protocols.
This thesis is a collection of four articles written in collaboration with other researchers.
This constitutes the mature part of my investigation and comprises my main contributions to the
field during that period of time. In the first work presented in this thesis we study the commitment
capacity of noisy channels. We first show a tight lower bound that implies that in
contrast to Oblivious Transfer, there exists no constant rate protocol for bit commitments.
We then demonstrate that by restricting the way the commitments can be opened, we can
achieve better efficiency and in particular cases, a constant rate. This is done by exploiting
the notion of cover-free families. In the second article, we show that for certain problems,
there exists a trade-off between robustness, correctness and privacy. This is done by using
verifiable secret sharing, zero-knowledge, the concept of ghosts and a technique which we call
\balls and bins". In our third contribution, we show that many protocols in the literature
based on specific computational assumptions can be instantiated from a primitive known as
Verifiable Oblivious Transfer, via the concept of Generalized Oblivious Transfer. The protocol
uses secret sharing as its foundation. In the last included publication, we construct a
constant-round protocol for secure two-party computation that is very efficient and only uses
black-box primitives. The remarkable efficiency of the protocol is achieved by replacing the
core of a standard protocol by a faulty but very efficient primitive. The fault is then dealt
with by a non-trivial use of privacy amplification.
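Several of the building blocks named above are simple to state in code. As one concrete illustration (a textbook construction, not taken from the thesis), the sketch below implements additive secret sharing over a prime field; the modulus and the number of shares are arbitrary choices.

```python
import secrets

P = 2**61 - 1  # prime modulus; any sufficiently large prime works for this illustration

def share(secret: int, n: int = 3) -> list[int]:
    """Split `secret` into n additive shares whose sum modulo P equals the secret."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares; any proper subset is uniformly random and reveals nothing."""
    return sum(shares) % P

s = 123456789
assert reconstruct(share(s)) == s
```

Verifiable secret sharing, used in the second article, extends this idea so that the players can check that the dealer distributed consistent shares.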
|
153 |
Surrogate-Assisted Evolutionary Algorithms / Les algorithmes évolutionnaires à la base de méta-modèles scalaires. Loshchilov, Ilya. 08 January 2013.
Les Algorithmes Évolutionnaires (AEs) ont été très étudiés en raison de leur capacité à résoudre des problèmes d'optimisation complexes en utilisant des opérateurs de variation adaptés à des problèmes spécifiques. Une recherche dirigée par une population de solutions offre une bonne robustesse par rapport à un bruit modéré et à la multi-modalité de la fonction optimisée, contrairement à d'autres méthodes d'optimisation classiques telles que les méthodes de quasi-Newton. La principale limitation des AEs, le grand nombre d'évaluations de la fonction objectif, pénalise toutefois l'usage des AEs pour l'optimisation de fonctions chères en temps calcul. La présente thèse se concentre sur un algorithme évolutionnaire, Covariance Matrix Adaptation Evolution Strategy (CMA-ES), connu comme un algorithme puissant pour l'optimisation continue boîte noire. Nous présentons l'état de l'art des algorithmes, dérivés de CMA-ES, pour résoudre les problèmes d'optimisation mono- et multi-objectifs dans le scénario boîte noire. Une première contribution, visant l'optimisation de fonctions coûteuses, concerne l'approximation scalaire de la fonction objectif. Le méta-modèle appris respecte l'ordre des solutions (induit par la valeur de la fonction objectif pour ces solutions) ; il est ainsi invariant par transformation monotone de la fonction objectif. L'algorithme ainsi défini, saACM-ES, intègre étroitement l'optimisation réalisée par CMA-ES et l'apprentissage statistique de méta-modèles adaptatifs ; en particulier, les méta-modèles reposent sur la matrice de covariance adaptée par CMA-ES. saACM-ES préserve ainsi les deux propriétés clés d'invariance de CMA-ES : invariance i) par rapport aux transformations monotones de la fonction objectif et ii) par rapport aux transformations orthogonales de l'espace de recherche. L'approche est étendue au cadre de l'optimisation multi-objectifs, en proposant deux types de méta-modèles (scalaires). Le premier repose sur la caractérisation du front de Pareto courant (utilisant une variante mixte de One Class Support Vector Machine (SVM) pour les points dominés et de Regression SVM pour les points non-dominés). Le second repose sur l'apprentissage de l'ordre (rang de Pareto) des solutions. Ces deux approches sont intégrées à CMA-ES pour l'optimisation multi-objectif (MO-CMA-ES) et nous discutons quelques aspects de l'exploitation de méta-modèles dans le contexte de l'optimisation multi-objectif. Une seconde contribution concerne la conception d'algorithmes nouveaux pour l'optimisation mono-objectif, multi-objectifs et multi-modale, développés pour comprendre, explorer et élargir les frontières du domaine des algorithmes évolutionnaires et de CMA-ES en particulier. Spécifiquement, l'adaptation du système de coordonnées proposée par CMA-ES est couplée à une méthode adaptative de descente coordonnée par coordonnée. Une stratégie adaptative de redémarrage de CMA-ES est proposée pour l'optimisation multi-modale. Enfin, des stratégies de sélection adaptées aux cas de l'optimisation multi-objectifs et remédiant aux difficultés rencontrées par MO-CMA-ES sont proposées. / Evolutionary Algorithms (EAs) have received a lot of attention regarding their potential to solve complex optimization problems using problem-specific variation operators. A search directed by a population of candidate solutions is quite robust with respect to moderate noise and multi-modality of the optimized function, in contrast to some classical optimization methods such as quasi-Newton methods.
The main limitation of EAs, the large number of function evaluations required, prevents the use of EAs on computationally expensive problems, where one evaluation takes much longer than 1 second. The present thesis focuses on an evolutionary algorithm, Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which has become a standard powerful tool for continuous black-box optimization. We present several state-of-the-art algorithms, derived from CMA-ES, for solving single- and multi-objective black-box optimization problems. First, in order to deal with expensive optimization, we propose to use comparison-based surrogate (approximation) models of the optimized function, which do not exploit function values of candidate solutions, but only their quality-based ranking. The resulting self-adaptive surrogate-assisted CMA-ES represents a tight coupling of statistical machine learning and CMA-ES, where a surrogate model is built taking advantage of the function topology given by the covariance matrix adapted by CMA-ES. This preserves two key invariance properties of CMA-ES: invariance with respect to i) monotone transformation of the function, and ii) orthogonal transformation of the search space. For multi-objective optimization we propose two mono-surrogate approaches: i) a mixed variant of One Class Support Vector Machine (SVM) for dominated points and Regression SVM for non-dominated points; ii) Ranking SVM for preference learning of candidate solutions in the multi-objective space. We further integrate these two approaches into multi-objective CMA-ES (MO-CMA-ES) and discuss aspects of surrogate-model exploitation. Second, we introduce and discuss various algorithms, developed to understand, explore and expand the frontiers of the Evolutionary Computation domain, and of CMA-ES in particular. We introduce a linear-time Adaptive Coordinate Descent method for non-linear optimization, which inherits a CMA-like procedure for adapting an appropriate coordinate system without losing the initial simplicity of Coordinate Descent. For multi-modal optimization we propose to adaptively select the most suitable regime of restarts of CMA-ES and introduce corresponding alternative restart strategies. For multi-objective optimization we analyze case studies where the original parent selection procedures of MO-CMA-ES are inefficient, and introduce reward-based parent selection strategies focused on the comparative success of generated solutions.
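The comparison-based surrogate idea can be illustrated with a toy pre-screening step: sample many candidates, rank them with a cheap surrogate that only uses the ordering of previously evaluated points, and spend true evaluations only on the most promising ones. The sketch below is a deliberate simplification (nearest-neighbour scores and plain Gaussian sampling stand in for the Ranking SVM built in the CMA metric and for CMA-ES's own sampling); the sphere function, dimensions and population sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # stand-in for the expensive objective
    return float(np.sum(x ** 2))

def rank_surrogate_scores(candidates, archive_x, archive_f):
    """Score each candidate by the objective value of its nearest archived point.
    Only the ordering of the scores is used for selection, so any monotone
    transformation of the objective leaves the outcome unchanged."""
    d = np.linalg.norm(candidates[:, None, :] - archive_x[None, :, :], axis=2)
    return archive_f[np.argmin(d, axis=1)]

# One pre-screened generation: sample lam_large candidates around the current mean,
# keep the lam most promising according to the surrogate, evaluate only those.
dim, lam, lam_large = 5, 10, 50
mean, sigma = np.ones(dim), 0.5
archive_x = rng.normal(size=(30, dim))
archive_f = np.array([sphere(x) for x in archive_x])

candidates = mean + sigma * rng.normal(size=(lam_large, dim))
scores = rank_surrogate_scores(candidates, archive_x, archive_f)
selected = candidates[np.argsort(scores)[:lam]]
true_values = np.array([sphere(x) for x in selected])   # only lam expensive calls
```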
|
154 |
Contribution à l'estimation d'état et au diagnostic des systèmes représentés par des multimodèles / A contribution to state estimation and diagnosis of systems modelled by multiple models. Orjuela, Rodolfo. 06 November 2008.
Nombreux sont les problèmes classiquement rencontrés dans les sciences de l'ingénieur dont la résolution fait appel à l'estimation d'état d'un système par le biais d'un observateur. La synthèse d'un observateur n'est envisageable qu'à la condition de disposer d'un modèle à la fois exploitable et représentatif du comportement dynamique du système. Or, la modélisation du système et la synthèse de l'observateur deviennent des tâches difficiles à accomplir dès lors que le comportement dynamique du système doit être représenté par un modèle de nature non linéaire. Face à ces difficultés, l'approche multimodèle peut être mise à profit. Les travaux présentés dans cette thèse portent sur les problèmes soulevés par l'identification, l'estimation d'état et le diagnostic de systèmes non linéaires représentés à l'aide d'un multimodèle découplé. Ce dernier, composé de sous-modèles qui peuvent être de dimensions différentes, est doté d'un haut degré de généralité et de flexibilité et s'adapte particulièrement bien à la modélisation des systèmes complexes à structure variable. Cette caractéristique le démarque des approches multimodèles plus conventionnelles qui ont recours à des sous-modèles de même dimension. Après une brève introduction à l'approche multimodèle, le problème de l'estimation paramétrique du multimodèle découplé est abordé. Puis sont présentés des algorithmes de synthèse d'observateurs d'état robustes vis-à-vis des perturbations, des incertitudes paramétriques et des entrées inconnues affectant le système. Ces algorithmes sont élaborés à partir de trois types d'observateurs dits à gain proportionnel, à gain proportionnel-intégral et à gain multi-intégral. Enfin, les différentes phases d'identification, de synthèse d'observateurs et de génération d'indicateurs de défauts sont illustrées au moyen d'un exemple académique de diagnostic du fonctionnement d'un bioréacteur. / The state estimation of a system, with the help of an observer, is widely used in many practical situations in order to cope with classic problems arising in control engineering. The observer design needs an exploitable model able to give an accurate description of the dynamic behaviour of the system. However, system modelling and observer design cannot easily be accomplished when the dynamic behaviour of the system must be described by nonlinear models. The multiple model approach can be used to tackle these difficulties. This thesis deals with black-box modelling, state estimation and fault diagnosis of nonlinear systems represented by a decoupled multiple model. This kind of multiple model provides a high degree of generality and flexibility in the modelling stage. Indeed, the decoupled multiple model is composed of submodels whose dimensions can be different. This feature is a significant difference between the decoupled multiple model and the classically used multiple models, where all the submodels have the same dimension. After a brief introduction to the multiple model approach, the parametric identification problem of a decoupled multiple model is explored. Algorithms for the synthesis of observers that are robust with respect to perturbations, modelling uncertainties and unknown inputs are afterwards presented. These algorithms are based on three kinds of observers, called proportional, proportional-integral and multiple-integral. Lastly, identification, observer synthesis and the generation of fault indicator signals are illustrated via a simulation example of a bioreactor.
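A small numerical sketch may help picture the decoupled multiple model: submodels with different state dimensions evolve independently and only their outputs are blended by weighting (validity) functions. All matrices and the Gaussian weighting functions below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# Two linear submodels with different state dimensions (the defining feature of the
# decoupled multiple model); all numerical values are illustrative placeholders.
A1 = np.array([[0.9, 0.1], [0.0, 0.8]]); B1 = np.array([1.0, 0.5]); C1 = np.array([1.0, 0.0])
A2 = np.array([[0.7]]);                  B2 = np.array([1.0]);      C2 = np.array([1.0])

def weights(u):
    """Gaussian validity functions of the decision variable, here the input u."""
    mu = np.exp(-0.5 * ((u - np.array([-1.0, 1.0])) / 0.8) ** 2)
    return mu / mu.sum()

def step(x1, x2, u):
    """One time step: submodels evolve independently, only their outputs are blended."""
    x1 = A1 @ x1 + B1 * u
    x2 = A2 @ x2 + B2 * u
    mu = weights(u)
    y = mu[0] * (C1 @ x1) + mu[1] * (C2 @ x2)
    return x1, x2, float(y)

x1, x2 = np.zeros(2), np.zeros(1)
for k in range(50):
    u = np.sin(0.1 * k)
    x1, x2, y = step(x1, x2, u)
```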
|
155 |
Modélisation CEM des équipements aéronautiques : aide à la qualification de l'essai BCI / EMC modeling of aeronautical equipment : support for the qualification of the BCI test. Cheaito, Hassan. 06 November 2017.
L’intégration de l’électronique dans des environnements sévères d’un point de vue électromagnétique a entraîné en contrepartie l’apparition de problèmes de compatibilité électromagnétique (CEM) entre les différents systèmes. Afin d’atteindre un niveau de performance satisfaisant, des tests de sécurité et de certification sont nécessaires. Ces travaux de thèse, réalisés dans le cadre du projet SIMUCEDO (SIMUlation CEM basée sur la norme DO-160), contribuent à la modélisation du test de qualification "Bulk Current Injection" (BCI). Ce test, abordé dans la section 20 dans la norme DO-160 dédiée à l’aéronautique, est désormais obligatoire pour une très grande gamme d’équipements aéronautiques. Parmi les essais de qualification, le test BCI est l’un des plus contraignants et consommateurs du temps. Sa modélisation assure un gain de temps, et une meilleure maîtrise des paramètres qui influencent le passage des tests CEM. La modélisation du test a été décomposée en deux parties : l’équipement sous test (EST) d’une part, et la pince d’injection avec les câbles d’autre part. Dans cette thèse, seul l’EST est pris en compte. Une modélisation "boîte grise" a été proposée en associant un modèle "boîte noire" avec un modèle "extensif". Le modèle boîte noire s’appuie sur la mesure des impédances standards. Son identification se fait avec un modèle en pi. Le modèle extensif permet d’étudier plusieurs configurations de l’EST en ajustant les paramètres physiques. L’assemblage des deux modèles en un modèle boîte grise a été validé sur un convertisseur analogique-numérique (CAN). Une autre approche dénommée approche modale en fonction du mode commun (MC) et du mode différentiel (MD) a été proposée. Elle se base sur les impédances modales du système sous test. Des PCB spécifiques ont été conçus pour valider les équations développées. Une investigation est menée pour définir rigoureusement les impédances modales. Nous avons démontré qu’il y a une divergence entre deux définitions de l’impédance de MC dans la littérature. Ainsi, la conversion de mode (ou rapport Longitudinal Conversion Loss : LCL) a été quantifiée grâce à ces équations. Pour finir, le modèle a été étendu à N-entrées pour représenter un EST de complexité industrielle. Le modèle de l’EST est ensuite associé avec celui de la pince et des câbles travaux réalisés au G2ELAB. Des mesures expérimentales ont été faites pour valider le modèle complet. D’après ces mesures, le courant de MC est impacté par la mise en œuvre des câbles ainsi que celle de l’EST. Il a été montré que la connexion du blindage au plan de masse est le paramètre le plus impactant sur la distribution du courant de MC. / Electronic equipments intended to be integrated in aircrafts are subjected to normative requirements. EMC (Electromagnetic Compatibility) qualification tests became one of the mandatory requirements. This PhD thesis, carried out within the framework of the SIMUCEDO project (SIMulation CEM based on the DO-160 standard), contributes to the modeling of the Bulk Current Injection (BCI) qualification test. Concept, detailed in section 20 in the DO-160 standard, is to generate a noise current via cables using probe injection, then monitor EUT satisfactorily during test. Among the qualification tests, the BCI test is one of the most constraining and time consuming. Thus, its modeling ensures a saving of time, and a better control of the parameters which influence the success of the equipment under test. 
The modeling of the test was split into two parts: the equipment under test (EUT) on the one hand, and the injection probe with the cables on the other. This thesis focuses on the EUT modeling. A "gray box" model was proposed by associating a "black box" model with an "extensive" model. The gray box is based on the measurement of standard impedances, and its identification is done with a "pi" model. The model, which has the advantage of taking several configurations of the EUT into account, has been validated on an analog-to-digital converter (ADC). Another, so-called modal approach, expressed in terms of common mode (CM) and differential mode (DM), has been proposed. It takes the mode conversion into account when the EUT is asymmetrical. Specific PCBs were designed to validate the developed equations. An investigation was carried out to rigorously define the modal impedances, in particular the common-mode impedance. We have shown that there is a discrepancy between two definitions of the CM impedance in the literature. Furthermore, the mode conversion ratio (or Longitudinal Conversion Loss, LCL) was quantified using analytical equations based on the modal approach. The model was then extended to N inputs in order to represent an EUT of industrial complexity. The EUT model is combined with the model of the clamp and the cables (made by the G2ELAB laboratory). Experimental measurements have been made to validate the combined model. According to these measurements, the CM current is influenced by the setup of the cables as well as by that of the EUT. It has been shown that the connection of the shield to the ground plane is the parameter with the greatest impact on the CM current distribution.
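To make the modal quantities concrete, the sketch below computes common-mode and differential-mode impedances of a two-terminal pi model under one common textbook convention (terminals tied together against ground for CM, terminal to terminal for DM). As the thesis stresses, CM impedance definitions differ in the literature, and the component values here are arbitrary, not measured data.

```python
import numpy as np

def pi_model_modal_impedances(z1, z2, z3):
    """CM/DM impedances of a two-terminal pi model: z1 and z2 are the shunt arms to
    ground, z3 the arm bridging the two terminals (all complex values)."""
    z_cm = (z1 * z2) / (z1 + z2)               # terminals tied together: z3 carries no current
    z_dm = (z3 * (z1 + z2)) / (z3 + z1 + z2)   # between terminals: z3 in parallel with z1 + z2
    return z_cm, z_dm

# Example at 10 MHz with illustrative RC/RL branch values
w = 2 * np.pi * 10e6
z_shunt = 1 / (1j * w * 100e-12)     # 100 pF from each line to ground
z_bridge = 50 + 1j * w * 1e-6        # 50 ohm in series with 1 uH between the lines
print(pi_model_modal_impedances(z_shunt, z_shunt, z_bridge))
```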
|
156 |
A Comprehensive Embodied Energy Analysis Framework. Treloar, Graham John. January 1998.
The assessment of the direct and indirect requirements for energy is known as embodied energy analysis. For buildings, the direct energy includes that used primarily on site, while the indirect energy includes primarily the energy required for the manufacture of building materials. This thesis is concerned with the completeness and reliability of embodied energy analysis methods. Previous methods tend to address either one of these issues, but not both at the same time. Industry-based methods are incomplete. National statistical methods, while comprehensive, are a black box and are subject to errors. A new hybrid embodied energy analysis method is derived to optimise the benefits of previous methods while minimising their flaws.
In industry-based studies, known as process analyses, the energy embodied in a product is traced laboriously upstream by examining the inputs to each preceding process towards raw materials. Process analyses can be significantly incomplete, due to increasing complexity. The other major embodied energy analysis method, input-output analysis, comprises the use of national statistics. While the input-output framework is comprehensive, many inherent assumptions make the results unreliable.
Hybrid analysis methods involve the combination of the two major embodied energy analysis methods discussed above, either based on process analysis or input-output analysis. The intention in both hybrid analysis methods is to reduce errors associated with the two major methods on which they are based. However, the problems inherent to each of the original methods tend to remain, to some degree, in the associated hybrid versions.
Process-based hybrid analyses tend to be incomplete, due to the exclusions associated with the process analysis framework. However, input-output-based hybrid analyses tend to be unreliable because the substitution of process analysis data into the input-output framework causes unwanted indirect effects.
A key deficiency in previous input-output-based hybrid analysis methods is that the input-output model is a black box, since important flows of goods and services with respect to the embodied energy of a sector cannot be readily identified. A new input-output-based hybrid analysis method was therefore developed, requiring the decomposition of the input-output model into mutually exclusive components (ie, direct energy paths).
A direct energy path represents a discrete energy requirement, possibly occurring one or more transactions upstream from the process under consideration. For example, the energy required directly to manufacture the steel used in the construction of a building would represent a direct energy path of one non-energy transaction in length. A direct energy path comprises a product quantity (for example, the total tonnes of cement used) and a direct energy intensity (for example, the energy required directly for cement manufacture, per tonne).
The input-output model was decomposed into direct energy paths for the residential building construction sector. It was shown that 592 direct energy paths were required to describe 90% of the overall total energy intensity for residential building construction. By extracting direct energy paths using yet smaller threshold values, they were shown to be mutually exclusive. Consequently, the modification of direct energy paths using process analysis data does not cause unwanted indirect effects.
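The decomposition into direct energy paths can be sketched as a tree walk over the technological coefficients matrix: each path value is a direct energy intensity multiplied by a chain of input-output coefficients ending at the sector of interest. The toy implementation below is illustrative only; the pruning rule, threshold, depth limit and example data are assumptions, not the method or data used in the thesis.

```python
import numpy as np

def direct_energy_paths(A, e, target, threshold=1e-4, max_stages=4):
    """Enumerate direct energy paths for sector `target` from technological
    coefficients A (A[i, j] = input from sector i per unit output of sector j)
    and direct energy intensities e. A path value is e[i] * A[i, j] * ... * A[k, target]."""
    n = A.shape[0]
    paths = []

    def expand(sector, value, trail, stage):
        if stage > max_stages:
            return
        for supplier in range(n):
            v = value * A[supplier, sector]
            if v * e[supplier] >= threshold:
                paths.append((trail + [supplier], v * e[supplier]))
            if v >= threshold:          # crude pruning: stop once the coefficient product is tiny
                expand(supplier, v, trail + [supplier], stage + 1)

    paths.append(([target], e[target]))  # stage-0 path: the sector's own direct energy
    expand(target, 1.0, [target], 1)
    return sorted(paths, key=lambda p: -p[1])

# Toy 3-sector example (made-up coefficients and intensities)
A = np.array([[0.05, 0.10, 0.02],
              [0.20, 0.05, 0.30],
              [0.10, 0.15, 0.05]])
e = np.array([5.0, 0.5, 1.2])
for trail, value in direct_energy_paths(A, e, target=2)[:5]:
    print(" <- ".join(str(s) for s in trail), round(value, 4))
```

Because the paths are mutually exclusive terms of the expansion, any single path can be replaced by case-specific process analysis data without disturbing the others, which is the property the thesis exploits.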
A non-standard individual residential building was then selected to demonstrate the benefits of the new input-output-based hybrid analysis method in cases where the products of a sector may not be similar. Particular direct energy paths were modified with case specific process analysis data. Product quantities and direct energy intensities were derived and used to modify some of the direct energy paths. The intention of this demonstration was to determine whether 90% of the total embodied energy calculated for the building could comprise the process analysis data normally collected for the building. However, it was found that only 51% of the total comprised normally collected process analysis. The integration of process analysis data with 90% of the direct energy paths by value was unsuccessful because:
- typically only one of the direct energy path components was modified using process analysis data (ie, either the product quantity or the direct energy intensity);
- of the complexity of the paths derived for residential building construction; and
- of the lack of reliable and consistent process analysis data from industry, for both product quantities and direct energy intensities.
While the input-output model used was the best available for Australia, many errors were likely to be carried through to the direct energy paths for residential building construction. Consequently, both the value and relative importance of the direct energy paths for residential building construction were generally found to be a poor model for the demonstration building. This was expected. Nevertheless, in the absence of better data from industry, the input-output data is likely to remain the most appropriate for completing the framework of embodied energy analyses of many types of products, even in non-standard cases.
Residential building construction was one of the 22 most complex Australian economic sectors (ie, comprising those requiring between 592 and 3215 direct energy paths to describe 90% of their total energy intensities). Consequently, for the other 87 non-energy sectors of the Australian economy, the input-output-based hybrid analysis method is likely to produce more reliable results than those calculated for the demonstration building using the direct energy paths for residential building construction.
For more complex sectors than residential building construction, the new input-output-based hybrid analysis method derived here allows available process analysis data to be integrated with the input-output data in a comprehensive framework. The proportion of the result comprising the more reliable process analysis data can be calculated and used as a measure of the reliability of the result for that product or part of the product being analysed (for example, a building material or component).
To ensure that future applications of the new input-output-based hybrid analysis method produce reliable results, new sources of process analysis data are required, including for such processes as services (for example, banking) and processes involving the transformation of basic materials into complex products (for example, steel and copper into an electric motor).
However, even considering the limitations of the demonstration described above, the new input-output-based hybrid analysis method developed achieved the aim of the thesis: to develop a new embodied energy analysis method that allows reliable process analysis data to be integrated into the comprehensive, yet unreliable, input-output framework.
Plain language summary
Embodied energy analysis comprises the assessment of the direct and indirect energy requirements associated with a process. For example, the construction of a building requires the manufacture of steel structural members, and thus indirectly requires the energy used directly and indirectly in their manufacture. Embodied energy is an important measure of ecological sustainability because energy is used in virtually every human activity and many of these activities are interrelated.
This thesis is concerned with the relationship between the completeness of embodied energy analysis methods and their reliability. However, previous industry-based methods, while reliable, are incomplete. Previous national statistical methods, while comprehensive, are a black box subject to errors.
A new method is derived, involving the decomposition of the comprehensive national statistical model into components that can be modified discretely using the more reliable industry data, and is demonstrated for an individual building. The demonstration failed to integrate enough industry data into the national statistical model, due to the unexpected complexity of the national statistical data and the lack of available industry data regarding energy and non-energy product requirements.
These unique findings highlight the flaws in previous methods. Reliable process analysis and input-output data are required, particularly for those processes that were unable to be examined in the demonstration of the new embodied energy analysis method. This includes the energy requirements of services sectors, such as banking, and processes involving the transformation of basic materials into complex products, such as refrigerators. The application of the new method to less complex products, such as individual building materials or components, is likely to be more successful than to the residential building demonstration.
|
157 |
Self-healing RF SoCs: low cost built-in test and control driven simultaneous tuning of multiple performance metrics. Natarajan, Vishwanath. 13 October 2010.
The advent of deep submicron technology, coupled with ever-increasing demands from the customer for more functionality on compact silicon real estate, has led to a proliferation of highly complex integrated RF system-on-chip (SoC) and system-on-insulator (SoI) solutions. The use of scaled CMOS technologies for high-frequency wireless applications is posing daunting technological challenges in both design and manufacturing test.
To ensure market success, manufacturers need to ensure the quality of these advanced RF devices by subjecting them to a conventional set of production test routines that are both time-consuming and expensive. Typically the devices are tested for parametric specifications such as gain, linearity metrics, quadrature mismatches, phase noise, noise figure (NF) and end-to-end system level specifications such as EVM (error vector magnitude), BER (bit-error-rate) etc. Due to the reduced visibility imposed by high levels of integration, testing for parametric specifications is becoming more and more complex.
To offset the yield loss resulting from process variability effects and reliability issues in RF circuits, the use of self-healing/self-tuning mechanisms will be imperative. Such self-healing is typically implemented as a test/self-test and self-tune procedure and is applied post-manufacture. To enable this, simple test routines that can accurately diagnose complex performance parameters of the RF circuits need to be developed first. After diagnosing the performance of a complex RF system, appropriate compensation techniques need to be developed to increase or restore the system performance. Moreover, the test, diagnosis and compensation approach should be low-cost with minimal hardware and software overhead to ensure that the final product is economically viable for the manufacturer.
The main components of the thesis are as follows:
1) Low-cost specification testing of advanced radio frequency front-ends:
Methodologies are developed to address the issue of test cost and test time associated with conventional production testing of advanced RF front-ends. The developed methodologies are amenable to performing self-healing of RF SoCs. Test generation algorithms are developed to perform alternate test stimulus generation that includes the artifacts of the test signal path, such as response capture accuracy and load-board DfT. A novel cross loop-back methodology is developed to perform low-cost system-level specification testing of multi-band RF transceivers. A novel low-cost EVM testing approach is developed for production testing of wireless 802.11 OFDM front-ends. A signal-transformation-based model extraction technique is developed to compute multiple RF system-level specifications of wireless front-ends from a single data capture. The developed techniques are low-cost and facilitate a reduction in the overall contribution of test cost towards the manufacturing cost of advanced wireless products.
2) Analog tuning methodologies for compensating wireless RF front-ends:
Methodologies for performing low-cost self-tuning of multiple impairments of wireless RF devices are developed. This research considers, for the first time, multiple analog tuning parameters of a complete RF transceiver system (transmitter and receiver) for tuning purposes. The developed techniques are demonstrated on hardware components and behavioral models to improve the overall yield of integrated RF SoCs.
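The low-cost specification testing in part 1 rests on the alternate-test idea: learn a mapping from an inexpensive test signature to the conventional specifications, then predict the specifications of production devices from the cheap measurement alone. The sketch below caricatures this with plain ridge regression on synthetic data; the thesis develops richer nonlinear mappings and also optimises the stimulus that produces the signature, so the function names and data here are illustrative only.

```python
import numpy as np

def train_alternate_test(signatures, specs, lam=1e-3):
    """Fit a ridge-regression map from low-cost test signatures to specifications
    such as gain, IIP3 or EVM, using a small set of fully characterised devices."""
    X = np.hstack([signatures, np.ones((signatures.shape[0], 1))])   # add bias term
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ specs)
    return W

def predict_specs(W, signature):
    """Predict all specifications of a new device from its cheap signature."""
    return np.append(signature, 1.0) @ W

rng = np.random.default_rng(0)
signatures = rng.normal(size=(40, 8))                                 # 40 training devices
specs = signatures @ rng.normal(size=(8, 3)) + 0.01 * rng.normal(size=(40, 3))
W = train_alternate_test(signatures, specs)
print(predict_specs(W, signatures[0]))
```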
|
158 |
Graybox-baserade säkerhetstest : Att kostnadseffektivt simulera illasinnade angrepp. Linnér, Samuel. January 2008.
Att genomföra ett penetrationstest av en nätverksarkitektur är komplicerat, riskfyllt och omfattande. Denna rapport utforskar hur en konsult bäst genomför ett internt penetrationstest tidseffektivt, utan att utelämna viktiga delar. I ett internt penetrationstest får konsulten ofta ta del av systemdokumentation för att skaffa sig en bild av nätverksarkitekturen, på så sätt elimineras den tid det tar att kartlägga hela nätverket manuellt. Detta medför även att eventuella anomalier i systemdokumentationen kan identifieras. Kommunikation med driftansvariga under testets gång minskar risken för missförstånd och systemkrascher. Om allvarliga sårbarheter identifieras meddelas driftpersonalen omgående. Ett annat sätt att effektivisera testet är att skippa tidskrävande uppgifter som kommer att lyckas förr eller senare, t.ex. lösenordsknäckning, och istället påpeka att orsaken till sårbarheten är att angriparen har möjlighet att testa lösenord obegränsat antal gånger. Därutöver är det lämpligt att simulera vissa attacker som annars kan störa produktionen om testet genomförs i en driftsatt miljö.
Resultatet av rapporten är en checklista som kan tolkas som en generell metodik för hur ett internt penetrationstest kan genomföras. Checklistans syfte är att underlätta vid genomförande av ett test. Processen består av sju steg: förberedelse och planering, informationsinsamling, sårbarhetsdetektering och analys, rättighetseskalering, penetrationstest samt summering och rapportering. /
A network architecture penetration test is complicated, full of risks and extensive. This report explores how a consultant carries it out in the most time-effective way without overlooking important parts. In an internal penetration test the consultant is often allowed to view the system documentation of the network architecture, which saves a lot of time since no full host discovery is needed. This also makes it possible to discover anomalies in the system documentation. Communication with system administrators during the test minimizes the risk of misunderstandings and system crashes. If serious vulnerabilities are discovered, the system administrators have to be informed immediately. Another way to make the test more effective is to skip time-consuming tasks which will succeed sooner or later, e.g. password cracking, and instead point out that the reason for the vulnerability is the ability to try passwords an unlimited number of times. It is also appropriate to simulate certain attacks which could otherwise disturb production when the test is carried out in a live environment.
The result of the report is a checklist that can be read as a general methodology for how internal penetration tests can be carried out. The purpose of the checklist is to make internal penetration tests easier to perform. The process is divided into seven steps: planning and preparation, information gathering, vulnerability detection and analysis, privilege escalation, penetration testing, and final reporting.
|
159 |
Funkcinių testų skaitmeniniams įrenginiams projektavimas ir analizė / Design and analysis of functional tests for digital devices. Narvilas, Rolandas. 31 August 2011.
Projekto tikslas – sukurti sistemą, skirtą schemų testinių atvejų atrinkimui naudojant „juodos dėžės" modelius ir jiems pritaikytus gedimų modelius. Vykdant projektą buvo atlikta kūrimo būdų ir technologijų analizė. Sistemos architektūra buvo kuriama atsižvelgiant į reikalavimą naudoti schemų modelius, kurie yra parašyti C programavimo kalba. Buvo atlikta schemų failų integravimo efektyvumo analizė, tiriamos atsitiktinio testinių atvejų generavimo sekos patobulinimo galimybės ir „1" pasiskirstymo įtaka atsitiktinai generuojamų testinių atvejų kokybei. Tyrimų rezultatai:
• Schemų modelių integracijos tipas mažai įtakoja sistemos darbą.
• Pusiau deterministinių metodų taikymas parodė, jog atskirų žingsnių optimizacija nepagerina galutinio rezultato.
• „1" pasiskirstymas atsitiktinai generuojamose sekose turi įtaką testo kokybei ir gali būti naudojamas testų procesų pagerinimui.
/ Project objective – to develop a system that generates functional tests for non-scan synchronous sequential circuits based on functional delay models. During project execution, an analysis of design approaches and technologies was performed. The architecture of the developed software is based on the requirement to be able to use the models of the benchmark circuits that are written in the C programming language. An analysis of the effectiveness of the model file integration, of the possibilities of improving random test sequence generation, and of the influence of the distribution of „1" in randomly generated test patterns was performed. The results of the analysis were:
• The type of model file integration has little effect when using large circuit models.
• The implementation of semi-deterministic algorithms showed that the optimisation of separate steps in the construction of test subsequences does not improve the final outcome.
• The distribution of „1" in randomly generated test patterns has an effect on the fault coverage and can be used to improve the test generation process.
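The last finding suggests biasing the proportion of 1s in the generated patterns. A minimal sketch of such weighted random pattern generation is shown below; the circuit width, pattern count and probability values are arbitrary choices, not parameters from the project.

```python
import random

def weighted_random_patterns(n_inputs, n_patterns, p_one=0.5, seed=1):
    """Generate random test patterns in which each input bit is 1 with
    probability p_one; varying p_one away from 0.5 is the simplest way to
    study how the distribution of 1s affects fault coverage."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p_one else 0 for _ in range(n_inputs)]
            for _ in range(n_patterns)]

# e.g. compare the fault coverage obtained with p_one = 0.3, 0.5 and 0.7
patterns = weighted_random_patterns(n_inputs=32, n_patterns=1000, p_one=0.7)
```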
|
160 |
Metamodeling strategies for high-dimensional simulation-based design problems. Shan, Songqing. 13 October 2010.
Computational tools such as finite element analysis and simulation are commonly used for system performance analysis and validation. It is often impractical to rely exclusively on the high-fidelity simulation model for design activities because of high computational costs. Mathematical models are typically constructed to approximate the simulation model to help with the design activities. Such models are referred to as "metamodels." The process of constructing a metamodel is called "metamodeling."
Metamodeling, however, faces major challenges that arise from the high dimensionality of the underlying problems, in addition to the high computational costs and unknown function properties (that is, black-box functions) of the analysis/simulation. The combination of these three challenges defines the so-called high-dimensional, computationally-expensive, and black-box (HEB) problems. Currently, there is a lack of practical methods to deal with HEB problems.
This dissertation, by means of surveying existing techniques, has found that the major deficiency of current metamodeling approaches lies in the separation of the metamodeling from the properties of the underlying functions. The survey has also identified two promising approaches - mapping and decomposition - for solving HEB problems. A new analytic methodology, radial basis function–high-dimensional model representation (RBF-HDMR), has been proposed to model HEB problems. The RBF-HDMR decomposes the effects of variables or variable sets on system outputs. The RBF-HDMR, as compared with other metamodels, has three distinct advantages: 1) it fundamentally reduces the number of calls to the expensive simulation needed to build a metamodel, thus breaking or alleviating the exponentially increasing computational difficulty; 2) it reveals the functional form of the black-box function; and 3) it discloses the intrinsic characteristics (for instance, linearity/nonlinearity) of the black-box function.
The RBF-HDMR has been intensively tested on mathematical and practical problems chosen from the literature. The methodology has also been successfully applied to the power transfer capability analysis of the Manitoba-Ontario Electrical Interconnections with 50 variables. The test results demonstrate that the RBF-HDMR is a powerful tool to model large-scale simulation-based engineering problems. The RBF-HDMR model and its construction approach therefore represent a breakthrough in modeling HEB problems and make it possible to optimize high-dimensional simulation-based design problems.
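A first-order cut-HDMR with RBF-interpolated component functions conveys the flavour of RBF-HDMR: the black-box function is approximated as a constant plus one-dimensional corrections along each coordinate through a cut centre. The sketch below is a strong simplification (the actual method adds higher-order component functions adaptively and detects linear terms); scipy's Rbf interpolator, the sample counts and the toy function are choices made here purely for illustration.

```python
import numpy as np
from scipy.interpolate import Rbf

def build_first_order_rbf_hdmr(f, center, lower, upper, n_samples=5):
    """First-order cut-HDMR: f(x) ~ f0 + sum_i fi(xi), where fi(xi) is the change
    in f when only coordinate i is moved away from the cut centre, interpolated
    with a radial basis function from a handful of expensive evaluations."""
    d = len(center)
    f0 = f(np.asarray(center, dtype=float))
    components = []
    for i in range(d):
        xi = np.linspace(lower[i], upper[i], n_samples)
        vals = []
        for v in xi:
            x = np.array(center, dtype=float)
            x[i] = v
            vals.append(f(x) - f0)
        components.append(Rbf(xi, np.array(vals), function="multiquadric"))

    def surrogate(x):
        return f0 + sum(components[i](x[i]) for i in range(d))

    return surrogate

def f(x):  # toy "expensive" function, additively separable on purpose
    return float(x[0] ** 2 + np.sin(x[1]) + 0.5 * x[2])

lower, upper, center = [-2, -2, -2], [2, 2, 2], [0.0, 0.0, 0.0]
surrogate = build_first_order_rbf_hdmr(f, center, lower, upper)
x_test = np.array([1.0, 0.5, -1.0])
print(surrogate(x_test), f(x_test))
```

For this additively separable toy function the first-order expansion is essentially exact, which makes the example easy to check; real HEB problems require the adaptive higher-order machinery described in the abstract.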
|