131

Applications de la théorie de l'information à l'apprentissage statistique / Applications of Information Theory to Machine Learning

Bensadon, Jérémy 02 February 2016
We study two different topics, using insight from information theory in both cases: 1) Context Tree Weighting is a text compression algorithm that efficiently computes the Bayesian combination of all visible Markov models: we build a "context tree", with deeper nodes corresponding to more complex models, and the mixture is computed recursively, starting with the leaves. We extend this idea to a more general context, also encompassing density estimation and regression, and we investigate the benefits of replacing regular Bayesian inference with switch distributions, which put a prior on sequences of models instead of single models. 2) Information Geometric Optimization (IGO) is a general framework for black-box optimization that recovers several state-of-the-art algorithms, such as CMA-ES and xNES. The initial problem is transferred to a Riemannian manifold, yielding a parametrization-invariant first-order differential equation. However, since time is discretized in practice, this invariance only holds up to first order. We introduce the Geodesic IGO (GIGO) update, which uses this Riemannian manifold structure to define a fully parametrization-invariant algorithm. Thanks to Noether's theorem, we obtain a first-order differential equation satisfied by the geodesics of the statistical manifold of Gaussians, making it possible to compute the corresponding GIGO update. Finally, we show that while GIGO and xNES are different in general, it is possible to define a new "almost parametrization-invariant" algorithm, Blockwise GIGO, that recovers xNES from abstract principles.
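The recursive mixture this abstract describes is compact enough to sketch. Below is a minimal implementation of the classic binary Context Tree Weighting recursion with Krichevsky-Trofimov leaf estimators, offered as an illustration only: it is the textbook algorithm, not the thesis's extension to density estimation, regression, or switch distributions, and the counting scheme and function names are assumptions.

```python
# Minimal sketch of binary Context Tree Weighting (classic form, not the
# thesis's extension): deeper tree nodes = more complex Markov models, and
# the Bayesian mixture is computed recursively from the leaves up.

def kt_probability(zeros, ones):
    """Krichevsky-Trofimov probability of a sequence with these 0/1 counts."""
    p, a, b = 1.0, 0, 0
    for _ in range(zeros):
        p *= (a + 0.5) / (a + b + 1.0)
        a += 1
    for _ in range(ones):
        p *= (b + 0.5) / (a + b + 1.0)
        b += 1
    return p

def gather_counts(bits, max_depth):
    """Count (zeros, ones) following each context of length <= max_depth."""
    counts = {}
    for t in range(max_depth, len(bits)):
        for d in range(max_depth + 1):
            ctx = tuple(reversed(bits[t - d:t]))  # most recent symbol first
            z, o = counts.get(ctx, (0, 0))
            counts[ctx] = (z + (bits[t] == 0), o + (bits[t] == 1))
    return counts

def ctw(counts, ctx=(), depth=0, max_depth=3):
    """Weighted probability P_w at a node: mix the local KT estimate with
    the product of the children's weighted probabilities."""
    p_e = kt_probability(*counts.get(ctx, (0, 0)))
    if depth == max_depth:
        return p_e
    p_children = (ctw(counts, ctx + (0,), depth + 1, max_depth) *
                  ctw(counts, ctx + (1,), depth + 1, max_depth))
    return 0.5 * p_e + 0.5 * p_children

bits = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(ctw(gather_counts(bits, 3)))  # alternating bits are highly predictable
```

Each internal node gives half its prior weight to "stop here" (a memoryless model) and half to "split by one more context symbol", which is exactly a Bayesian combination over all Markov models of bounded depth.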
132

Sledovaní současného stavu testovacích technik ve vybrané společnosti / Testing Techniques in Continuous Integration System

Shpak, Yuliia January 2020
With the development of information, communication and technology, modern industrial control systems (ICS) increasingly face the question of automated testing to ensure system stability and security. For this reason, testing has become one of the most important parts of the life cycle of all software. In this master's thesis I consider the possibility of using existing testing methods and tools to achieve sufficient software quality and security in continuous integration systems.
133

StockholmsOperan, Galärvarvet : En smal sak / The Stockholm Opera, Galärvarvet: A narrow affair

Liljeqvist, Björn January 2011
Galärvarvet is in many ways a great site for a new and prominent cultural building in Stockholm. At the same time, the complex and extensive program of an opera house poses a great challenge. In this project the conflict between program and site is resolved with a long, narrow building that preserves natural values and existing movement patterns.
134

Development of grey-box models for simulating heating consumption in buildings : A study applying system identification methodology to a physics-based framework

Klockar, Zack January 2021
This study models the energy used for heating in buildings by applying system identification methodology. The model development is grounded in physics to provide guidance and interpretability during evaluation. Time series of heating demand, outdoor temperature, indoor temperature and solar irradiation are used for modelling. Evaluation is done through simulation and relies on quantitative measures, residual analysis and visual inspection of model output. Through model development, the study seeks to extract information about the physical properties that drive heating demand in buildings. Seven buildings located in the same geographic area are studied. It is found that linear ARX models can simulate heating demand with high precision but at times low accuracy. A common model structure can be identified across buildings, indicating that physical properties shared between buildings can be identified through this methodology. A sensitivity analysis is conducted to derive the contributions of the model constituents to the simulation results. Two buildings were also modelled as OE models; these performed better than the corresponding ARX models but were deemed more difficult to use for the purpose of this study. The study finds difficulties in incorporating aggregated time series of indoor temperature, which could be explored further in future studies to enable more detailed interpretations.
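For concreteness, here is a minimal sketch of the kind of linear ARX fit this study evaluates, done by ordinary least squares on synthetic data. The signal names, lag orders, and single-output layout are illustrative assumptions, not the identified model structure from the thesis.

```python
# Hedged sketch: least-squares fit of a linear ARX model
#   y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b-terms in each input,
# with y standing in for heating demand and u for weather inputs.
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Return the ARX coefficients [a_1..a_na, b_{1,1}..b_{m,nb}]."""
    N, m = len(y), u.shape[1]
    k0 = max(na, nb)
    Phi = np.array([
        [y[t - i] for i in range(1, na + 1)] +
        [u[t - i, j] for j in range(m) for i in range(1, nb + 1)]
        for t in range(k0, N)
    ])
    theta, *_ = np.linalg.lstsq(Phi, y[k0:], rcond=None)
    return theta

# Synthetic check: two inputs (say, outdoor temperature and solar irradiation).
rng = np.random.default_rng(0)
u = rng.normal(size=(500, 2))
y = np.zeros(500)
for t in range(2, 500):
    y[t] = (0.6 * y[t - 1] - 0.1 * y[t - 2]
            + 0.8 * u[t - 1, 0] + 0.3 * u[t - 1, 1]
            + 0.05 * rng.normal())
print(np.round(fit_arx(y, u), 2))  # ~ [0.6, -0.1, 0.8, 0.0, 0.3, 0.0]
```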
135

Fault detection for the Benfield process using a closed-loop subspace re-identification approach

Maree, Johannes Philippus 26 November 2009
Closed-loop system identification and fault detection and isolation are the two fundamental building blocks of process monitoring. Efficient and accurate process monitoring increases plant availability and utilisation. This dissertation investigates a subspace system identification and fault detection methodology for the Benfield process, used by Sasol, Synfuels in Secunda, South Africa, to remove CO2 from CO2-rich tail gas. Subspace identification methods originated at the intersection of system theory, geometry and numerical linear algebra, which makes them computationally efficient tools for estimating system parameters. Subspace identification methods are classified as black-box identification techniques: they do not rely on a-priori process information and estimate the process model structure and order automatically. Typical subspace identification algorithms use a non-parsimonious model formulation, with extra terms in the model that appear to be non-causal (stochastic noise components). These extra terms are included to conveniently perform subspace projection, but they are the cause of inflated variance in the estimates and are partially responsible for the loss of closed-loop identifiability. The subspace identification methodology proposed in this dissertation incorporates two successive LQ decompositions, to remove stochastic components and to obtain state-space models of the plant respectively. The stability of the identified plant is further guaranteed by exploiting the shift-invariance property of the extended observability matrix and appending a block of zeros to the shifted extended observability matrix. It is shown that the spectral radii of the identified system matrices all lie within the unit circle when the system matrices are derived from the newly appended extended observability matrix. The proposed subspace identification methodology is validated and verified by re-identifying the Benfield process operating in closed-loop, with an RMPCT controller, using measured closed-loop process data. Models identified from data measured on the Benfield process operating in closed-loop with an RMPCT controller produced validation data fits of 65% and higher. From the residual analysis results, it was concluded that the proposed subspace identification method produces models that are accurate in predicting future outputs and that represent a wide variety of process inputs. A parametric fault detection methodology is proposed that monitors the system parameters estimated by the subspace identification methodology. The fault detection methodology is based on the monitoring of parameter discrepancies, where sporadic parameter deviations are detected as faults. Extended Kalman filter theory is implemented to estimate system parameters, instead of system states, as new process data become available. The extended Kalman filter needs accurate initial parameter estimates and is thus periodically updated by the subspace identification methodology whenever a new, more accurate set of parameters has been identified. The proposed fault detection methodology is validated and verified by monitoring the process behaviour of the Benfield process. Faults that were monitored for and detected include foaming, flooding and sensor faults. Initial process parameters identified by the subspace method can be tracked efficiently using an extended Kalman filter. This enables the fault detection methodology to identify process parameter deviations with a sensitivity of 2% or higher: a 2% parameter deviation will be detected, which greatly enhances fault detection efficiency and sensitivity. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
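The parameter-monitoring idea can be sketched compactly: treat the identified parameter vector as the state of a random-walk model, filter noisy re-estimates, and flag relative deviations beyond a threshold. Because the toy observation map below is linear in the parameters, an ordinary Kalman filter suffices in place of the dissertation's extended Kalman filter; all matrices, noise levels, and the placement of the 2% threshold are assumptions for illustration.

```python
# Hedged sketch of parameter-deviation monitoring: treat the identified
# parameters as the state of a random-walk model (F = I), filter noisy
# re-estimates (H = I), and flag sporadic deviations from the subspace
# baseline as faults. Simplified to a linear Kalman filter.
import numpy as np

def monitor_parameters(theta0, observations, q=1e-6, r=1e-3, tol=0.02):
    theta0 = np.asarray(theta0, dtype=float)
    n = theta0.size
    theta, P = theta0.copy(), np.eye(n) * 1e-3
    Q, R = np.eye(n) * q, np.eye(n) * r
    faults = []
    for k, z in enumerate(observations):
        P = P + Q                              # predict: random-walk dynamics
        K = P @ np.linalg.inv(P + R)           # update: identity observation
        theta = theta + K @ (np.asarray(z) - theta)
        P = (np.eye(n) - K) @ P
        rel = np.abs(theta - theta0) / np.maximum(np.abs(theta0), 1e-12)
        if np.any(rel > tol):                  # e.g. a 2% relative deviation
            faults.append(k)
    return theta, faults

# A slowly drifting second parameter should trip the flag partway through.
rng = np.random.default_rng(1)
base = np.array([1.0, -0.5, 0.25])
obs = [base + rng.normal(scale=0.01, size=3) + [0.0, 0.0005 * k, 0.0]
       for k in range(200)]
theta, faults = monitor_parameters(base, obs)
print(faults[:3])  # first sample indices where the deviation exceeds 2%
```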
136

Approaches for Automated Software Security Evaluations

Poller, Andreas 23 October 2006
As a consequence of the rapidly increasing interconnection of computer systems in networks, access to the programs run on these machines is becoming more and more independent of physical access to them. The formerly existing physical access controls therefore have to be replaced by logical access controls, which ensure that computer systems are used only for their intended purposes and that the stored data are handled securely and confidentially. The effectiveness of such logical protection mechanisms is verified by software security tests, which check whether security functions can be bypassed, in particular by exploiting software errors. This diploma thesis examines approaches for the automation of software security tests with regard to their effectiveness and applicability. The results are used to introduce a requirements and evaluation model for the qualitative analysis of such security evaluation automation approaches. The thesis further argues that a highly automated software security evaluation is not a sensible development goal, given the cost-benefit ratio to be expected from pursuing it. Building on this assertion, the thesis discusses how the capabilities of a human tester and a software evaluation assistance system can be combined in an efficient test process. Based on these considerations, the design and implementation of a software security evaluation system, developed prototypically for this diploma thesis, are described. The system involves the human tester closely in the evaluation process, in particular in the construction of test algorithms, but provides automation where this is expedient. The proof-of-concept prototype is also evaluated with regard to its practical applicability.
137

Rätten till algoritmisk förståelse : Om insyn i automatiserat beslutsfattande inom offentlig förvaltning / The right to algorithmic understanding : Transparency in automated decision making within public administration

Nygårds, Stina January 2022
Automated decision making (ADM) is not a new phenomenon within public administration. Millions of automated decisions are made each year, including matters on parental benefits, social security benefits and income tax. ADM is used increasingly within both government agencies and municipal agencies. In light of this development, it becomes vital to have legal mechanisms in place that ensure transparency in automated decision making, both so that decisions are correct and so that the systems comply with procedural safeguards such as objectivity and impartiality. This raises the question of whether new legal solutions are needed to secure the transparency of automated decisions. Current ADM systems within public administration are based on static algorithms that cannot change unless they are programmed to do so. Robotic Process Automation (RPA) is such a technology, commonly used by municipal agencies. Given the development within artificial intelligence (AI), ADM systems based on AI might be the next step. However, the extensive problems with the opacity of the AI "black box" make this technology rather unfit for decision making within public administration. The legal system provides different ways for citizens to obtain insight into and control over public administration. The principle of public access to information provides a right for all to access official documents. The Administrative Procedure Act (2017:900) provides access to information for the party concerned with a matter, and the EU General Data Protection Regulation (GDPR) gives data subjects a right to information. These rights to information can be restricted, for example by provisions on secrecy. However, the legal rights to information do not secure comprehensive or direct transparency of the workings of ADM systems within public administration. The kind of transparency provided is rather transparency by proxy: an indirect transparency that substitutes for a direct and functional one. For example, the provisions of the Administrative Procedure Act are not adapted to ADM, and the application of the right to access official documents entails inconsistencies. Even if information is provided to a citizen, there is no guarantee that, for example, a description of an algorithm will be understood. To create direct and functional transparency of automated decisions and the workings of ADM systems, it is vital for law and technology to evolve in interplay. A general discussion of public access to ADM systems and their implications for public trust and the rule of law is also essential.
138

A Study on Behaviors of Machine Learning-Powered Intrusion Detection Systems under Normal and Adversarial Settings

Pujari, Medha Rani 15 June 2023
No description available.
139

GENERATIVE MODELS WITH MARGINAL CONSTRAINTS

Bingjing Tang (16380291) 16 June 2023
Generative models form powerful tools for learning data distributions and simulating new samples. Recent years have seen significant advances in the flexibility and applicability of such models, with Bayesian approaches like nonparametric Bayesian models and deep neural network models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) finding use in a wide range of domains. However, the black-box nature of these models means that they are often hard to interpret, and they often come with modeling implications that are inconsistent with side knowledge derived from domain expertise. This thesis studies situations where the modeler has side knowledge represented as probability distributions on functionals of the objects being modeled, and we study methods to incorporate this kind of side knowledge into flexible generative models. The dissertation covers three main parts.

The first part focuses on incorporating a special case of the aforementioned side knowledge into flexible nonparametric Bayesian models. Practitioners often have additional distributional information about a subset of the coordinates of the observations being modeled. The flexibility of nonparametric Bayesian models usually implies incompatibility with this side information, which makes it necessary to develop methods to incorporate such side knowledge into flexible nonparametric Bayesian models. We design a specialized generative process to build in this side knowledge and propose a novel sigmoid Gaussian process conditional model. We also develop a corresponding posterior sampling method based on data augmentation to overcome a doubly intractable problem. We illustrate the efficacy of our proposed constrained nonparametric Bayesian model in a variety of real-world scenarios, including modeling environmental and earthquake data.

The second part of the dissertation discusses neural network approaches to satisfying the general side knowledge described above; here the generative models considered broaden to black-box models. We formulate the side knowledge incorporation problem as a constrained divergence minimization problem and propose two scalable neural network approaches as its solution. We demonstrate their practicality using various synthetic and real examples.

The third part of the dissertation concentrates on a specific generative model of individual pixels of fMRI data constructed from a latent group image. There are usually two kinds of side knowledge about the latent group image: spatial structure and partial activation zones. The former can be captured by modeling the prior for the group image with Markov random fields; the latter, often obtained from previous related studies, is left for future research. We propose a novel Bayesian model with Markov random fields and aim to compute the maximum a posteriori estimate of the group image. We also derive a variational Bayes algorithm to overcome local optima in the optimization.
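As a toy illustration of the overall theme (and not of the thesis's sigmoid Gaussian process or neural network methods), the sketch below folds side knowledge about a marginal, here a domain-supplied value for the mean of one coordinate, into an otherwise unconstrained fit through a quadratic penalty; the penalty weight and the closed-form update are illustrative assumptions.

```python
# Hedged sketch: incorporate side knowledge about a marginal into a fit.
# We estimate a mean vector by maximum likelihood (the sample mean) while
# softly constraining coordinate 0 toward a domain-supplied value m0.
import numpy as np

def fit_mean_with_side_knowledge(X, m0, lam=25.0):
    """Minimize sum_i ||x_i - mu||^2 + lam * (mu[0] - m0)^2 over mu."""
    n = X.shape[0]
    mu = X.mean(axis=0)
    # The penalized objective is quadratic, so coordinate 0 has a closed form:
    mu[0] = (n * mu[0] + lam * m0) / (n + lam)
    return mu

rng = np.random.default_rng(2)
X = rng.normal(loc=[0.3, 1.0], scale=1.0, size=(50, 2))
print(fit_mean_with_side_knowledge(X, m0=0.0))  # coord 0 shrunk toward 0.0
```

Larger penalty weights trade fidelity to the data for fidelity to the side knowledge; the thesis's constrained divergence minimization formalizes this trade-off for black-box generative models.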
140

MULTI-FIDELITY MODELING AND MULTI-OBJECTIVE BAYESIAN OPTIMIZATION SUPPORTED BY COMPOSITIONS OF GAUSSIAN PROCESSES

Homero Santiago Valladares Guerra (15383687) 01 May 2023
Practical design problems in engineering and science involve the evaluation of expensive black-box functions, the optimization of multiple, often conflicting, targets, and the integration of data generated by multiple sources of information, e.g., numerical models with different levels of fidelity. If not properly handled, the complexity of these design problems can lead to lengthy and costly development cycles. In recent years, Bayesian optimization has emerged as a powerful alternative for solving optimization problems that involve the evaluation of expensive black-box functions. Bayesian optimization has two main components: a probabilistic surrogate model of the black-box function and an acquisition function that drives the optimization. Its ability to find high-performance designs within a limited number of function evaluations has attracted the attention of many fields, including the engineering design community. The practical relevance of strategies able to fuse information from different sources, together with the need to optimize multiple targets, has motivated the development of multi-fidelity modeling techniques and multi-objective Bayesian optimization methods. A key component in the vast majority of these methods is the Gaussian process (GP), due to its flexibility and mathematical properties.

The objective of this dissertation is to develop new approaches in the areas of multi-fidelity modeling and multi-objective Bayesian optimization. To achieve this goal, this study explores the use of linear and non-linear compositions of GPs to build probabilistic models for Bayesian optimization. Additionally, motivated by the rationale behind well-established multi-objective methods, this study presents a novel acquisition function for solving multi-objective optimization problems in a Bayesian framework. The dissertation presents four contributions. First, the auto-regressive model, one of the most prominent multi-fidelity models in engineering design, is extended to include informative mean functions that capture prior knowledge about the global trend of the sources; this additional information enhances the predictive capabilities of the surrogate. Second, the non-linear auto-regressive Gaussian process (NARGP) model, a non-linear multi-fidelity model, is integrated into a multi-objective Bayesian optimization framework. The NARGP model makes it possible to leverage sources with non-linear cross-correlations to enhance the performance of the optimization process. Third, GP classifiers, which employ non-linear compositions of GPs, are combined with conditional probabilities to solve multi-objective problems. Finally, a new multi-objective acquisition function is presented. This function employs two terms: a distance-based metric, the expected Pareto distance change, which captures the optimality of a given design, and a diversity index that prevents the evaluation of non-informative designs. The proposed acquisition function generates informative landscapes that produce Pareto front approximations that are both broad and diverse.
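The distance-based intuition behind the proposed acquisition function can be illustrated with a toy score: how far a candidate's predicted objective vector sits from the current Pareto front, and zero if it is dominated. The crude Euclidean distance below is a stand-in assumed for illustration, not the dissertation's expected Pareto distance change, and the diversity index is omitted.

```python
# Hedged sketch: score a candidate objective vector by its distance to the
# current Pareto front (minimization), a crude proxy for how much adding it
# would shift the front. Dominated candidates score zero.
import numpy as np

def pareto_mask(F):
    """Boolean mask selecting the non-dominated rows of F."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(F, i, axis=0)
        dominated = np.any(np.all(others <= F[i], axis=1) &
                           np.any(others < F[i], axis=1))
        mask[i] = not dominated
    return mask

def front_shift(F, f_new):
    """Distance from f_new to the current front; 0 if f_new is dominated."""
    front = F[pareto_mask(F)]
    if np.any(np.all(front <= f_new, axis=1) & np.any(front < f_new, axis=1)):
        return 0.0
    return float(np.min(np.linalg.norm(front - f_new, axis=1)))

F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])  # [3,3] dominated
print(front_shift(F, np.array([1.5, 1.5])))  # non-dominated: positive score
print(front_shift(F, np.array([5.0, 5.0])))  # dominated: 0.0
```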
