111

Theoretical and field studies of fluid flow in fractured rocks

Hsieh, P. A. (Paul A.) January 1983 (has links)
A comprehensive methodology for hydraulic testing in fractured rocks is presented. The methodology uses geological and geophysical information as background and consists of conventional single-hole packer tests in conjunction with a newly developed cross-hole packer test. The cross-hole method involves injecting fluid into a packed-off interval in one borehole and monitoring hydraulic head variations in packed-off intervals in neighboring boreholes. Borehole orientation is unrelated to the principal hydraulic conductivity directions, which therefore need not be known a priori. The method yields complete information about the directional nature of hydraulic conductivity in three dimensions, on a scale comparable to the distance between the test boreholes. In addition to providing all six components of the hydraulic conductivity tensor, the cross-hole method also yields the specific storage of the fractured rock mass. While the theory behind the method treats the rock as a homogeneous, anisotropic porous medium, the test provides detailed information about the degree to which such assumptions are actually valid in the field. The method may also be useful as a tool for detecting, in the vicinity of the test area, major fractures or faults that have not been intercepted by boreholes. Preliminary results from a granitic site near Oracle in southern Arizona are presented, together with details of the instrumentation designed and constructed specifically for that site.
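The anisotropic-medium idealization behind the cross-hole test can be made concrete: a symmetric 3x3 conductivity tensor has exactly six independent components, and once those are known, the conductivity along any direction follows. A minimal numpy sketch (tensor values purely illustrative, not from the thesis):

```python
import numpy as np

# Symmetric hydraulic conductivity tensor (m/s): six independent
# components. Values here are illustrative only, not data from the thesis.
K = np.array([[3.0e-7, 5.0e-8, 2.0e-8],
              [5.0e-8, 1.5e-7, 1.0e-8],
              [2.0e-8, 1.0e-8, 8.0e-8]])

# Principal conductivities and directions: eigendecomposition of K.
principal_K, principal_dirs = np.linalg.eigh(K)

def directional_conductivity(K, n):
    """Conductivity in the direction of a unit hydraulic gradient n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return n @ K @ n

print("principal conductivities:", principal_K)
print("conductivity along x:", directional_conductivity(K, [1.0, 0.0, 0.0]))
```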
112

Integrace webové služby ISKaM s Centrální databází VUT pro potřeby účtování mikropoplatků / Integration of ISKaM Web Service and BUT Central Database for Billing of Micropayments

Studený, Stanislav Unknown Date (has links)
This work deals with consuming the web service provided by the Information System of Dormitories and Refectories (ISKaM) at BUT through the BUT Central Database environment. The thesis discusses several Java frameworks for consuming web services, and then the design and implementation of a micropayment billing interface based on the chosen framework, with particular attention to security.
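The thesis evaluates Java frameworks, but the request/response shape of consuming such a SOAP service is easy to sketch; here in Python with the zeep library for brevity. The WSDL URL, operation name, and parameters are hypothetical, invented purely for illustration:

```python
from zeep import Client

# Hypothetical WSDL endpoint -- the real ISKaM service address and
# operation names are internal to BUT and not given in the abstract.
client = Client("https://iskam.example.vutbr.cz/ws?wsdl")

# A micropayment charge: debit a small amount from a student account.
# Operation and argument names are invented for this sketch.
response = client.service.ChargeMicropayment(
    personId="123456",
    amountCzk=12.50,
    description="printing",
)
print(response)
```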
113

Quantification vectorielle en grande dimension : vitesses de convergence et sélection de variables / High-dimensional vector quantization : convergence rates and variable selection

Levrard, Clément 30 September 2014 (has links)
This thesis first studies the distortion of the quantizer built from an n-sample of a probability distribution over a vector space with the k-means algorithm. More precisely, it aims to give oracle inequalities on the difference between the distortion of the k-means quantizer and the minimum distortion achievable by a k-point quantizer, describing precisely the influence of the natural parameters of the quantization problem: the support of the distribution, the size k of the quantizer's set of image points, the dimension of the underlying vector space, and the sample size n. After a brief summary of previous work on the topic, the thesis establishes, in the continuous-density case, an equivalence between the conditions previously proposed for the excess distortion to decrease fast with the sample size and a technical condition resembling the conditions required in supervised classification to achieve fast convergence rates. It is then proved that, under this technical condition, the excess distortion achieves a fast convergence rate of 1/n in expectation. Next, an easily interpretable margin condition is introduced and shown to imply the technical condition above. Several classical examples of distributions satisfying the margin condition are given, such as Gaussian mixtures. Under the margin condition, an oracle inequality on the excess distortion of the k-means quantizer is derived: the sample size enters through a 1/n factor, the number of image points k enters through geometric quantities associated with the distribution, and, surprisingly, the dimension of the underlying space seems to play no role. This last point allows the results to be extended to Hilbert spaces, the natural framework for curve quantization. However, effective high-dimensional quantization often requires a variable-selection step in practice, which motivates the second part of the thesis: a Lasso-type procedure adapted to vector quantization, where the Lasso penalty applies to the set of image points of the quantizer in order to obtain sparse image points. Under the margin condition, several theoretical guarantees are established for the resulting Lasso k-means quantizer: its image points are close to those of a naturally sparse quantizer realizing a trade-off between quantization error and the support size of the image points, and its excess distortion is of order 1/n^(1/2) in the sample size, with the dependence on the other parameters given explicitly. These theoretical predictions are illustrated by numerical experiments that broadly confirm the expected properties of such a sparse quantizer, while highlighting some drawbacks of the practical implementation of the procedure.
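The central quantity is easy to compute in practice. A minimal sketch of the empirical distortion of a k-means quantizer built from an n-sample, using a two-component Gaussian mixture (one of the margin-condition examples mentioned above); the thesis bounds the gap between this distortion and the best achievable by any k-point quantizer:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative data: a Gaussian mixture, one of the classes of
# distributions satisfying the margin condition discussed above.
X = np.vstack([
    rng.normal(loc=-3.0, scale=0.5, size=(500, 2)),
    rng.normal(loc=+3.0, scale=0.5, size=(500, 2)),
])

k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Empirical distortion: mean squared distance to the nearest image point.
distortion = km.inertia_ / len(X)
print(f"empirical distortion with k={k}: {distortion:.4f}")
```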
114

Att skapa en fleranvändarmiljö : En kvalitativ fallstudie som undersöker tekniska aspekter och användarens perspektiv / Create a multi-user environment : A qualitative case study that examines technical aspects and the user's perspective

Geijersson, Hampus, Strandberg, Erik January 2018 (has links)
This study examined several aspects in order to create a foundation for developers to improve a multi-user environment. Beyond those aspects, the purpose was to evaluate at a conceptual level how multi-user operation can be implemented technically, and to examine how users are affected by such a change. The study is based on multiple workshops and interviews; on two occasions the authors were also trained in how the system is built and how it is used. Based on this, evaluation criteria were weighed against three candidate techniques: Mutex, Semaphores, and Oracle Tuxedo, and conceptual models were drawn up to visualize the solutions. All three techniques satisfy the technical constraints of the case study, such as the C# programming language and an Oracle database. The appropriate level of the solution was worked out in cooperation between users and developers: it should benefit the users while not being too complex to implement, and performance must not degrade significantly. Users keep a similar way of working with reduced coordination requirements, and they can collaborate concurrently on the data sets they need. The technique that best fit these criteria was Mutex.
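The winning technique is simple to illustrate. A minimal sketch of mutex-guarded access to a shared data set (shown with Python's threading module rather than the C# of the case study; the data structure is invented for illustration):

```python
import threading

# Shared data set and a mutex guarding it. In the case study the
# equivalent construct would be a C# lock/Mutex; Python's threading.Lock
# illustrates the same mutual-exclusion idea.
dataset = {}
dataset_lock = threading.Lock()

def update_record(key, value):
    # Only one user (thread) may modify the shared data set at a time;
    # others block until the mutex is released.
    with dataset_lock:
        dataset[key] = value

threads = [threading.Thread(target=update_record, args=(f"row{i}", i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(dataset)
```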
115

Databázová nezávislost jádra systému pro dolování z dat FIT-Miner / Data Independency of the FIT-Miner Data Mining System

Novák, Ondřej January 2013 (has links)
The FIT-Miner data mining system currently depends on a single specific DBMS. This master's thesis analyzes the parts of the implementation that work with the database, along with the data mining modules and functions. It then presents a set of changes that will allow FIT-Miner to work with other DBMSs, and finally describes the implementation of those changes.
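One common way to achieve this kind of DBMS independence, sketched here as an assumption about the general approach rather than the thesis's actual design, is to route all database access through an abstraction layer so that only the connection URL changes between backends (table and column names below are hypothetical):

```python
from sqlalchemy import create_engine, text

# Connection URLs for two interchangeable backends; swapping DBMS
# means changing only this configuration, not the application code.
URLS = {
    "oracle":     "oracle+oracledb://user:pw@host:1521/?service_name=orcl",
    "postgresql": "postgresql+psycopg2://user:pw@host:5432/fitminer",
}

engine = create_engine(URLS["postgresql"])

with engine.connect() as conn:
    # Portable SQL; dialect-specific details are handled by the layer.
    rows = conn.execute(text("SELECT id, name FROM mining_task"))
    for row in rows:
        print(row.id, row.name)
```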
116

TSQL2 interpret nad post-relačními databázemi v Oracle Database / Processor of TSQL2 on Post-Relational Databases in Oracle Database

Szkandera, Jan January 2011 (has links)
This thesis focuses on temporal databases and their multimedia and spatial extensions. The introduction summarizes results in temporal database research: the key concepts of the TSQL2 language and the post-relational extensions of the Oracle database are introduced. The main part of the thesis is the design of an interpreter as a layer between the user application and the relational database. Next, the enforcement of integrity constraints in temporal databases is discussed. The result of this work is a functional interpreter of the TSQL2 language able to store post-relational data.
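The layering idea can be sketched as a query rewrite: the interpreter accepts TSQL2 and emits plain SQL over tables where valid time is materialized as a pair of columns. A hypothetical illustration (the TSQL2 syntax shown is approximate, and the translation is not the thesis's actual rule set):

```python
# A valid-time TSQL2 query as the user application might issue it
# (approximate syntax, for illustration only):
tsql2_query = """
SELECT name
FROM Employee E
WHERE VALID(E) OVERLAPS PERIOD '[2010-01-01 - 2010-12-31]'
"""

# One possible relational rewrite over a table that materializes valid
# time as a pair of columns (vt_begin, vt_end):
sql_rewrite = """
SELECT name
FROM Employee
WHERE vt_begin <= DATE '2010-12-31'
  AND vt_end   >= DATE '2010-01-01'
"""
print(sql_rewrite.strip())
```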
117

Approches nouvelles des modèles GARCH multivariés en grande dimension / New approaches for high-dimensional multivariate GARCH models

Poignard, Benjamin 15 June 2017 (has links)
This thesis contributes to high-dimensional statistics for multivariate GARCH processes. First, the author proposes a new dynamic called vine-GARCH for correlation processes parameterized by an undirected graph called a vine. The approach directly generates positive-definite matrices and fosters parsimony. After establishing existence and uniqueness results for the stationary solutions of the vine-GARCH model, the author studies its asymptotic properties. He then proposes a general framework of penalized M-estimators for dependent processes and focuses on the asymptotic properties of the adaptive Sparse Group Lasso estimator. The high-dimensional setting is treated by considering the case where the number of parameters diverges with the sample size. The asymptotic results are illustrated through simulation experiments. Finally, within this framework, the author proposes to foster sparsity in the dynamics of variance-covariance matrix processes. To do so, the class of multivariate ARCH models is used, and the corresponding processes are estimated by penalized ordinary least squares.
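The final step, penalized least-squares estimation of a multivariate ARCH model, can be sketched directly. An ARCH(1)-type specification makes the unique entries of r_t r_t' an affine function of their lagged values, so each coordinate can be fit by a Lasso regression (synthetic data and penalty, purely illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Illustrative returns for d assets (simulated noise, not real data).
d, T = 3, 500
r = rng.normal(size=(T, d))

# Stack the unique entries of r_t r_t' (the "vech"); an ARCH(1)-type
# model makes today's vech an affine function of yesterday's, so each
# coordinate can be fit by a penalized least-squares regression.
iu = np.triu_indices(d)
vech = np.array([np.outer(r[t], r[t])[iu] for t in range(T)])

X, Y = vech[:-1], vech[1:]
models = [Lasso(alpha=0.05).fit(X, Y[:, j]) for j in range(Y.shape[1])]
sparsity = [int(np.sum(m.coef_ != 0)) for m in models]
print("non-zero coefficients per equation:", sparsity)
```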
118

TEST ORACLE AUTOMATION WITH MACHINE LEARNING : A FEASIBILITY STUDY

Imamovic, Nermin January 2018 (has links)
A train is a complex system in which every sub-system has an important role; if one sub-system does not work as it should, the correctness of the whole train can be uncertain. To ensure the system works properly, each sub-system must be tested individually and then integrated into the whole. Each sub-system consists of different modules with different functionalities that must be tested, and testing different functionalities often requires different approaches. Some functionalities require domain knowledge from a human expert, such as the classification of signals in different use cases in Propulsion and Controls (PPC) at Bombardier Transportation. For this reason, we need to simulate the use of expert knowledge in a given domain. We investigate the use of machine learning techniques for solving such cases and for creating a system that automatically classifies different signals using previously captured human knowledge. This case study was conducted at Bombardier Transportation (BT) in Västerås, in the Train Control Management System (TCMS) and Propulsion and Controls (PPC) departments, where data were collected, analyzed, and evaluated. We propose a method for solving the test oracle problem based on a machine learning approach for a certain use case, and we explain the steps that can be used to solve the test oracle problem where signals are part of the verdict process.
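The core idea, a classifier standing in for the human expert's verdict, can be sketched in a few lines. Everything below is synthetic and illustrative; the actual study used recorded PPC signals and expert labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Illustrative stand-in for expert-labeled signal data: feature vectors
# extracted from recorded signals, with the expert's verdict as label.
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = pass, 0 = fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The trained classifier plays the role of the test oracle: given a new
# signal, it predicts the verdict an expert would have assigned.
oracle = RandomForestClassifier(n_estimators=100, random_state=0)
oracle.fit(X_tr, y_tr)
print("oracle agreement with held-out expert labels:",
      oracle.score(X_te, y_te))
```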
119

PAC-Bayesian estimation of low-rank matrices / Estimation PAC-bayésienne de matrices de faible rang

MAI, The Tien 23 June 2017 (has links)
The first two parts of the thesis study pseudo-Bayesian estimators for the problems of matrix completion and quantum tomography. For each problem, a prior distribution that induces low-rank matrices is proposed, and the statistical performance is examined: in each case, a rate of convergence for the estimator is proved. The analysis relies essentially on PAC-Bayesian oracle inequalities. An MCMC algorithm to compute the estimator is also proposed, and its behavior is tested on simulated and real data sets. The last part of the thesis studies the lifelong learning problem, a transfer-learning scenario in which information is retained and transferred from one learning task to another. An online formalization of the problem, in a sequential-prediction setting, is proposed, together with a meta-algorithm for transferring information that relies on exponentially weighted aggregation. A regret bound for this strategy is proved. An important advantage of the analysis is that it makes no assumption on the form of the learning algorithms used within each task. The part ends with the study of some examples: a finite set of predictors, single-index models, and dictionary learning.
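The aggregation rule at the heart of the meta-algorithm is exponential weighting of experts by their cumulative losses. A minimal sketch with synthetic constant-prediction experts (learning rate and losses purely illustrative):

```python
import numpy as np

def ewa_weights(cum_losses, eta):
    """Exponentially weighted aggregation: weights proportional to exp(-eta * loss)."""
    w = np.exp(-eta * (cum_losses - cum_losses.min()))  # shift for stability
    return w / w.sum()

rng = np.random.default_rng(3)
K, T, eta = 5, 200, 0.5
expert_preds = rng.normal(size=K)   # each expert predicts a fixed constant
cum_losses = np.zeros(K)
meta_loss = 0.0

for t in range(T):
    truth = rng.normal(scale=0.1)               # the value to predict this round
    w = ewa_weights(cum_losses, eta)            # weights use past losses only
    aggregate = w @ expert_preds                # the meta-algorithm's prediction
    meta_loss += (aggregate - truth) ** 2
    cum_losses += (expert_preds - truth) ** 2   # update each expert's record

print(f"meta loss {meta_loss:.1f} vs best expert {cum_losses.min():.1f}")
```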
120

A Query, a Minute: Evaluating Performance Isolation in Cloud Databases

Kiefer, Tim, Schön, Hendrik, Habich, Dirk, Lehner, Wolfgang 02 February 2023 (has links)
Several cloud providers offer relational databases as part of their portfolio. It is, however, not obvious how resource virtualization and sharing, which are inherent to cloud computing, influence the performance and predictability of these cloud databases. Cloud providers give little to no guarantees for consistent execution or isolation from other users. To evaluate the performance isolation capabilities of two commercial cloud databases, we ran a series of experiments over the course of a week (a query, a minute) and report variations in query response times. As a baseline, we ran the same experiments on a dedicated server in our data center. The results show that in the cloud, single outliers are up to 31 times slower than the average. Additionally, one can see a point in time after which the average performance of all executed queries improves by 38%.
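The measurement loop behind the experiment is straightforward to reproduce. A minimal sketch of the query-a-minute pattern (with a placeholder query and a shortened interval so the sketch runs quickly):

```python
import statistics
import time

def run_query():
    """Placeholder for executing the benchmark query against the database."""
    time.sleep(0.01)  # stand-in for real query latency

# The experiment's pattern -- one query per minute, recording response
# times -- reduced here to a few iterations with a shortened interval.
INTERVAL_S = 1      # 60 in the actual week-long experiment
latencies = []
for _ in range(10):
    start = time.perf_counter()
    run_query()
    latencies.append(time.perf_counter() - start)
    time.sleep(INTERVAL_S)

mean = statistics.mean(latencies)
print(f"mean {mean*1000:.1f} ms, worst/mean ratio {max(latencies)/mean:.1f}x")
```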
