21

Persistentní identifikátory a jejich využití v digitálních knihovnách / Persistent identifiers and their use in digital libraries

Jílková, Marta January 2013 (has links)
The aim of this thesis is to present persistent identification systems and to analyse the use of persistent identifiers for digital objects across a sample of national digital libraries. The first part is a brief introduction to digital libraries. The second part describes selected persistent identification systems. The third part describes the main elements of selected national digital libraries. The last part analyses the use of persistent identifiers in the individual national digital libraries. Keywords: persistent identifier, persistent identification system, URI, URN, PURL, Handle, DOI, ARK, digital library, national library, digital object
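For orientation, the identifier systems named in the keywords all resolve through the same pattern: a scheme prefix is mapped to a resolver URL. A minimal Python sketch (the DOI and Handle proxies are the public ones; the URN regex is a simplification of RFC 8141, and the example identifiers are illustrative):

```python
import re

# Simplified URN syntax check based on RFC 8141 (the NID/NSS character
# classes are reduced to the common cases for readability).
URN_PATTERN = re.compile(r"^urn:[a-z0-9][a-z0-9-]{0,30}:\S+$", re.IGNORECASE)

# Public resolver endpoints for some of the identifier systems discussed.
RESOLVERS = {
    "doi": "https://doi.org/{id}",          # DOI proxy
    "hdl": "https://hdl.handle.net/{id}",   # Handle System proxy
    "ark": "https://n2t.net/ark:/{id}",     # N2T resolver for ARKs
}

def resolver_url(scheme: str, identifier: str) -> str:
    """Map a (scheme, identifier) pair to a resolvable HTTP URL."""
    template = RESOLVERS.get(scheme.lower())
    if template is None:
        raise ValueError(f"no resolver configured for scheme {scheme!r}")
    return template.format(id=identifier)

if __name__ == "__main__":
    # Example URN in the NBN namespace (illustrative, not a real record).
    assert URN_PATTERN.match("urn:nbn:de:example-123456")
    print(resolver_url("doi", "10.1000/182"))   # DOI Handbook DOI
    print(resolver_url("hdl", "2027/mdp.123"))  # hypothetical Handle
```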
22

Bayesian Analysis of Partitioned Demand Models

Smith, Adam Nicholas 26 October 2017 (has links)
No description available.
23

Semiparametric Bayesian Approach using Weighted Dirichlet Process Mixture For Finance Statistical Models

Sun, Peng 07 March 2016 (has links)
The Dirichlet process mixture (DPM) has been widely used as a flexible prior in the nonparametric Bayesian literature, and the weighted Dirichlet process mixture (WDPM) can be viewed as an extension of DPM that relaxes model distribution assumptions. However, WDPM requires setting weight functions and can incur an extra computational burden. In this dissertation, we develop more efficient and flexible WDPM approaches under three research topics. The first is semiparametric cubic spline regression, where we adopt a nonparametric prior for the error terms in order to automatically handle heterogeneity of measurement errors or an unknown mixture distribution; the second provides an innovative way to construct the weight function and illustrates some desirable properties and the computational efficiency of this weight under a semiparametric stochastic volatility (SV) model; and the last develops a WDPM approach for the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model (as an alternative to the SV model) and proposes a new model evaluation approach for GARCH that produces easier-to-interpret results than the canonical marginal likelihood approach. In the first topic, the response variable is modeled as the sum of three parts. One part is a linear function of covariates that enter the model parametrically. The second part is an additive nonparametric model. The covariates whose relationships to the response variable are unclear are included in the model nonparametrically using Lancaster and Šalkauskas bases. The third part consists of error terms whose means and variance are assumed to follow nonparametric priors. We therefore call our model a dual-semiparametric regression, because nonparametric ideas are used both for modeling the mean and for the error terms. Instead of assuming that all of the error terms follow the same prior, as in DPM, our WDPM provides multiple candidate priors for each observation to select with certain probability. This probability (or weight) is modeled from relevant predictive covariates using a Gaussian kernel. We propose several different WDPMs using different weights, which depend on distances in the covariates. We provide efficient Markov chain Monte Carlo (MCMC) algorithms and compare our WDPMs with a parametric model and the DPM model in terms of Bayes factors in a simulation and an empirical study. In the second topic, we propose an innovative way to construct the weight function for WDPM and apply it to the SV model. The SV model is adopted for time series data in which the constant variance assumption is violated. One essential issue is to specify the distribution of the conditional return. We assume a WDPM prior for the conditional return and propose a new way to model the weights. Our approach has several advantages, including computational efficiency compared to the weight constructed using a Gaussian kernel. We list six properties of this proposed weight function and provide proofs of them. Because of the additional Metropolis-Hastings steps introduced by the WDPM prior, we find conditions that ensure the uniform geometric ergodicity of the transition kernel in our MCMC. Owing to the existence of zero values in asset price data, our SV model is semiparametric: we employ the WDPM prior for non-zero values and a parametric prior for zero values. In the third project, we develop the WDPM approach for GARCH-type models and compare different types of weight functions, including the innovative method proposed in the second topic.
The GARCH model can be viewed as an alternative to SV for analyzing daily stock price data in which the constant variance assumption does not hold. While the response variable of our SV models is the transformed log return (based on the log-square transformation), GARCH directly models the log return itself. This means that, theoretically speaking, we are able to predict stock returns using GARCH models, whereas this is not feasible with SV models, because they ignore the sign of log returns and provide predictive densities for the squared log return only. Motivated by this property, we propose a new model evaluation approach called back testing return (BTR), particularly for GARCH. The BTR approach produces model evaluation results that are easier to interpret than the marginal likelihood, and it is straightforward to draw conclusions about model profitability by applying it. Since the BTR approach is only applicable to GARCH, we also illustrate how to properly calculate the marginal likelihood to compare GARCH and SV. Based on our MCMC algorithms and model evaluation approaches, we have conducted a large number of model fits to compare the models in both simulation and empirical studies. / Ph. D.
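To make the GARCH/SV contrast concrete, a minimal simulation sketch (numpy only; parameter values are assumptions for illustration): the GARCH(1,1) recursion models the signed log return y_t directly, while the log-square transform used by the SV models above discards the sign.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85):
    """Simulate log returns y_t from a GARCH(1,1):
       sigma2_t = omega + alpha * y_{t-1}^2 + beta * sigma2_{t-1}."""
    y = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        y[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * y[t] ** 2 + beta * sigma2
    return y

y = simulate_garch11(1000)

# GARCH models y_t itself, so the sign (direction) of the return is kept.
# The log-square transform used in SV modeling removes it:
log_sq = np.log(y ** 2 + 1e-12)  # small offset guards against log(0),
                                 # mirroring the zero-return issue noted above
print(y[:3])       # signed returns
print(log_sq[:3])  # sign information is gone
```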
24

MONARCH - Publikationsserver der Technischen Universität Chemnitz

Blumtritt, Ute 13 May 2005 (has links)
This talk was given on 25 February 2005 at the 5th DissOnline workshop, the closing workshop of the DFG project "Aufbau einer Koordinierungsstelle für elektronische Hochschulschriften" (establishment of a coordination office for electronic university publications). It explains the transfer of electronic theses and dissertations to the Deutsche Bibliothek (DDB) and to the Südwestdeutscher Bibliotheksverbund (SWB). The technology for capturing, preparing and transmitting the data is the same for the DDB and SWB reporting interfaces, and a persistent identifier (URN) is assigned automatically to each document. The metadata entered by the author via an online form are transformed into the XMetaDiss format for harvesting and storage on the server of the Deutsche Bibliothek and are made available for download via the OAI interface. For the dissertations and habilitation theses archived in MONARCH, long-term availability and persistent access to the full text are thus ensured.
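As an illustration of the OAI interface described above, a minimal OAI-PMH ListRecords request in Python (the endpoint URL is hypothetical and the metadata prefix is taken from the abstract; the exact prefix depends on the repository):

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Hypothetical repository endpoint; real MONARCH/DNB endpoints will differ.
BASE_URL = "https://example.org/oai"

params = {
    "verb": "ListRecords",          # OAI-PMH 2.0 harvesting verb
    "metadataPrefix": "xMetaDiss",  # metadata format named in the abstract
}

with urlopen(f"{BASE_URL}?{urlencode(params)}") as response:
    tree = ET.parse(response)

# Print each harvested record's OAI identifier.
ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
for identifier in tree.iterfind(".//oai:header/oai:identifier", ns):
    print(identifier.text)
```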
25

Random allocations: new and extended models and techniques with applications and numerics.

Kennington, Raymond William January 2007 (has links)
This thesis provides a general methodology for classifying and describing many combinatoric problems, systematising and finding theoretical expressions for quantities of interest, and investigating their feasible numerical evaluation. Unifying notation and definitions are provided. Our knowledge of random allocations is also extended. This is achieved by investigating new processes, generalising known processes, and by providing a formal structure and innovative techniques for analysing them. The random allocation models described in this thesis can be classified as occupancy urn models, in which we have a sequence of urns and throw balls into them, and investigate static, waiting-time and dynamic processes. Various structures are placed on the relationship(s) between cells, balls, and the selection of items being distributed, including varieties, batch arrivals, taboo sets and blocking sets. Static, waiting-time and dynamic processes are investigated. Both without-replacement and with-replacement sampling types are considered. Emphasis is placed on the distributions of waiting-times for one or more events to occur measured from the time a particular event occurs; this begins as an abstraction and generalisation of a model of departures of cars parked in lanes. One of several additional determinations is the platoon size distribution. Models are analysed using combinatorial analysis and Markov Chains. Global attributes are measured, including maximum waits, maximum room required, moments and the clustering of completions. Various conversion formulae have been devised to reduce calculation times by several orders of magnitude. New and extended applications include Queueing in Lanes, Cake Displays, Coupon Collector's Problem, Sock-Sorting, Matching Dependent Sets (including Genetic Code Attribute Matching and the game SET), the Zig-Zag Problem, Testing for Randomness (including the Cake Display Test, which is a without-replacement test similar to the standard Empty Cell test), Waiting for Luggage at an Airport, Breakdowns in a Network, Learning Theory and Estimating the Number of Skeletons at an Archaeological Dig. Fundamental, reduction and covering theorems provide ways to reduce the number of calculations required. New combinatorial identities are discovered and a well-known one is proved in a combinatorial way for the first time. Some known results are derived from simple cases of the general models. / http://proxy.library.adelaide.edu.au/login?url=http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1309598 / Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2007
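For readers unfamiliar with occupancy urn models, a minimal sketch of two quantities the abstract refers to: the empty-cell statistic after n balls are thrown uniformly into m urns, and the waiting time until every urn is occupied (the Coupon Collector's Problem). The expected number of empty cells, m(1 - 1/m)^n, is checked against simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def empty_cells(m: int, n: int) -> int:
    """Throw n balls uniformly into m urns; count urns left empty."""
    counts = np.bincount(rng.integers(0, m, size=n), minlength=m)
    return int(np.sum(counts == 0))

def waiting_time_full(m: int) -> int:
    """Number of throws until every one of m urns holds at least one ball
    (the coupon collector's waiting time)."""
    seen, throws = set(), 0
    while len(seen) < m:
        seen.add(int(rng.integers(0, m)))
        throws += 1
    return throws

m, n, reps = 10, 20, 10_000
sim = np.mean([empty_cells(m, n) for _ in range(reps)])
print(sim, m * (1 - 1 / m) ** n)   # simulated vs. theoretical mean
print(waiting_time_full(m))        # one realisation of the waiting time
```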
26

A Biased Urn Model for Taxonomic Identification / Ein gewichtetes Urnenmodell zur taxonomischen Identifikation

Surovcik, Katharina 26 June 2008 (has links)
No description available.
28

Formalising non-functional requirements embedded in user requirements notation (URN) models

Dongmo, Cyrille 11 1900 (has links)
The growing need for computer software in different sectors of activity (health, agriculture, industry, education, aeronautics, science and telecommunications), together with society's increasing reliance on information technology as a whole, is placing a heavy and fast-growing demand on complex, high-quality software systems. In this regard, attention has turned to non-functional requirements (NFRs) engineering and formal methods. Despite their common objective, these techniques have in most cases evolved separately. NFR engineering proceeds first by deriving measures to evaluate the quality of the constructed software (the product-oriented approach), and second by improving the engineering process (the process-oriented approach). With their ability to combine the analysis of both functional and non-functional requirements, Goal-Oriented Requirements Engineering (GORE) approaches have become the de facto leading requirements engineering methods. Through refinement/operationalisation, they propose means to satisfy NFRs, encoded in softgoals, at an early phase of software development. Formal methods, on the other hand, have so far kept their promise to eliminate errors in software artefacts and to produce high-quality software products, and are therefore particularly solicited for safety- and mission-critical systems, for which a single error may cause great loss, including loss of human life. This thesis introduces the concept of Complementary Non-Functional action (CNF-action) to extend the analysis and development of NFRs beyond the traditional goal/softgoal analysis based on refinement/operationalisation, and to propagate the influence of NFRs to other software construction phases. Mechanisms are also developed to integrate the formal technique Z/Object-Z into the standardised User Requirements Notation (URN), in order to formalise GRL models describing functional and non-functional requirements, to propagate CNF-actions of the formalised NFRs to UCM maps, and to improve the URN construction process and the quality of URN models. / School of Computing / D. Phil (Computer Science)
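As a rough illustration of the goal/softgoal analysis that GORE approaches build on (not the author's Z/Object-Z formalisation), the sketch below propagates task satisfaction to softgoals through qualitative contribution links; all node names and the numeric scale are hypothetical simplifications:

```python
# Qualitative contribution labels mapped to a simplified numeric scale.
CONTRIBUTION = {"make": 1.0, "help": 0.5, "hurt": -0.5, "break": -1.0}

# (source task, contribution, target softgoal) -- names are illustrative.
links = [
    ("encrypt_channel", "help", "Security"),
    ("cache_responses", "help", "Performance"),
    ("cache_responses", "hurt", "Security"),
]

# Tasks assumed satisfied under one evaluation strategy.
satisfied = {"encrypt_channel": 1.0, "cache_responses": 1.0}

def evaluate(links, satisfied):
    """Propagate task satisfaction to softgoals through contribution links."""
    scores = {}
    for task, label, softgoal in links:
        scores[softgoal] = scores.get(softgoal, 0.0) + \
            satisfied.get(task, 0.0) * CONTRIBUTION[label]
    return scores

print(evaluate(links, satisfied))  # {'Security': 0.0, 'Performance': 0.5}
```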
29

Les fondements du droit des sépultures / Foundations of the right of burials

Gailliard, Ariane 10 December 2015 (has links)
Burial is often approached as an exception or through a superposition of notions: family co-ownership, family property, a thing outside commerce, perpetual joint ownership, a special real right... This disparate approach conceals the existence of a law of burials which, as a consequence, struggles to form a unified body of law. The law of burials is divided into several branches (civil law, criminal law, public law), which raise numerous questions linked to the nature and regime of the different systems. For this reason, the law of burials must be approached through its foundations, which have remained unchanged since Roman and medieval law. The first foundation is the sacred; the second is the community. Both originate in legal history and remain valid in positive law. They reveal a unity in the law of burials around a double function: to ensure the separation of the dead from the living and to perpetuate the cult of the dead.
From the anthropological viewpoint, the sacred, the first foundation, is distinct from the religious and manifests itself in two operations: the drawing of a boundary between the sacred and the profane through separation, then the protection of this newly delimited space through the repression of any violation. For burials, these two operations are accomplished respectively by extracommerciality and by criminal protection. The first mechanism comes from Roman law and shows an original form of protection of the burial place: every legal activity that is not incompatible with respect for the dead is allowed. The other mechanism is the offence of violation of a burial place, which perpetuates its sacred dimension. The second foundation is communitarian: it appeared for family burial places with the medieval communities, at a time when property and persons were welded into a single family group. Now adapted through family affectation, this foundation persists in our law under a regime of collective ownership, through succession restricted to the family group and an egalitarian principle, which makes the burial place a genuine community property. A sacred thing and a community property, the foundations of burials bring to light original dimensions of ownership.
