  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A Java Framework for Broadcast Encryption Algorithms / A Framework in Java for Performance Testing of Broadcast Encryption Algorithms

Hesselius, Tobias, Savela, Tommy January 2004 (has links)
Broadcast encryption is a fairly new area in cryptology. It was first addressed in 1992, and research in this area has been active ever since. In short, broadcast encryption is used for efficient and secure broadcasting to an authorized group of users. This group can change dynamically, and in some cases only one-way communication between the sender and receivers is available. An example of this is digital TV transmissions via satellite, in which only the paying customers can decrypt and view the broadcast. The purpose of this thesis is to develop a general Java framework for implementation and performance analysis of broadcast encryption algorithms. In addition to the actual framework, a few of the most common broadcast encryption algorithms (Complete Subtree, Subset Difference, and the Logical Key Hierarchy scheme) have been implemented in the system. This master’s thesis project was defined by and carried out at the Information Theory division at the Department of Electrical Engineering (ISY), Linköping Institute of Technology, during the first half of 2004.
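The Complete Subtree method mentioned above places users at the leaves of a binary tree and covers the non-revoked users with a small set of complete subtrees, so that one subtree key per cover element suffices to encrypt the session key. As a rough illustration (not the thesis's Java framework; the function name and heap-style node numbering are my own), the subtree cover can be computed like this:

```python
def subtree_cover(n_leaves, revoked):
    """Return node ids (heap indexing, root = 1) of the subtrees that
    together cover exactly the non-revoked leaves of a full binary tree.

    Leaves occupy ids n_leaves .. 2*n_leaves - 1; user u sits at id
    n_leaves + u.  The cover consists of the children of Steiner-tree
    nodes that are not themselves on a root-to-revoked-leaf path."""
    if not revoked:
        return [1]                      # one subtree: the whole tree
    # Steiner tree: union of the paths from each revoked leaf to the root
    steiner = set()
    for r in revoked:
        node = n_leaves + r
        while node >= 1:
            steiner.add(node)
            node //= 2
    cover = []
    for node in steiner:
        for child in (2 * node, 2 * node + 1):
            if child < 2 * n_leaves and child not in steiner:
                cover.append(child)
    return sorted(cover)
```

For 8 users with user 0 revoked, the cover is the subtrees rooted at nodes 3, 5, and 9: node 3 covers users 4-7, node 5 covers users 2-3, and node 9 is user 1 alone, so three encryptions of the session key reach everyone except the revoked user.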
52

Minimal model reasoning for modal logic

Papacchini, Fabio January 2015 (has links)
Model generation and minimal model generation are useful for tasks such as model checking, query answering and the debugging of logical specifications. Due to this variety of applications, several minimality criteria and model generation methods for classical logics have been studied. Minimal model generation for modal logics, however, has not received the same attention from the research community. This thesis aims to fill this gap by investigating minimality criteria and designing minimal model generation procedures for all the sublogics of the multi-modal logic S5(m) and their extensions with universal modalities. All the procedures are minimal model sound and complete, in the sense that they generate all and only minimal models. The starting point of the investigation is the definition of a Herbrand semantics for modal logics, on which a syntactic minimality criterion is devised. The syntactic nature of the minimality criterion allows for an efficient minimal model generation procedure, but, on the other hand, the resulting minimal models can be redundant or semantically non-minimal with respect to each other. To overcome these syntactic limitations, the thesis moves from minimal modal Herbrand models to semantic minimality criteria based on subset-simulation. At first, theoretical procedures for the generation of models minimal modulo subset-simulation are presented. These procedures are minimal model sound and complete, but they might not terminate. The minimality criterion and the procedures are then refined so that termination is ensured while minimal model soundness and completeness are preserved.
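Subset-simulation, the semantic minimality criterion above, generalizes the classical notion of simulation between modal models. As a simplified illustration of the underlying machinery (an ordinary simulation check between two Kripke models; the data layout and function name are my own, not the thesis's), a candidate relation can be verified against the simulation conditions as follows:

```python
def is_simulation(z, labels1, rel1, labels2, rel2):
    """Check whether z (a set of state pairs) is a simulation from
    model 1 into model 2.

    For every pair (w1, w2) in z: the labels of w1 must be preserved
    at w2, and every successor of w1 must be matched by some successor
    of w2 that is again related by z."""
    for (w1, w2) in z:
        if not labels1[w1] <= labels2[w2]:
            return False                      # label preservation fails
        for v1 in rel1.get(w1, set()):
            if not any((v1, v2) in z for v2 in rel2.get(w2, set())):
                return False                  # a move cannot be matched
    return True
```

A model is then "smaller" than another when such a relation relating their state sets exists in one direction but not the other; the thesis's criterion compares sets of states rather than single states.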
53

Discrepancy-based algorithms for best-subset model selection

Zhang, Tao 01 May 2013 (has links)
The selection of a best-subset regression model from a candidate family is a common problem that arises in many analyses. In best-subset model selection, we consider all possible subsets of regressor variables; thus, numerous candidate models may need to be fit and compared. One of the main challenges of best-subset selection arises from the size of the candidate model family: specifically, the probability of selecting an inappropriate model generally increases as the size of the family increases. For this reason, it is usually difficult to select an optimal model when best-subset selection is attempted based on a moderate to large number of regressor variables. Model selection criteria are often constructed to estimate discrepancy measures used to assess the disparity between each fitted candidate model and the generating model. The Akaike information criterion (AIC) and the corrected AIC (AICc) are designed to estimate the expected Kullback-Leibler (K-L) discrepancy. For best-subset selection, both AIC and AICc are negatively biased, and the use of either criterion will lead to overfitted models. To correct for this bias, we introduce a criterion AICi, which has a penalty term evaluated from Monte Carlo simulation. A multistage model selection procedure AICaps, which utilizes AICi, is proposed for best-subset selection. In the framework of linear regression models, the Gauss discrepancy is another frequently applied measure of proximity between a fitted candidate model and the generating model. Mallows' conceptual predictive statistic (Cp) and the modified Cp (MCp) are designed to estimate the expected Gauss discrepancy. For best-subset selection, Cp and MCp exhibit negative estimation bias. To correct for this bias, we propose a criterion CPSi that again employs a penalty term evaluated from Monte Carlo simulation. We further devise a multistage procedure, CPSaps, which selectively utilizes CPSi. 
In this thesis, we consider best-subset selection in two different modeling frameworks: linear models and generalized linear models. Extensive simulation studies are compiled to compare the selection behavior of our methods and other traditional model selection criteria. We also apply our methods to a model selection problem in a study of bipolar disorder.
54

Future Projection of Drought in the Indochina Region Based on the Optimal Ensemble Subset of CMIP5 Models

CHHIN, Rattana 25 March 2019 (has links)
Kyoto University / Doctor of Science, degree no. 甲第21578号 (理博第4485号), conferred under the new doctoral course system, Article 4, Paragraph 1 of the Degree Regulations / Division of Earth and Planetary Sciences, Graduate School of Science, Kyoto University / Examining committee: Prof. Shigeo Yoden, Prof. Kazunori Akitomo, Assoc. Prof. Keiichi Ishioka / DGAM
55

On Product and Sum Decompositions of Sets: The Factorization Theory of Power Monoids

Antoniou, Austin A. 10 September 2020 (has links)
No description available.
56

Analysis of Subset Chimerism for MRD-Detection and Pre-Emptive Treatment in AML

Georgi, Julia-Annabell, Stasik, Sebastian, Bornhäuser, Martin, Platzbecker, Uwe, Thiede, Christian 05 April 2023 (has links)
Allogeneic hematopoietic stem cell transplantation (alloHCT) represents the only potentially curative treatment in high-risk AML patients, but up to 40% of patients suffer from relapse after alloHCT. Treatment of overt relapse poses a major therapeutic challenge and long-term disease control is achieved only in a minority of patients. In order to avoid post-allograft relapse, maintenance as well as pre-emptive therapy strategies based on MRD-detection have been used. A prerequisite for the implementation of pre-emptive therapy is the accurate identification of patients at risk for imminent relapse. Detection of measurable residual disease (MRD) represents an effective tool for early relapse prediction in the post-transplant setting. However, using established MRD methods such as multicolor flow cytometry or quantitative PCR, sensitive MRD monitoring is only applicable in about half of the patients with AML and advanced MDS undergoing alloHCT. Donor chimerism analysis, in particular when performed on enriched leukemic stem and progenitor cells, e.g. CD34+ cells, is a sensitive method and has emerged as an alternative option in the post alloHCT setting. In this review, we will focus on the current strategies for lineage specific chimerism analysis, results of pre-emptive treatment using this technology as well as future developments in this field.
57

Identification and mapping of landslide areas associated with roads using remote sensing images

Manfré, Luiz Augusto 13 March 2015 (has links)
Geoinformation tools are widely applicable to understanding and mapping landslides. Given the importance of relief components and land cover in this process, it is essential to establish methods that synthesize relief information and identify landslide scars, so as to facilitate the monitoring of risk areas. The objective of this thesis is to propose digital image processing methodologies to map and identify landslide scars near highways. A large landslide with several economic consequences, which occurred in 1999 beside the Anchieta Highway in the Pilões River basin, was used as the study area. Using freely available data, land cover and relief compartment maps were generated and jointly analyzed to identify potential landslide scar areas in the region of the Anchieta and Imigrantes Highways. The relief analysis was performed using object-based classification techniques. Landslide scar areas were identified by evaluating two methodological strategies: one applying the supervised classification algorithm SVM (Support Vector Machine) to the NDVI (Normalized Difference Vegetation Index), and another combining different classifiers into a final classification. The results obtained for relief mapping showed that the proposed methodology has great potential for describing relief features at a higher level of detail, facilitating the identification of areas prone to landslides. Both landslide scar identification methodologies gave good results; the combination of the SVM, Neural Network, and Maximum Likelihood algorithms produced the result best suited to the objectives of this work, reaching an omission error below 10% for the landslide class. Combining the two products allowed the analysis and identification of several potential landslide scar areas associated with highways in the study region. The proposed methodology is widely replicable and can be used for risk analyses associated with urban settlements and linear infrastructure, and for territorial and environmental planning.
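The combination of classifiers described above can take many forms. As a minimal sketch (a plain per-pixel majority vote with an optional tie-break toward the landslide class, so ties favour detection and the omission error stays low; the thesis's actual combination rule may differ), votes can be merged like this:

```python
from collections import Counter

def combine_votes(predictions, tie_break=None):
    """Majority vote over per-pixel class labels from several classifiers.

    predictions: list of equally long label sequences, one per classifier.
    tie_break:   label to prefer when the vote is tied (e.g. the
                 landslide class), purely an illustrative choice here."""
    combined = []
    for labels in zip(*predictions):
        counts = Counter(labels)
        top = max(counts.values())
        winners = [lab for lab, c in counts.items() if c == top]
        combined.append(tie_break if tie_break in winners else winners[0])
    return combined
```

With three classifiers (say, SVM, neural network, and maximum likelihood outputs), each pixel gets the label at least two of them agree on, which is one simple way a combined map can reach a lower omission error than any single classifier.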
58

Sequential Design of Experiments to Estimate a Probability of Failure

Li, Ling 16 May 2012 (has links)
This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the simulation budget is usually severely limited, which is incompatible with classical Monte Carlo methods. Indeed, estimating a small probability of failure from very few simulations, as required in some complex industrial problems, is a particularly difficult task. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that requires few computational resources. With such a surrogate model, two operations can be carried out. The first consists in choosing as few simulations as possible to learn the regions of the system's parameter space that lead to failure. The second is the construction of good estimators of the probability of failure. The contributions of this thesis are twofold. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that combines the strengths of the Subset Simulation algorithm and of sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results on several benchmark examples from the reliability literature, where the proposed methods perform well compared to existing approaches.
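The (non-Bayesian) Subset Simulation algorithm that the proposed method builds on expresses a rare-event probability as a product of larger conditional probabilities, estimated level by level. A bare-bones sketch for a standard Gaussian input, with adaptive levels at the p0-quantile and a single component-wise Metropolis step per refill (the thesis's Bayesian version additionally replaces the expensive g with a Gaussian process surrogate, which is not shown here; all names are illustrative):

```python
import math, random

def subset_simulation(g, dim, threshold, n=1000, p0=0.1, seed=1):
    """Estimate P[g(X) >= threshold] for X ~ N(0, I_dim) by splitting
    the rare event into a chain of more probable conditional events."""
    rng = random.Random(seed)
    samples = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(50):                       # cap on the number of levels
        vals = sorted((g(x) for x in samples), reverse=True)
        n_keep = int(p0 * n)
        level = vals[n_keep - 1]              # adaptive intermediate level
        if level >= threshold:                # final level reached
            n_fail = sum(1 for x in samples if g(x) >= threshold)
            return prob * n_fail / n
        prob *= p0
        seeds = [x for x in samples if g(x) >= level][:n_keep]
        # refill the population by Metropolis moves inside the current level
        samples = []
        for i in range(n):
            x = list(seeds[i % n_keep])
            cand = [xi + 0.5 * rng.gauss(0, 1) for xi in x]
            for j in range(dim):              # accept per component w.r.t. N(0,1)
                if rng.random() >= math.exp(min(0.0, (x[j] ** 2 - cand[j] ** 2) / 2)):
                    cand[j] = x[j]
            samples.append(cand if g(cand) >= level else x)
    return prob
```

For g(x) = x[0] and threshold 3, the true probability is about 1.3e-3; a crude Monte Carlo estimate with the same budget of a few thousand evaluations would see almost no failures, while the level-splitting estimate lands in the right order of magnitude.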
59

Parameter Estimation and Optimal Design Techniques to Analyze a Mathematical Model in Wound Healing

Karimli, Nigar 01 April 2019 (has links)
For this project, we use a modified version of a previously developed mathematical model that describes the relationships among matrix metalloproteinases (MMPs), their tissue inhibitors (TIMPs), and the extracellular matrix (ECM). Our ultimate goal is to quantify and understand differences in parameter estimates between patients in order to predict future responses and individualize treatment for each patient. By analyzing parameter confidence intervals, together with confidence and prediction intervals for the state variables, we develop a parameter space reduction algorithm that yields better predictions of each patient's future response. We also employ another subset selection method, Structured Covariance Analysis, which accounts for the identifiability of parameters. Furthermore, to estimate parameters more efficiently and accurately, we apply the standard-error (SE-) optimal design method, which calculates optimal observation times at which clinical data should be collected. Finally, by combining different parameter subset selection methods with an optimal design problem, we investigate different cases for finding both optimal time points and optimal intervals.
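Sensitivity-based subset selection and SE-optimal design both start from the sensitivity matrix of the model output with respect to the parameters. The sketch below uses a hypothetical two-parameter exponential-decay model rather than the MMP/TIMP/ECM system, and computes asymptotic standard errors from the 2x2 Fisher information; adding observation times shrinks the standard errors, which is the effect an SE-optimal design criterion optimizes over:

```python
import math

def sensitivities(model, params, times, h=1e-6):
    """Central finite-difference sensitivity matrix d model / d params,
    one row per observation time."""
    F = []
    for t in times:
        row = []
        for j in range(len(params)):
            up = list(params); up[j] += h
            dn = list(params); dn[j] -= h
            row.append((model(up, t) - model(dn, t)) / (2 * h))
        F.append(row)
    return F

def standard_errors(F, sigma):
    """Asymptotic standard errors sqrt(sigma^2 * diag((F'F)^-1))
    for a two-parameter model, via explicit 2x2 inversion."""
    a = sum(r[0] * r[0] for r in F)
    b = sum(r[0] * r[1] for r in F)
    d = sum(r[1] * r[1] for r in F)
    det = a * d - b * b            # determinant of the Fisher information
    return [sigma * math.sqrt(d / det), sigma * math.sqrt(a / det)]
```

An SE-optimal design would search over candidate time grids for the one minimizing these standard errors; parameters whose standard errors stay large no matter the design are the poorly identifiable ones a subset selection method would fix at nominal values.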
60

Algorithms for irreducible infeasible subset detection in CSP - Application to frequency planning and graph k-coloring

Hu, Jun 27 November 2012 (has links)
The frequency assignment problem (FAP) consists in assigning frequencies to the radio links of a network while respecting a given frequency spectrum and electromagnetic interference constraints among the links. Given the limited spectrum resources for each application, frequency resources are often insufficient to deploy a network without interference. In this case, the network is over-constrained and the problem is infeasible; solving it then amounts to identifying the over-constrained zones so that their design can be revised. The work presented here concerns the search for one of these zones with an algorithmic approach based on modeling the problem as a constraint satisfaction problem (CSP), represented by a triple: a set of variables (the radio links), a set of constraints (the electromagnetic interferences), and a set of domains (the admissible frequencies). In CSP form, a disturbed zone can be viewed as an irreducible infeasible subset (IIS) of the problem. An IIS is an infeasible subproblem of minimal size, meaning that every proper subset of an IIS is feasible. Identifying an IIS in a CSP is of interest for two general reasons. First, locating an IIS makes it easier to prove the infeasibility of a given problem: since the IIS is assumed to be small relative to the complete problem, its infeasibility can be established much faster than that of the whole problem. Second, it locates the cause of infeasibility; on a real problem, the decision maker can then propose relaxations of the constraints inside the IIS, possibly arriving at a feasible solution. Searching for an IIS is thus a fundamental problem and a decision-support tool. This work proposes algorithms to identify an IIS in an inconsistent CSP. These algorithms have been tested on well-known benchmark instances of the frequency assignment problem and of the graph k-coloring problem. The results show a significant improvement on FAP instances compared to known methods.
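A standard baseline for extracting an IIS, which dedicated algorithms improve upon, is the deletion filter: drop each constraint in turn and keep the drop whenever the remaining problem is still infeasible; what survives is irreducible. A sketch on the graph k-coloring formulation (brute-force feasibility check, so only suitable for tiny instances; the function names are my own):

```python
import itertools

def colorable(edges, nodes, k):
    """Brute-force test whether the graph is k-colorable."""
    nodes = sorted(nodes)
    for assign in itertools.product(range(k), repeat=len(nodes)):
        col = dict(zip(nodes, assign))
        if all(col[u] != col[v] for u, v in edges):
            return True
    return False

def deletion_filter(edges, k):
    """Shrink an infeasible k-coloring instance to an IIS: remove each
    edge constraint in turn, keeping the removal only if the remaining
    problem stays infeasible."""
    nodes = {v for e in edges for v in e}
    assert not colorable(edges, nodes, k), "instance must be infeasible"
    core = list(edges)
    for e in list(core):
        trial = [c for c in core if c != e]
        if not colorable(trial, nodes, k):
            core = trial
    return core
```

On K4 with 3 colors, dropping any single edge makes the graph 3-colorable, so the whole edge set is itself the IIS; on larger over-constrained instances the filter peels the problem down to one small infeasible core at the cost of one feasibility check per constraint.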
