201

Developing a Semi-Automatised Tool for Grading Brain Tumours with Susceptibility-Weighted MRI

Duvaldt, Maria January 2015 (has links)
Gliomas are a common type of brain tumour, and for the treatment of a patient it is important to determine the tumour's grade of malignancy. Today this is done by biopsy, a histopathological analysis of the tumourous tissue, which is classified by the World Health Organization on a malignancy scale from I to IV. Recent studies have shown that the local image variance (LIV) and the intratumoural susceptibility signal (ITSS) in susceptibility-weighted MR images correlate with tumour grade. This thesis project aims to develop a software program to aid radiologists when grading a glioma. Using image analysis, the software should be able to separate gliomas into low grade (I-II) and high grade (III-IV). The result is a graphical user interface written in Python 3.4.3. The user chooses an image, draws a region of interest and starts the analysis. The analyses implemented in the program are the LIV and ITSS measures mentioned above, and the code can be extended with other types of analyses as research progresses. To validate the image analysis, 16 patients with glioma grades confirmed by biopsy were included in the study. Their susceptibility-weighted MR images were analysed with respect to LIV and ITSS, and the outcome of those analyses was tested against the known grades of the patients. For LIV, no statistically significant difference could be seen between the high and low grade groups. This was probably due to haemorrhage and calcification, which are characteristic of some tumours and were interpreted as blood vessels. For ITSS, a statistically significant difference was seen between the high and low grade groups (p < 0.02), with a sensitivity of 80% and a specificity of 100%. Among these 16 gliomas, 11 were astrocytic tumours, and between low and high grade astrocytomas statistically significant differences were shown: the degree of LIV differed between the two groups (p < 0.03) with a sensitivity of 86% and a specificity of 100%, and the degree of ITSS differed between the two groups (p < 0.04), also with a sensitivity of 86% and a specificity of 100%. Spearman correlation showed a correlation between LIV and tumour grade (for all gliomas r = 0.53 and p < 0.04; for astrocytomas r = 0.84 and p < 0.01). A correlation was also found between ITSS and tumour grade (for all gliomas r = 0.69 and p < 0.01; for astrocytomas r = 0.63 and p < 0.04). The results indicate that SWI is useful for distinguishing between high and low grade astrocytomas at 1.5 T within this cohort. It also seems possible to distinguish between high and low grade gliomas with ITSS.
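As a rough illustration of the kind of analysis described here, the sketch below computes a local-image-variance score over a region of interest. The 5x5 window, the synthetic image and the circular ROI are illustrative assumptions, not the parameters used in the thesis.

```python
# A minimal sketch of a local image variance (LIV) measurement over a
# region of interest, assuming NumPy and SciPy are available.
import numpy as np
from scipy.ndimage import uniform_filter

def local_image_variance(image, roi_mask, window=5):
    """Mean local variance of `image` inside the boolean `roi_mask`.

    Per pixel, Var(X) = E[X^2] - E[X]^2 over a sliding window.
    """
    img = image.astype(float)
    mean = uniform_filter(img, size=window)           # local mean E[X]
    mean_sq = uniform_filter(img * img, size=window)  # local E[X^2]
    liv_map = mean_sq - mean * mean                   # local variance map
    return liv_map[roi_mask].mean()

# Usage on a synthetic SWI-like slice with a circular ROI:
rng = np.random.default_rng(0)
slice_ = rng.normal(100.0, 10.0, size=(256, 256))
yy, xx = np.mgrid[:256, :256]
roi = (yy - 128) ** 2 + (xx - 128) ** 2 < 40 ** 2
print(local_image_variance(slice_, roi))
```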
202

Political Economy of Committee Voting and Its Application

Takagi, Yuki 27 September 2013 (has links)
This dissertation consists of three essays on information aggregation in committees and its applications. The first essay analyzes how the distribution of votes affects the accuracy of group decisions. In a weighted voting system, votes are typically assigned based on criteria unrelated to the voters' ability to make a correct judgment. I introduce an information aggregation model in which voters are identical except for their voting shares. If information is free, the optimal weight distribution is equal weighting. When acquiring information is costly, by contrast, I show that the accuracy of group decisions may be higher under some weighted majority rules than under unweighted majority rule. I characterize the equilibrium and find the weight distribution that maximizes the accuracy of group decisions. Asymmetric weight distributions may be optimal when the cost of improving the signal is moderately high. The second essay analyzes how intergenerational family transfers can be sustained. Why are generous transfers from the younger to the older generations made in some families and not in others? My paper argues that differences in intergenerational dependence are due to variation in community networks. My analysis of the sustainability of intergenerational transfers posits game-theoretic models of overlapping generations in which breadwinners make transfers to their parents and children. A novel feature of my models is that there is a local community that may supply information about its members' past behaviors. I demonstrate that an efficient level of intergenerational transfers can be sustained if neighbors gossip about each other. The third essay, co-authored with Fuhito Kojima, investigates jury decisions when hung juries and retrials are possible. When jurors in subsequent trials know that previous trials resulted in hung juries, informative voting can be an equilibrium if and only if the accuracies of signals for innocence and guilt are exactly identical. Moreover, if jurors are informed of the numerical split of votes in previous trials, informative voting is not an equilibrium regardless of signal accuracy. / Government
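As a stylized illustration of the information-aggregation setting in the first essay (not the dissertation's actual model), the sketch below simulates weighted majority voting by sincere voters who receive independent noisy signals about a binary state; the signal accuracy and weight vectors are made-up values.

```python
# A stylized sketch of weighted majority voting with noisy signals,
# assuming only NumPy. Parameters are illustrative.
import numpy as np

def decision_accuracy(weights, signal_accuracy=0.6, trials=100_000, seed=0):
    """Probability that a weighted majority vote matches the true state.

    Each voter independently observes the correct state with probability
    `signal_accuracy` and votes sincerely.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    # Votes: +1 if the voter's signal is correct, -1 otherwise (the true
    # state is symmetric, so it can be fixed to +1 without loss).
    votes = np.where(rng.random((trials, w.size)) < signal_accuracy, 1.0, -1.0)
    return np.mean(votes @ w > 0)

# With free, equally accurate signals, equal weighting does best:
print(decision_accuracy([1, 1, 1, 1, 1]))   # equal weights
print(decision_accuracy([3, 1, 1, 1, 1]))   # one dominant voter
```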
203

  • Μελέτη κατανομών μεγέθους συστάδας για επιγενή Poisson και συναφείς ασυμπτωτικές κατανομές / A study of cluster-size distributions for generalized Poisson and related asymptotic distributions

Κουσίδης, Σωκράτης 09 October 2008 (has links)
In the interpretation of biological data where the units under study occur in clusters of random size and number, generalized (stopped-sum) distributions play a central role. In particular, every univariate discrete distribution that is infinitely divisible can be represented as a generalized Poisson distribution. The case where the cluster-size distribution (csd) is a generalized (with a newly introduced parameter) size-biased (gsb) logarithmic distribution has been studied previously; taking limits of this parameter yields the NNBD and Pólya-Aeppli distributions as limiting cases. This thesis studies the distribution that arises when the gsb of an arbitrary distribution is used as the csd. The probability generating function is given and the asymptotic distributions are determined in the general case. The properties of the distribution are also studied, and estimators are given by the methods of moments and maximum likelihood. In particular, the case of the truncated Poisson distribution, which gives the Neyman and Thomas distributions as limiting cases, is presented, and data are simulated. The results already proved for the logarithmic distribution are also derived as a special case of the general formulas. Corresponding generalized bivariate models of such distributions are then developed; their marginal and conditional distributions are given, their moments are computed, and useful relations for the bivariate models are obtained. Finally, special cases such as the Sum-Symmetric Power-Series distributions are presented, and applications of the bivariate distributions studied are given.
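For readers unfamiliar with stopped-sum ("generalized") distributions, the sketch below simulates a generalized Poisson variable as a Poisson-distributed number of clusters whose sizes are drawn from a cluster-size distribution. The zero-truncated-Poisson csd mirrors the special case treated in the thesis, but the parameters are illustrative.

```python
# A minimal sketch of sampling from a generalized (stopped-sum) Poisson
# distribution: X = sum of N cluster sizes, with N ~ Poisson(lam) and the
# cluster sizes drawn from the csd. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def zero_truncated_poisson(mu, size):
    """Sample cluster sizes from Poisson(mu) conditioned on being >= 1."""
    out = np.empty(size, dtype=int)
    for i in range(size):
        x = 0
        while x == 0:              # rejection sampling of the zero class
            x = rng.poisson(mu)
        out[i] = x
    return out

def generalized_poisson(lam, mu, n_samples):
    counts = rng.poisson(lam, n_samples)        # number of clusters N
    return np.array([zero_truncated_poisson(mu, n).sum() for n in counts])

sample = generalized_poisson(lam=3.0, mu=1.5, n_samples=10_000)
print(sample.mean(), sample.var())  # compare against E[N] * E[cluster size]
```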
204

Diffusion-weighted Imaging (DWI) und Diffusion-tensor Imaging (DTI) zur Analyse möglicher Ausbreitungswege/-formen von malignen Gliomen / Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) in the analysis of possible pathways and patterns of infiltration of malignant glioma

Goldmann, Torben 04 June 2013 (has links)
No description available.
205

Solving MAXSAT by Decoupling Optimization and Satisfaction

Davies, Jessica 08 January 2014 (has links)
Many problems that arise in the real world are difficult to solve partly because they present computational challenges. Many of these challenging problems are optimization problems. In the real world we are generally interested not just in solutions but in the cost or benefit of these solutions according to different metrics. Hence, finding optimal solutions is often highly desirable and sometimes even necessary. The most effective computational approach to solving such problems is to first model them in a mathematical or logical language, and then solve them by applying a suitable algorithm. This thesis is concerned with developing practical algorithms to solve optimization problems modeled in a particular logical language, MAXSAT. MAXSAT is a generalization of the famous Satisfiability (SAT) problem that associates finite costs with falsifying various desired conditions, where these conditions are expressed as propositional clauses. Optimization problems expressed in MAXSAT typically have two interacting components: the logical relationships between the variables expressed by the clauses, and the optimization component involving minimizing the falsified clauses. The interaction between these components contributes greatly to the difficulty of solving MAXSAT. The main contribution of the thesis is a new hybrid approach, MaxHS, for solving MAXSAT. Our hybrid approach attempts to decouple these two components so that each can be solved with a different technology. In particular, we develop a hybrid solver that exploits two sophisticated technologies with divergent strengths: SAT for solving the logical component, and Integer Programming (IP) solvers for solving the optimization component. MaxHS automatically and incrementally splits the MAXSAT problem into two parts that are given to the SAT and IP solvers, which work together in a complementary way to find a MAXSAT solution. The thesis investigates several improvements to the MaxHS approach and provides empirical analysis of its behaviour in practice. The result is a new solver, MaxHS, shown to be the most robust existing solver for MAXSAT.
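The core decoupling loop can be sketched roughly as follows, here using the third-party python-sat and PuLP packages. This follows the generic implicit-hitting-set pattern for MAXSAT, not necessarily the exact refinements in the thesis.

```python
# A rough sketch of the implicit-hitting-set loop behind the MaxHS idea,
# assuming the python-sat (pysat) and PuLP packages are installed. This is
# a generic illustration of the SAT/IP decoupling, not the thesis's solver.
import pulp
from pysat.formula import CNF
from pysat.solvers import Glucose3

def maxhs_sketch(hard, soft, weights):
    """hard/soft: lists of clauses (lists of non-zero ints);
    weights[i]: cost of falsifying soft clause i."""
    n_vars = max(abs(lit) for cl in hard + soft for lit in cl)
    # One fresh selector per soft clause: assuming s_i forces clause i.
    sels = list(range(n_vars + 1, n_vars + 1 + len(soft)))
    cnf = CNF(from_clauses=hard)
    for s, cl in zip(sels, soft):
        cnf.append(cl + [-s])

    cores = []  # each core: a set of selectors that cannot all hold
    with Glucose3(bootstrap_with=cnf.clauses) as sat:
        while True:
            # IP side: minimum-cost hitting set over the cores found so far.
            x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(soft))]
            ip = pulp.LpProblem("hitting_set", pulp.LpMinimize)
            ip += pulp.lpSum(w * xi for w, xi in zip(weights, x))
            for core in cores:
                ip += pulp.lpSum(x[sels.index(s)] for s in core) >= 1
            ip.solve(pulp.PULP_CBC_CMD(msg=False))
            relaxed = {sels[i] for i in range(len(soft)) if x[i].value() > 0.5}
            # SAT side: assume every soft clause outside the hitting set.
            if sat.solve(assumptions=[s for s in sels if s not in relaxed]):
                cost = sum(w for w, s in zip(weights, sels) if s in relaxed)
                return cost, sat.get_model()
            cores.append(set(sat.get_core()))  # new core refines the IP
```

Each iteration either finds a satisfying assignment (optimal, since the hitting set was minimum-cost) or discovers a new core, so the loop terminates.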
207

Specification and verification of quantitative properties : expressions, logics, and automata

Monmège, Benjamin 24 October 2013 (has links) (PDF)
Automatic verification has nowadays become a central domain of investigation in computer science. Over the past 25 years, a rich theory has been developed, leading to numerous tools, both academic and industrial, allowing the verification of Boolean properties - those that can be either true or false. Current needs are evolving toward a finer, more quantitative analysis. The extension of verification techniques to quantitative domains began 15 years ago with probabilistic systems. However, many other quantitative properties are of interest, such as the lifespan of a piece of equipment, the energy consumption of an application, the reliability of a program, or the number of results matching a database query. Expressing these properties requires new specification languages, as well as algorithms checking these properties over a given structure. This thesis investigates several formalisms, equipped with weights, able to specify such properties: denotational ones - like regular expressions, first-order logic with transitive closure, or temporal logics - and more operational ones, like navigating automata, possibly extended with pebbles. A first objective of this thesis is to establish expressiveness results comparing these formalisms. In particular, we give efficient translations from the denotational formalisms to the operational one. These objects, and the associated results, are presented in a unified framework of graph structures, which permits handling finite words and trees, nested words, pictures, and Mazurkiewicz traces as special cases. Possible applications are therefore the verification of quantitative properties of traces of programs (possibly recursive, or concurrent), the querying of XML documents (modeling databases, for example), and natural language processing. Second, we tackle some of the algorithmic questions that naturally arise in this context, like evaluation, satisfiability, and model checking. In particular, we study decidability and complexity results for these problems depending on the underlying semiring and the structures under consideration (words, trees...). Finally, we consider some interesting restrictions of the previous formalisms. Some permit extending the class of semirings over which quantitative properties may be specified. Another is dedicated to the special case of probabilistic specifications: in particular, we study syntactic fragments of our generic specification formalisms that generate only probabilistic behaviors.
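As a toy illustration of weighted specification (not an example from the thesis), the sketch below evaluates a word in a weighted automaton over an arbitrary semiring by multiplying one transition matrix per letter; instantiated with the counting semiring, it computes how many occurrences of the pattern "ab" a word contains.

```python
# A toy sketch of evaluating a weighted automaton over a semiring: the
# value of a word is init . M(a1) . ... . M(an) . final, with all
# operations taken in the chosen semiring. Illustrative only.
from dataclasses import dataclass

@dataclass
class Semiring:
    zero: object
    one: object
    plus: callable
    times: callable

counting = Semiring(0, 1, lambda a, b: a + b, lambda a, b: a * b)

def mat_mul(sr, A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[sr.zero] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                C[i][j] = sr.plus(C[i][j], sr.times(A[i][k], B[k][j]))
    return C

def evaluate(sr, init_row, matrices, final_col, word):
    row = init_row                     # 1 x n row vector
    for letter in word:
        row = mat_mul(sr, row, matrices[letter])
    return mat_mul(sr, row, final_col)[0][0]

# Automaton counting occurrences of "ab": state 0 loops before a guessed
# occurrence, state 1 has read its 'a', state 2 loops after it; the
# number of accepting paths equals the number of occurrences.
M = {
    "a": [[1, 1, 0], [0, 0, 0], [0, 0, 1]],
    "b": [[1, 0, 0], [0, 0, 1], [0, 0, 1]],
}
init, final = [[1, 0, 0]], [[0], [0], [1]]
print(evaluate(counting, init, M, final, "abab"))  # -> 2
```

Swapping in another semiring (e.g. max/plus for best-cost runs) reuses the same evaluation code unchanged, which is the appeal of the semiring framework.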
208

Kompiuterių tinklo srautų anomalijų aptikimo metodai / Detection of network traffic anomalies

Krakauskas, Vytautas 03 June 2006 (has links)
This paper describes various network monitoring technologies and anomaly detection methods. NetFlow was chosen for the anomaly detection system being developed. Anomalies are detected using a deviation value. After evaluating the quality of the developed system, new enhancements were suggested and implemented. A flow data distribution was suggested to achieve a more precise representation of NetFlow data, enabling more precise use of network monitoring information for anomaly detection. Arithmetic average calculations were replaced with the more flexible Exponentially Weighted Moving Average (EWMA) algorithm. A deviation weight was introduced to reduce false alarms. Results from an experiment with real-life data showed that the proposed changes increased the precision of the NetFlow-based anomaly detection system.
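A minimal sketch of EWMA-based deviation detection, in the spirit of the approach described here; the smoothing factor, threshold and warm-up length are illustrative assumptions, not the thesis's tuned values.

```python
# A minimal sketch of EWMA-based traffic anomaly detection, taking
# per-interval flow counts as input. alpha, threshold and warmup are
# illustrative choices.
def ewma_anomalies(samples, alpha=0.2, threshold=3.0, warmup=5):
    """Flag intervals whose value deviates from the EWMA baseline by more
    than `threshold` times the EWMA of the absolute deviation."""
    mean = samples[0]
    dev = 0.0
    flagged = []
    for i, x in enumerate(samples[1:], start=1):
        err = abs(x - mean)
        if i >= warmup and err > threshold * dev:
            flagged.append(i)          # anomaly: keep it out of the baseline
        else:
            mean = alpha * x + (1 - alpha) * mean   # EWMA baseline update
            dev = alpha * err + (1 - alpha) * dev   # EWMA deviation update
    return flagged

# Usage with synthetic flows-per-minute counts and one injected spike:
traffic = [100, 104, 98, 101, 97, 103, 420, 99, 102, 100]
print(ewma_anomalies(traffic))  # -> [6]
```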
209

State Estimation in Electrical Networks

Mosbah, Hossam 08 January 2013 (has links)
The continuous growth of the electric power grid through the addition of new substations leads to the construction of many new transmission lines, transformers, control devices, and circuit breakers to connect the capacity (generators) to the demand (loads). These components have a very heavy influence on the performance of the electric grid. Reliable technical solutions to the resulting monitoring problems can be provided by robust algorithms that give a full picture of the current state of the electrical network by tracking the behavior of phase angles and voltage magnitudes. The main idea in this thesis is to implement several algorithms, including weighted least squares, the extended Kalman filter, and an interior point method, on three different electrical networks, the IEEE 14-, 30-, and 118-bus systems, in order to compare the performance of these algorithms, represented by the behavior of the estimated phases and voltage magnitudes, and to minimize the residuals of the balanced load flow real-time measurements so as to distinguish which algorithm is more robust. A further aim is to gain a clear understanding of the comparison between unconstrained and constrained algorithms.
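A minimal sketch of the weighted-least-squares step at the heart of such estimators, on a linearized (DC) measurement model; the 3-bus network and measurement values below are made up for illustration.

```python
# A minimal sketch of weighted least squares (WLS) state estimation on a
# linearized measurement model z = H x + e: the estimate solves
# (H^T W H) x = H^T W z, with weights equal to inverse measurement
# variances. The example data are illustrative only.
import numpy as np

def wls_estimate(H, z, sigmas):
    """Return the WLS state estimate and the measurement residuals."""
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)   # weights = 1 / variance
    G = H.T @ W @ H                              # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    return x_hat, z - H @ x_hat

# Toy DC model: states are two bus phase angles (slack bus removed);
# measurements are line flows/injections with stated noise levels.
H = np.array([[10.0, 0.0],
              [0.0, 8.0],
              [10.0, -8.0],
              [-20.0, 8.0]])
z = np.array([1.02, 0.48, 0.55, -1.56])
x_hat, residuals = wls_estimate(H, z, sigmas=[0.01, 0.01, 0.02, 0.02])
print(x_hat, residuals)
```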
210

A NEW TEST TO BUILD CONFIDENCE REGIONS USING BALANCED MINIMUM EVOLUTION

Dai, Wei 16 August 2013 (has links)
In phylogenetic analysis, an important issue is the construction of confidence regions for gene trees estimated from DNA sequences. Usually, estimation of the trees is the initial step. Maximum likelihood methods are widely applied, but few tests are based on distance methods. In this thesis, we propose a new test based on balanced minimum evolution (BME). We first examine the normality assumption of pairwise distance estimates under various model misspecifications and also examine their variances, MSEs and squared biases. Then we compare the BME method with the WLS method in true tree reconstruction under different variance structures and model pairs. Finally, we develop a new test for finding a confidence region for the tree based on the BME method and demonstrate its effectiveness through simulation.
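For context, the balanced minimum evolution criterion scores a candidate topology by Pauplin's formula, which weights each pairwise distance by 2 to the power of minus the number of edges on the leaf-to-leaf path. The sketch below computes that score; the 4-taxon tree and distances are made up for illustration.

```python
# A sketch of the balanced minimum evolution (BME) score of a topology
# via Pauplin's formula: L(T) = sum over leaf pairs of d_ij * 2^(-p_ij),
# where p_ij counts the edges between leaves i and j. Example data made up.
from collections import deque
from itertools import combinations

def edge_counts(adj, leaf):
    """BFS from `leaf`, returning the number of edges to every node."""
    dist = {leaf: 0}
    queue = deque([leaf])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def bme_length(adj, leaves, d):
    total = 0.0
    for a, b in combinations(leaves, 2):
        p = edge_counts(adj, a)[b]             # topological distance
        total += d[frozenset((a, b))] * 2.0 ** (-p)
    return total

# Unrooted quartet ((A,B),(C,D)) with internal nodes u and v:
adj = {"A": ["u"], "B": ["u"], "C": ["v"], "D": ["v"],
       "u": ["A", "B", "v"], "v": ["C", "D", "u"]}
d = {frozenset(p): w for p, w in [(("A", "B"), 0.3), (("A", "C"), 0.9),
                                  (("A", "D"), 1.0), (("B", "C"), 0.8),
                                  (("B", "D"), 0.9), (("C", "D"), 0.4)]}
print(bme_length(adj, ["A", "B", "C", "D"], d))  # (0.3+0.4)/4 + others/8
```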
