521

The Method Of Lines Solution Of Discrete Ordinates Method For Nongray Media

Cayan, Fatma Nihan 01 July 2006 (has links) (PDF)
A radiation code based on the method of lines (MOL) solution of the discrete ordinates method (DOM) was developed for the prediction of radiative heat transfer in nongray absorbing-emitting media by incorporating two gas spectral radiative property models: the wide band correlated-k (WBCK) model and the spectral line-based weighted sum of gray gases (SLW) model. The predictive accuracy and computational efficiency of the code were assessed by applying it to the prediction of source term distributions and net wall radiative heat fluxes in several one- and two-dimensional test problems, including isothermal/non-isothermal and homogeneous/non-homogeneous media of water vapor, carbon dioxide, or a mixture of both, and by benchmarking its steady-state predictions against line-by-line (LBL) solutions and measurements available in the literature. To demonstrate the improvement these two spectral models bring over the gray gas approximation, their predictions were also compared with those of the gray gas model. Comparisons reveal that the MOL solution of the DOM with the SLW model produces the most accurate radiative heat fluxes and source terms, at the expense of computation time, when compared with the MOL solution of the DOM with the WBCK and gray gas models. A parametric study was also performed to identify the conditions under which source term predictions obtained with the gray gas model are acceptably accurate for engineering applications when compared with those of the spectral models; the comparisons reveal reasonable agreement for problems containing low concentrations of absorbing-emitting media at low temperatures. Overall evaluation of the radiation code developed in this study shows that it provides accurate solutions with the SLW model and can be used with confidence in conjunction with computational fluid dynamics (CFD) codes based on the same approach.
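For orientation, the discrete ordinates method replaces the angular dependence of the radiative transfer equation with a finite set of ordinate directions; a minimal sketch for a gray, absorbing-emitting, non-scattering medium in one dimension (the thesis's nongray formulation attaches a spectral model to the absorption coefficient):

\[
\mu_m \frac{d I_m}{d x} = \kappa \left( I_b - I_m \right), \qquad m = 1, \dots, M,
\]

where \(I_m\) is the intensity along ordinate m with direction cosine \(\mu_m\), \(\kappa\) is the absorption coefficient, and \(I_b\) is the blackbody intensity. In the MOL approach, the spatial derivative is discretized first and the resulting system of ordinary differential equations is integrated in pseudo-time to steady state.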
522

Efficient Index Structures For Video Databases

Acar, Esra 01 February 2008 (has links) (PDF)
Content-based retrieval of multimedia data remains an active research area, and efficient retrieval of video data has proven to be a difficult task for content-based video retrieval systems. This thesis presents a Content-Based Video Retrieval (CBVR) system that adapts two different index structures, Slim-Tree and BitMatrix, for efficiently retrieving videos based on low-level features such as color, texture, shape and motion. The system represents the low-level features of video data with MPEG-7 descriptors extracted from video shots using the MPEG-7 reference software and stored in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS) and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator aggregates these features in both Slim-Tree and BitMatrix to compute the final similarity between any two objects. The system supports three types of queries: exact match queries, k-NN queries and range queries. The experiments in this study cover index construction, index update, query response time and retrieval efficiency, measured using the ANMRR performance metric and precision/recall scores. The experimental results show that BitMatrix combined with Ordered Weighted Averaging is superior for content-based video retrieval systems.
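For illustration, an OWA operator reorders the per-feature similarity scores before applying a fixed weight vector, so the weights attach to rank positions rather than to particular descriptors; a minimal sketch, with descriptor similarities and weights that are illustrative values, not taken from the thesis:

```python
def owa(scores, weights):
    """Ordered Weighted Averaging: weights apply to the sorted
    scores (largest first), not to particular features."""
    if len(scores) != len(weights):
        raise ValueError("scores and weights must have equal length")
    ordered = sorted(scores, reverse=True)  # rank the similarities
    return sum(w * s for w, s in zip(weights, ordered))

# Hypothetical per-descriptor similarities between two video shots
# (CL, DC, EH, RS, MA) and a weight vector favoring the
# best-matching descriptors.
similarities = [0.92, 0.45, 0.71, 0.30, 0.66]
weights = [0.4, 0.3, 0.15, 0.1, 0.05]  # must sum to 1
print(owa(similarities, weights))  # aggregated similarity
```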
523

Multi-class Classification Methods Utilizing Mahalanobis Taguchi System And A Re-sampling Approach For Imbalanced Data Sets

Ayhan, Dilber 01 April 2009 (has links) (PDF)
Classification approaches are used in many areas to identify or estimate the classes to which different observations belong. In this thesis, the Mahalanobis Taguchi System (MTS) classification approach is analyzed and further improved for multi-class classification problems. MTS seeks to identify significant variables and classifies a new observation based on its Mahalanobis distance (MD). First, sample size problems, which are encountered mostly in small data sets, and multicollinearity problems, which constitute some limitations of MTS, are analyzed, and a re-sampling approach is explored as a solution. The proposed re-sampling approach, which works only for data sets with two classes, combines over-sampling and under-sampling. Over-sampling is based on SMOTE, which generates synthetic observations between the nearest neighbors of observations in the minority class. In addition, MTS models are used to test the performance of several re-sampling parameters, for which the most appropriate values are sought separately for each case. Second, multi-class classification methods based on MTS are developed. One algorithm, Feature Weighted Multi-class MTS-I (FWMMTS-I), is inspired by the feature-weighted MD. It relaxes the equal contribution of all variables to the MD, so that noisy variables receive weights close to zero and do not mask the other variables. As a second multi-class classification algorithm, the original MTS method is extended to multi-class problems, called Multi-class MTS (MMTS). In addition, an approach comparable to that of Su and Hsiao (2009), which also considers variable weights, is studied with a modification in the MD calculation; it is named Feature Weighted Multi-class MTS-II (FWMMTS-II). The methods are compared on eight multi-class data sets using 5-fold stratified cross-validation. Results show that FWMMTS-I is as accurate as MMTS, and both are better than FWMMTS-II. Interestingly, the Mahalanobis Distance Classifier (MDC), which uses all variables directly in the classification model, performed equally well on the studied data sets.
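As background, the Mahalanobis distance underlying both MTS and the MDC baseline measures how far an observation lies from a class, scaled by that class's covariance. A minimal sketch of an MDC with one mean and covariance per class (this is the baseline classifier, not the thesis's MTS variable-selection procedure; the toy data are illustrative):

```python
import numpy as np

def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance of x from a class."""
    d = x - mean
    return float(d @ cov_inv @ d)

def mdc_fit(X, y):
    """Estimate a mean and inverse covariance per class."""
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mean = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        # pinv guards against singular covariances (multicollinearity)
        model[c] = (mean, np.linalg.pinv(cov))
    return model

def mdc_predict(model, x):
    """Assign x to the class with the smallest Mahalanobis distance."""
    return min(model, key=lambda c: mahalanobis_sq(x, *model[c]))

# Toy usage with two small classes (illustrative data only)
X = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2],
              [5.0, 5.0], [5.1, 4.8], [4.8, 5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
model = mdc_fit(X, y)
print(mdc_predict(model, np.array([4.9, 5.1])))  # -> 1
```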
524

The Isoperimetric Problem On Trees And Bounded Tree Width Graphs

Bharadwaj, Subramanya B V 09 1900 (has links)
In this thesis we study the isoperimetric problem on trees and graphs with bounded treewidth. Let G = (V,E) be a finite, simple and undirected graph. For \(S \subseteq V\), let \(\delta(S,G) = \{(u,v) \in E : u \in S,\ v \in V \setminus S\}\) be the edge boundary of S. Given an integer i, 1 ≤ i ≤ |V|, the edge isoperimetric value of G at i is defined as \(b_e(i,G) = \min_{S \subseteq V,\ |S| = i} |\delta(S,G)|\). For \(S \subseteq V\), let \(\Phi(S,G) = \{u \in V \setminus S : \exists v \in S \text{ with } (u,v) \in E\}\) be the vertex boundary of S. Given an integer i, 1 ≤ i ≤ |V|, the vertex isoperimetric value of G at i is defined as \(b_v(i,G) = \min_{S \subseteq V,\ |S| = i} |\Phi(S,G)|\). The edge isoperimetric peak of G is defined as \(b_e(G) = \max_{1 \le i \le |V|} b_e(i,G)\), and similarly the vertex isoperimetric peak as \(b_v(G) = \max_{1 \le i \le |V|} b_v(i,G)\). The problem of determining a lower bound for the vertex isoperimetric peak in complete k-ary trees of depth d, \(T^k_d\), was recently considered in [32]. In the first part of this thesis we provide lower bounds for the edge and vertex isoperimetric peaks in complete k-ary trees which improve those in [32]. Our results are then generalized to arbitrary (rooted) trees. Let i be an integer with 1 ≤ i ≤ |V|. For each i, define the connected edge isoperimetric value and the connected vertex isoperimetric value of G at i by restricting the minimum to sets inducing connected subgraphs: \(b_e^c(i,G) = \min\{|\delta(S,G)| : |S| = i,\ G[S] \text{ is connected}\}\) and \(b_v^c(i,G) = \min\{|\Phi(S,G)| : |S| = i,\ G[S] \text{ is connected}\}\). A meta-Fibonacci sequence is given by the recurrence \(a(n) = a(x_1(n) + a_1'(n-1)) + a(x_2(n) + a_2'(n-2))\), where \(x_i : \mathbb{Z}^+ \to \mathbb{Z}^+\), i = 1, 2, is a linear function of n, and \(a_i'(j) = a(j)\) or \(a_i'(j) = -a(j)\), for i = 1, 2. Sequences belonging to this class have been well studied, but in general their properties remain intriguing. In the second part of the thesis we show an interesting connection between the problem of determining these connected isoperimetric values and certain meta-Fibonacci sequences. In the third part of the thesis we study the problem of determining \(b_e\) and \(b_v\) algorithmically for certain special classes of graphs. Definition 0.1. A tree decomposition of a graph G = (V,E) is a pair \((\{X_i : i \in I\}, T)\), where I is an index set, \(\{X_i : i \in I\}\) is a collection of subsets of V, and T is a tree whose node set is I, such that the following conditions are satisfied: (i) \(\bigcup_{i \in I} X_i = V\); (ii) for every edge \((u,v) \in E\) there is some \(i \in I\) with \(u, v \in X_i\); and (iii) for every \(v \in V\), the set \(\{i \in I : v \in X_i\}\) induces a subtree of T.
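To make the definitions concrete, the isoperimetric values of a small graph can be computed by brute force directly from the definitions above; an exponential-time sketch for illustration only (the thesis's algorithms for trees and bounded-treewidth graphs are far more efficient):

```python
from itertools import combinations

def edge_boundary(S, edges):
    """|delta(S,G)|: edges with exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in edges)

def vertex_boundary(S, edges):
    """|Phi(S,G)|: vertices outside S with a neighbor in S."""
    boundary = set()
    for u, v in edges:
        if u in S and v not in S:
            boundary.add(v)
        elif v in S and u not in S:
            boundary.add(u)
    return len(boundary)

def isoperimetric_values(V, edges, i):
    """b_e(i,G) and b_v(i,G) by exhaustive search over all |S| = i."""
    be = min(edge_boundary(set(S), edges) for S in combinations(V, i))
    bv = min(vertex_boundary(set(S), edges) for S in combinations(V, i))
    return be, bv

# Path on 4 vertices: cutting the middle edge gives b_e(2,G) = 1
V = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(isoperimetric_values(V, edges, 2))  # (1, 1)
```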
525

A Study on Weighted Utilizations of Network Dimensioning Problems

程雅惠, Cheng,Ya Hui Unknown Date (has links)
We consider utility functions for multiple classes of users and multiple qualities of service under fair bandwidth allocation in communication networks, and propose two mathematical optimization models based on weighted utility functions. Both models seek to maximize the sum of weighted utilities; the formulation and numerical experiments are carried out in a general utility-maximizing framework. In this thesis, instead of being fixed, the weight of each utility function is taken as a decision variable, the behavior of the optimal weights is studied, and formulas for the optimal weight distribution are derived. Models I and II are constructed to compare different choices of optimal weights in terms of fairness. For Model I, the total weighted utility rewards only the class with the largest utility value and completely ignores the classes with smaller utilities: the optimal weight of the largest utility function is 1, and all other optimal weights are 0. For Model II, the numerical results reveal the structure of the optimal weights: every weight is equal, and the weights sum to 1. These results are proved and illustrated numerically with the GAMS software.
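The Model I result is an instance of a general fact about maximizing a weighted sum over the probability simplex; as a worked illustration of the principle (a sketch only; the thesis's models also carry routing and bandwidth constraints):

\[
\max_{w \ge 0,\ \sum_i w_i = 1} \ \sum_i w_i u_i \;=\; \max_i u_i,
\]

attained by placing weight 1 on a class attaining the largest utility \(u_i\) and 0 elsewhere, since any weight assigned to a smaller \(u_j\) can be shifted to the maximizer without decreasing the objective.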
526

Approximation Spaces in the Numerical Analysis of Cauchy Singular Integral Equations

Luther, Uwe 01 August 2005 (has links) (PDF)
The paper is devoted to the foundation of approximation methods for integral equations of the form (aI+SbI+K)f=g, where S is the Cauchy singular integral operator on (-1,1) and K is a weakly singular integral operator. Here a, b, g are given functions on (-1,1), and the unknown function f on (-1,1) is sought. It is assumed that a and b are real-valued Hölder continuous functions on [-1,1] without common zeros and that g belongs to some weighted space of Hölder continuous functions; in particular, g may have a finite number of singularities. Based on known spectral properties of Cauchy singular integral operators, approximation methods for the numerical solution of the above equation are constructed, taking into account both theoretical convergence and numerical practicability. The weighted uniform convergence of these methods is studied using a general approach based on the theory of approximation spaces. With the help of this approach it is possible to prove simultaneously the stability, the convergence and results on the order of convergence of the approximation methods under consideration.
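For reference, the Cauchy singular integral operator on (-1,1) is conventionally defined by a principal value integral; a common normalization (the paper's may differ by a constant factor, and versions with \(1/(\pi i)\) also appear in the literature) is

\[
(Sf)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1} \frac{f(t)}{t-x}\,dt, \qquad -1 < x < 1,
\]

so that the equation above reads \(a(x)f(x) + b(x)(Sf)(x) + (Kf)(x) = g(x)\).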
527

A multi-criteria approach to the evaluation of food safety interventions.

Dunn, Alexander Hiram January 2015 (has links)
New Zealand faces a range of food safety hazards. Microbial hazards alone were estimated to cause over 2,000 years of lost healthy life in 2011 (Cressey, 2012) and $62m in medical costs and lost productivity in 2009 (Gadiel & Abelson, 2010). Chemical hazards are thought to be well managed through existing controls (Vannoort & Thomson, 2009), whereas microbial hazards are considered harder to control, primarily due to their ability to reproduce along the food production chain. Microbial hazards are thought to cause the majority of acute foodborne gastroenteritis. This research reviewed food safety literature and official documentation, and conducted 55 interviews, mostly with food safety experts from different stakeholder groups, to examine the food safety decision-making environment in New Zealand. This research explores the concept of the ‘stakeholder’ in the context of food safety decision-making and proposes an inclusive ‘stakeholder’ definition as any group which is able to affect, or be affected by, the decision-making process. Utilising this definition, and guided by interviews, New Zealand stakeholders in food safety decision-making were identified and classified as follows:
• Regulators
• Public health authorities
• Food safety scientists/academics
• Consumers
• Māori
• Food businesses, further classified as:
  o Farmers
  o Processors
  o Food retailers
  o Exporters
Interviews with stakeholders from these groups highlighted twelve criteria as being relevant to multiple groups during food safety intervention evaluation:
• Effectiveness
• Financial cost
• Market access
• Consumer perceptions
• Ease of implementation
• Quality or suitability
• Quality of science
• Equity of costs
• Equity of benefits
• Workplace safety
• Cultural impact
• Animal welfare
There are a number of different ways to measure or assess performance on these criteria. Some can be quantitatively measured, while others may require the use of value judgements. This thesis used the Disability-Adjusted Life Year (DALY) metric for quantifying effectiveness during the testing of different MCDA models. This thesis reviews the MCDA process and the food-safety-specific MCDA literature. There are different ways of conducting MCDA. In particular, a large number of models are available for the aggregation phase: the process of converting model inputs, in the form of criteria scores and weights, into model recommendations. This thesis describes and reviews the main classes of model. The literature review and interview process guided the construction and testing of three classes of MCDA model: the Weighted Sum, Analytic Hierarchy Process (AHP) and PROMETHEE models. These models were selected for their different characteristics and degrees of complexity, as well as their popularity in the food safety and Health Technology Assessment (HTA) literature. Models were tested on the problem of selecting the most appropriate intervention to address the historic Campylobacter in poultry problem in New Zealand during the mid-2000s. Experimentation was conducted on these models to explore how different configurations utilise data and produce model outputs; a sketch of the simplest model class appears after this list. This experimentation included:
• Varying the format of input data
• Exploring the effects of including/excluding criteria
• Methods for sensitivity analysis
• Exploring how data inputs and outputs can be elicited and presented using visual tools
• Creating and using hybrid MCDA models
The results of this testing are a key output of this thesis and provide insight into how such models might be used in food safety decision-making. The conclusions reached throughout this research phase fall into one of two broad groups:
• Those relating to MCDA as a holistic process/methodology for decision-making
• Those relating to the specific models and mathematical procedures for generating numerical inputs and outputs
This thesis demonstrates that food-safety decision-making is a true multi-criteria, multi-stakeholder problem. The different stakeholders in food-safety decision-making do not always agree on the value and importance of the attributes used to evaluate competing intervention schemes. MCDA is well suited to cope with such complexity, as it provides a structured methodology for the systematic and explicit identification, recording and aggregation of qualitative and quantitative information, gathered from a number of different sources, with the output able to serve as a basis for decision-making. The MCDA models studied in this thesis range from models that are simple and quick to construct and use, to more time-consuming models with sophisticated algorithms. The type of model used for MCDA, the way these models are configured and the way inputs are generated or elicited can have a significant impact on the results of an analysis. This thesis identifies a number of key methodological considerations for those looking to employ one of the many available MCDA models:
• Whether a model can accommodate the type and format of input data
• The desired degree of compensation between criteria (i.e. full, partial or no compensation)
• Whether the goal of an analysis is the identification of a ‘best’ option(s), or the facilitation of discussion and communication of data
• The degree of transparency required from a model and whether an easily understood audit trail is desired/required
• The desired output of a model (e.g. complete or partial ranking)
This thesis also identifies a number of practical considerations when selecting which model to use in food safety decision-making:
• The amount of time and energy required of stakeholders in the generation of data inputs (elicitation burden)
• The degree of training required for participants
• How data inputs are to be elicited and aggregated in different group decision-making environments
• The availability of MCDA software for assisting an analysis
Considering the above points will assist users in selecting a suitable MCDA model that meets their requirements and constraints. This thesis provides original and practical knowledge to assist groups or individuals looking to employ MCDA in the context of food-safety intervention decision-making. This research could also serve as a guide for those looking to evaluate a different selection of MCDA models.
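As an illustration of the simplest of the three model classes tested, a weighted-sum MCDA ranks interventions by the weighted total of their normalized criteria scores; a minimal sketch in which the interventions, criteria subset, scores and weights are all hypothetical, not taken from the thesis:

```python
# Hypothetical interventions scored (0-1, higher is better) on three
# of the twelve criteria; scores and weights are illustrative only.
scores = {
    "freezing":         {"effectiveness": 0.8, "financial_cost": 0.4, "market_access": 0.6},
    "improved_hygiene": {"effectiveness": 0.5, "financial_cost": 0.9, "market_access": 0.8},
    "irradiation":      {"effectiveness": 0.9, "financial_cost": 0.3, "market_access": 0.2},
}
weights = {"effectiveness": 0.5, "financial_cost": 0.3, "market_access": 0.2}

def weighted_sum(option_scores, weights):
    """Full-compensation aggregation: a weak criterion can be
    offset by strong performance elsewhere."""
    return sum(weights[c] * s for c, s in option_scores.items())

ranking = sorted(scores, key=lambda o: weighted_sum(scores[o], weights), reverse=True)
for option in ranking:
    print(option, round(weighted_sum(scores[option], weights), 3))
```

The full-compensation behavior noted in the comment is one of the methodological considerations listed above; AHP and PROMETHEE handle compensation between criteria differently.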
528

A Combinatorial Algorithm for Minimizing the Maximum Laplacian Eigenvalue of Weighted Bipartite Graphs

Helmberg, Christoph, Rocha, Israel, Schwerdtfeger, Uwe 13 November 2015 (has links) (PDF)
We give a strongly polynomial time combinatorial algorithm to minimise the largest eigenvalue of the weighted Laplacian of a bipartite graph. This is accomplished by solving the dual graph embedding problem which arises from a semidefinite programming formulation. In particular, the problem for trees can be solved in time cubic in the number of vertices.
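In the usual formulation of such eigenvalue optimization problems, nonnegative edge weights with a fixed total are chosen to minimize the largest Laplacian eigenvalue; a sketch under these assumptions (the paper's exact normalization and constraints may differ):

\[
\min_{w \ge 0,\ \sum_{e \in E} w_e = 1}\ \lambda_{\max}\!\left( \sum_{\{u,v\} \in E} w_{uv}\, (e_u - e_v)(e_u - e_v)^{\top} \right),
\]

where \(e_u\) denotes the standard unit vector for vertex u. Because \(\lambda_{\max}\) is convex and the weighted Laplacian is linear in w, the problem is a semidefinite program, whose dual is the graph embedding problem mentioned above.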
529

Efficient management of textual information: indexing, storage, processing and applications

Θεοδωρίδης, Ευάγγελος 03 July 2009 (has links)
The main goal of this thesis is to explore the possibilities of the area of computer science concerned with storing and processing information, within the environment shaped by modern applications. In recent years, the information available in electronic form has grown enormously, making it necessary to develop new techniques for its efficient storage and processing. Two characteristic and important applications in which new problems constantly arise are the management of biological data, such as genome sequences, and the management of information from the Web, such as HTML and XML documents or URLs. The objective is to develop index structures over this information so that the associated queries can be answered efficiently, much faster than by searching through the data exhaustively. Characteristic queries of this kind are pattern matching and motif extraction. More specifically, this thesis focuses on the following topics:
- Locating periodicities in strings. A series of algorithms is given for extracting periodicities from strings: algorithms for extracting maximal repetitions, and the period, cover and seed of a string. These algorithms are based on the suffix tree, and most of them run in linear time.
- Indexing weighted sequences. The study then focuses on indexing weighted sequences and on answering queries over them, such as pattern matching, finding repetitions and finding covers. A weighted sequence is a sequence in which every symbol of the alphabet may occur at each position, each with a given weight. Weighted sequences represent biological sequences of nucleotides or amino acids; in essence, the weight describes the probability of a symbol appearing at a given position, or certain biological properties of regulatory proteins at each position of the sequence. To manage these sequences, an index structure called the Weighted Suffix Tree is proposed, a tree with structural properties similar to those of the generalized suffix tree. This thesis defines the weighted suffix tree and gives construction algorithms that run in linear time and space.
- Motif extraction from weighted sequences. Using the weighted suffix tree, a set of algorithms is implemented for extracting repetitive structures from weighted sequences. More specifically, algorithms are given for finding maximal pairs, repeated motifs, and motifs common to more than one weighted sequence.
- Recommendation algorithms for web pages using string processing techniques. Several web applications (recommendation or caching systems) try to predict a visitor's intentions, either to recommend or to preload a page, by exploiting whatever experience has been recorded in the system from previous accesses. A new way of indexing and representing the information extracted from the available data is proposed, based on user accesses recorded in log files and on page content. To mine knowledge from these data, they are represented as strings, processed as weighted sequences, and indexed by a generalized weighted suffix tree. This tree efficiently condenses the most frequent and most significant access patterns and, once built, is used as a model for predicting the movements of a site's visitors.
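To make the weighted-sequence model concrete, a common convention (assumed here for illustration; the thesis's exact definition may differ) is that a pattern occurs at a position if the product of the per-position symbol probabilities meets a threshold; a naive sketch without any index structure:

```python
def pattern_occurrences(weighted_seq, pattern, threshold):
    """weighted_seq: list of dicts mapping symbol -> probability,
    one dict per position. Returns start positions where the
    product of the pattern symbols' probabilities >= threshold."""
    hits = []
    for start in range(len(weighted_seq) - len(pattern) + 1):
        prob = 1.0
        for offset, symbol in enumerate(pattern):
            prob *= weighted_seq[start + offset].get(symbol, 0.0)
            if prob < threshold:
                break  # prune: product can only shrink
        if prob >= threshold:
            hits.append(start)
    return hits

# A 4-position weighted DNA sequence (probabilities per position).
seq = [{"A": 0.9, "C": 0.1}, {"A": 0.5, "T": 0.5},
       {"G": 1.0}, {"A": 0.8, "G": 0.2}]
print(pattern_occurrences(seq, "AAG", 0.25))  # [0]: 0.9*0.5*1.0 = 0.45
```

The weighted suffix tree plays the role of an index that answers such queries without scanning every position.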
530

Data-driven estimation for Aalen's additive risk model

Boruvka, Audrey 02 August 2007 (has links)
The proportional hazards model developed by Cox (1972) is by far the most widely used method for regression analysis of censored survival data. Application of the Cox model to more general event history data has become possible through extensions using counting process theory (e.g., Andersen and Borgan (1985), Therneau and Grambsch (2000)). With its development based entirely on counting processes, Aalen’s additive risk model offers a flexible, nonparametric alternative. Ordinary least squares, weighted least squares and ridge regression have been proposed in the literature as estimation schemes for Aalen’s model (Aalen (1989), Huffer and McKeague (1991), Aalen et al. (2004)). This thesis develops data-driven parameter selection criteria for the weighted least squares and ridge estimators. Using simulated survival data, these new methods are evaluated against existing approaches. A survey of the literature on the additive risk model and a demonstration of its application to real data sets are also provided. / Thesis (Master, Mathematics & Statistics) -- Queen's University, 2007-07-18 22:13:13.243
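For context, Aalen's additive risk model lets the regression effects vary freely over time, and the least squares estimator targets the cumulative regression functions; a sketch of the standard formulation (notation assumed, not taken from the thesis):

\[
\lambda_i(t) = Y_i(t)\,\big(\beta_0(t) + \beta_1(t)\,x_{i1} + \cdots + \beta_p(t)\,x_{ip}\big), \qquad
\widehat{B}(t) = \sum_{t_k \le t} \big(X(t_k)^{\top} X(t_k)\big)^{-1} X(t_k)^{\top}\, \Delta N(t_k),
\]

where \(Y_i(t)\) is the at-risk indicator, \(X(t)\) is the matrix whose i-th row is \(Y_i(t)\,(1, x_{i1}, \dots, x_{ip})\), \(N\) counts observed events, and \(\widehat{B}(t)\) estimates \(B(t) = \int_0^t \beta(s)\,ds\). The weighted least squares and ridge estimators studied in the thesis modify the matrix inversion step.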
