851

Routing Strategies for Multihop Wireless Relaying Networks

Babaee, Ramin 06 1900
Multihop routing is an effective method for establishing connectivity between the nodes of a network. End-to-end outage probability and total power consumption are used as the optimization criteria for routing protocol design in multihop networks, based on local channel state information measured at the nodes. The analysis shows that employing instantaneous channel state information in routing design yields a significant performance improvement in multihop communication, e.g., achieving full diversity order when the optimization criterion is outage performance. The routing metrics derived from these optimization problems cannot be optimized in a distributed manner; by establishing an alternative framework, they are converted into new composite metrics that satisfy the optimality and convergence requirements for implementation in distributed environments. The analysis shows that the running time of the proposed distributed algorithm is bounded by a polynomial. / Communications
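To illustrate why such composite metrics matter, here is a sketch (assuming hypothetical per-hop success probabilities, not the thesis's actual metrics): the end-to-end success probability of a path is the product of its per-hop success probabilities, which is not additive, but taking -log converts it into an additive weight that hop-by-hop Bellman-Ford relaxation, the basis of distributed distance-vector routing, can optimize.

```python
import math

def bellman_ford_outage(nodes, links, src):
    """Route selection on a composite additive metric: maximizing the
    product of per-hop success probabilities equals minimizing the sum
    of -log(p) per hop, so distance-vector relaxation applies."""
    cost = {n: math.inf for n in nodes}
    pred = {n: None for n in nodes}
    cost[src] = 0.0
    for _ in range(len(nodes) - 1):          # each round mimics one message exchange
        for (u, v), p in links.items():
            w = -math.log(p)                 # composite additive weight
            if cost[u] + w < cost[v]:
                cost[v] = cost[u] + w
                pred[v] = u
    # convert accumulated costs back to end-to-end success probabilities
    return {n: math.exp(-c) for n, c in cost.items()}, pred

# toy 4-node relay network: the direct hop is weak, the two-hop relay is strong
nodes = ["S", "R1", "R2", "D"]
links = {("S", "D"): 0.70, ("S", "R1"): 0.99, ("R1", "D"): 0.97,
         ("S", "R2"): 0.95, ("R2", "D"): 0.90}
succ, pred = bellman_ford_outage(nodes, links, "S")
print(round(succ["D"], 4), pred["D"])   # 0.9603 via relay R1, beating the 0.70 direct hop
```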
852

A data clustering algorithm for stratified data partitioning in artificial neural networks

Sahoo, Ajit Kumar 06 1900
The statistical properties of training, validation and test data play an important role in assuring optimal performance in artificial neural networks (ANN). Researchers have proposed randomized data partitioning (RDP) and stratified data partitioning (SDP) methods for partitioning input data into training, validation and test datasets. RDP methods based on genetic algorithms (GA) are computationally expensive, as the random search space can be of the order of 10^20 or more for an average-sized dataset. For SDP methods, clustering algorithms such as the self-organizing map (SOM) and fuzzy clustering (FC) are used to form strata, under the assumption that data points in any individual stratum are in close statistical agreement. Reported clustering algorithms are designed to form natural clusters; in large multivariate datasets, some of these natural clusters can be big enough that the furthest data vectors are statistically far from the mean, and these algorithms are computationally expensive as well. Here a custom design clustering algorithm (CDCA) is proposed to overcome these shortcomings. Comparisons were made using three benchmark case studies, one each from the classification, function approximation and prediction domains. The proposed CDCA data partitioning method was evaluated against SOM-, FC- and GA-based data partitioning methods, and was found not only to perform well but also to reduce the average CPU time. / Engineering Management
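A minimal sketch of the stratified-partitioning idea (with k-means standing in for the strata-forming step; the thesis's CDCA is a custom alternative to SOM, FC and the like): cluster the data into strata, then split each stratum proportionally so that training, validation and test sets share the same statistical structure.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_split(X, n_strata=8, fracs=(0.6, 0.2, 0.2), seed=0):
    """Stratified data partitioning: form strata by clustering, then cut
    each stratum proportionally into train/validation/test indices."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_strata, n_init=10, random_state=seed).fit_predict(X)
    parts = [[], [], []]
    for s in range(n_strata):
        idx = rng.permutation(np.flatnonzero(labels == s))
        c1 = int(len(idx) * fracs[0])
        c2 = c1 + int(len(idx) * fracs[1])
        for part, chunk in zip(parts, (idx[:c1], idx[c1:c2], idx[c2:])):
            part.extend(chunk)
    return [np.asarray(p) for p in parts]   # index arrays: train, validation, test

X = np.random.default_rng(1).standard_normal((500, 4))
train, val, test = stratified_split(X)
print(len(train), len(val), len(test))      # roughly 300 / 100 / 100
```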
853

Resource Allocation for OFDMA-based multicast wireless systems

Ngo, Duy Trong 11 1900
Much of the research effort on resource allocation in OFDMA-based wireless communication systems focuses on finding efficient power control and subcarrier assignment policies. For systems employing multicast transmission, the schemes available in the literature are not always applicable; moreover, the existing approaches become impractical when a large number of OFDM subcarriers is utilized, as the required computational burden is prohibitively high. The ultimate goal of this research is therefore to propose affordable mechanisms for flexibly and effectively sharing the available resources in multicast wireless systems deploying OFDMA technology. Specifically, we study the resource distribution problems in both conventional and cognitive radio network settings, formulate the design problems as mathematical optimization programs, and then offer solution methods. Suboptimal and optimal schemes with high performance and acceptable complexity are devised through the application of various mathematical optimization tools such as genetic algorithms and Lagrangian dual optimization. The novelty of the proposed approaches is confirmed, and their performance is verified by computer simulation, with numerical examples presented to support the findings. / Communications
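As a concrete baseline (a greedy heuristic sketch, not the thesis's genetic-algorithm or Lagrangian-dual schemes), consider assigning each subcarrier to the multicast group that can use it best; a multicast stream must be decodable by every member, so a group's rate on a subcarrier is set by its weakest member's channel.

```python
import numpy as np

def greedy_multicast_assign(gains, groups, power_per_sc=1.0, noise=1.0):
    """Greedy subcarrier assignment for multicast OFDMA.

    gains:  (n_users, n_subcarriers) channel power gains
    groups: list of user-index lists, one per multicast group
    """
    n_sc = gains.shape[1]
    assign = np.empty(n_sc, dtype=int)
    rates = np.zeros(len(groups))
    for k in range(n_sc):
        # group rates on subcarrier k, each limited by its weakest member
        r = [np.log2(1 + power_per_sc * gains[g, k].min() / noise) for g in groups]
        assign[k] = int(np.argmax(r))
        rates[assign[k]] += r[assign[k]]
    return assign, rates

rng = np.random.default_rng(2)
gains = rng.exponential(1.0, size=(6, 16))   # Rayleigh-fading power gains, 6 users
assign, rates = greedy_multicast_assign(gains, groups=[[0, 1, 2], [3, 4, 5]])
print(assign)   # which group got each of the 16 subcarriers
print(rates)    # resulting sum rate per group (bits/s/Hz)
```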
854

The Prouhet-Tarry-Escott problem

Caley, Timothy January 2012
Given natural numbers n and k, with n>k, the Prouhet-Tarry-Escott (PTE) problem asks for distinct subsets of Z, say X={x_1,...,x_n} and Y={y_1,...,y_n}, such that

x_1^i + ... + x_n^i = y_1^i + ... + y_n^i  for i = 1, ..., k.

Many partial solutions to this problem were found in the late 19th and early 20th centuries. When k=n-1, we call a solution X =_{n-1} Y ideal, and this is considered to be the most interesting case. Ideal solutions have been found using elementary methods, elliptic curves, and computational techniques. This thesis focuses on the ideal case. We extend the framework of the problem to number fields, and prove generalizations of results from the literature. This information is used along with computational techniques to find ideal solutions to the PTE problem in the Gaussian integers. We also extend a computation from the literature and find new lower bounds for the constant C_n associated to ideal PTE solutions. Further, we present a new algorithm that determines whether an ideal PTE solution with a particular constant exists. This algorithm improves the upper bounds for C_n and, in fact, completely determines the value of C_6. We also examine the connection between elliptic curves and ideal PTE solutions, and use quadratic twists of curves that appear in the literature to find ideal PTE solutions over number fields.
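The defining condition is easy to check directly; the small sketch below verifies a classical ideal solution with n=6 (so k=5), whose power sums agree up to degree 5 but not degree 6:

```python
def is_pte(X, Y, k):
    """Check the PTE condition: equal power sums for i = 1..k."""
    return all(sum(x**i for x in X) == sum(y**i for y in Y)
               for i in range(1, k + 1))

# a classical ideal PTE pair of size n = 6
X = [0, 5, 6, 16, 17, 22]
Y = [1, 2, 10, 12, 20, 21]
print(is_pte(X, Y, 5), is_pte(X, Y, 6))   # True False: ideal of degree k = n - 1 = 5
```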
855

Computer-Enhanced Knowledge Discovery in Environmental Science

Fukuda, Kyoko January 2009
Encouraging the use of computer algorithms - by developing new algorithms and by introducing little-known ones - for environmental science problems is a significant contribution, as it provides knowledge discovery tools that extract new aspects of results and draw new insights beyond those available from general statistical methods. Conducting analysis with appropriately chosen methods, in terms of quality of performance and results, computation time, flexibility and applicability to data of various natures, helps decision making in the policy development and management process for environmental studies. This thesis has three fundamental aims. The first is to develop a flexibly applicable attribute selection method, Tree Node Selection (TNS), and a decision tree assessment tool, Tree Node Selection for assessing decision tree structure (TNS-A), both of which use decision trees pre-generated by the widely used C4.5 algorithm as their information source to identify important attributes from data. TNS supports cost-effective and efficient data collection and policy making by selecting fewer, but important, attributes, while TNS-A provides a tool for assessing decision tree structure to extract information on the relationships between attributes and decisions. The second aim is to introduce new, theoretical or little-known computer algorithms, such as the K-Maximum Subarray Algorithm (K-MSA) and Ant-Miner, adjusting them to maximize their applicability and practicality for environmental science problems so as to bring new insights; as shown in the sketch below, the core of K-MSA can be conveyed in a few lines. Additionally, an advanced statistical and mathematical method, Singular Spectrum Analysis (SSA), is demonstrated as a data pre-processing step that helps improve C4.5 results on noisy measurements. The third aim is to promote, encourage and motivate environmental scientists to use the ideas and methods developed in this thesis. The methods were tested on benchmark data and on various real environmental science problems: sea container contamination, the Weed Risk Assessment model and weed spatial analysis for New Zealand Biosecurity, air pollution, climate and health, and defoliation imagery. The outcome of this thesis is to introduce the concept and techniques of data mining - the process of knowledge discovery from databases - to environmental science researchers in New Zealand and overseas, through collaboration on future research and, together with future policy and management, to help maintain and sustain a healthy environment to live in.
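The sketch shows the classical K=1 base case of the maximum-subarray problem (Kadane's algorithm), which finds the single contiguous run with the largest sum; K-MSA generalizes this to the K largest such runs, e.g., to locate the most anomalous stretches in a measurement series.

```python
def max_subarray(xs):
    """Kadane's algorithm: largest sum of a non-empty contiguous
    subarray, returned with its (start, end) indices."""
    best_sum, best_span = xs[0], (0, 0)
    cur_sum, cur_start = xs[0], 0
    for i in range(1, len(xs)):
        if cur_sum < 0:                      # restarting beats extending
            cur_sum, cur_start = xs[i], i
        else:
            cur_sum += xs[i]
        if cur_sum > best_sum:
            best_sum, best_span = cur_sum, (cur_start, i)
    return best_sum, best_span

# e.g. deviations of pollution readings from a baseline (hypothetical data)
print(max_subarray([-2, 1, 3, -1, 4, -5, 2]))   # (7, (1, 4))
```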
856

Application of the CLEAN algorithm in combination with factor analysis for the study of time series of industrial wastewater treatment parameters

Βαλλιανάτου, Σπυριδούλα 17 September 2012
The aim of this study is the development of a methodological scheme combining the CLEAN algorithm and factor analysis for the study of time series of parameters from an industrial wastewater treatment plant. The CLEAN algorithm is used to fill gaps in the raw data so as to produce continuous time series, while factor analysis detects the relations between the original variables. Reconstructing the time series highlights the most important information about the temporal variation of the dominant treatment processes, and these results are reinforced by the application of factor analysis. The study proposes a relatively simple and effective methodological scheme for better monitoring of the operation of wastewater treatment plants. Thirteen parameters, collected over a period of eight years, were analysed; in total 37,960 data points were processed with CLEAN spectral analysis and factor analysis. The CLEAN algorithm successfully filled all the gaps in the raw data, and factor analysis then showed that seven processes control the operation of the treatment plant over time. CLEAN spectral analysis combined with multivariate statistical factor analysis constitutes an effective and extremely useful tool for the study of environmental data.
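A heavily simplified sketch of the CLEAN idea for gap filling (an iterative spectral deconvolution in the spirit of the algorithm named above; the frequency grid, gain and iteration count here are illustrative assumptions): repeatedly locate the dominant frequency of the residuals on the irregular time grid, least-squares fit a sinusoid there, subtract a fraction of it (the CLEAN gain), and accumulate the components into a model evaluated at the gap positions.

```python
import numpy as np

def clean_fill(t_obs, x_obs, t_gap, n_iter=100, gain=0.3, f_max=0.5, n_freq=2000):
    """CLEAN-style gap filling for an unevenly sampled series (sketch)."""
    mean = x_obs.mean()
    resid = x_obs - mean
    model_gap = np.full(len(t_gap), mean)
    freqs = np.linspace(1.0 / (np.ptp(t_obs) + 1.0), f_max, n_freq)
    for _ in range(n_iter):
        # "dirty spectrum" of the residuals on the irregular time grid
        spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t_obs)) @ resid)
        f = freqs[np.argmax(spec)]
        A = np.column_stack([np.cos(2*np.pi*f*t_obs), np.sin(2*np.pi*f*t_obs)])
        (a, b), *_ = np.linalg.lstsq(A, resid, rcond=None)
        resid -= gain * (A @ np.array([a, b]))                 # CLEAN subtraction
        model_gap += gain * (a*np.cos(2*np.pi*f*t_gap) + b*np.sin(2*np.pi*f*t_gap))
    return model_gap

# demo: a two-tone signal observed with a 100-sample hole in the middle
t = np.arange(400.0)
x = np.sin(2*np.pi*0.013*t) + 0.5*np.sin(2*np.pi*0.071*t)
keep = (t < 150) | (t > 250)
filled = clean_fill(t[keep], x[keep], t[~keep])
print(round(float(np.abs(filled - x[~keep]).mean()), 3))   # small mean error
```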
857

Sparse signal processing techniques and applications to telecommunications problems

Μπερμπερίδης, Δημήτρης 08 January 2013
This thesis is divided into two parts. The first part studies the subject of compressed sensing; the text focuses on the key points of the theory of reconstructing sparse signals from few measurements, and reviews the reconstruction techniques. The second part presents the results of research on specific reconstruction problems.
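One of the standard reconstruction techniques in the compressed sensing literature is Orthogonal Matching Pursuit; the minimal sketch below (a generic textbook version, not necessarily the variant studied in the thesis) recovers a k-sparse vector x from the underdetermined measurements y = Ax:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build the support of a
    k-sparse x, re-fitting y on the chosen columns at every step."""
    resid, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))    # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# demo: recover a 5-sparse length-200 vector from 60 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x_true, k=5)
print(round(float(np.max(np.abs(x_hat - x_true))), 6))   # ~0: exact recovery
```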
858

Robust sampling-based conflict resolution for commercial aircraft in airport environments

Van den Aardweg, William 03 1900
Thesis (MEng)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: This thesis presents a robust, sampling-based path planning algorithm for commercial airliners that simultaneously performs collision avoidance with both intruder aircraft and terrain. The existing resolution systems implemented on commercial airliners are fast and reliable; however, they do possess certain limitations. This thesis proposes an algorithm that is capable of rectifying some of these limitations. The development and research required to derive this conflict resolution system are supplied in the document, including a detailed literature study explaining the selection of the final algorithm. The proposed algorithm applies an incremental sampling-based technique to determine a safe path quickly and reliably, and makes use of a local planning method to ensure that the paths proposed by the system are indeed flyable. Additional search optimisation techniques are implemented to reduce the computational complexity of the algorithm. As the number of samples increases, the algorithm strives towards an optimal solution, thereby deriving a safe, near-optimal path that avoids the predicted conflict region. The development and justification of the different methods used to adapt the basic algorithm for application as a conflict resolution system are described in depth. The final system is simulated using a simplified aircraft model. The simulation results show that the proposed algorithm is able to resolve various conflict scenarios successfully, including the generic two-aircraft scenario, a terrain-only scenario, a two-aircraft-with-terrain scenario and a multiple-aircraft-and-terrain scenario. The developed algorithm is tested in cluttered dynamic environments to ensure that it is capable of dealing with airport scenarios. A statistical analysis of the simulation results shows that the algorithm finds an initial resolution path quickly and reliably, while utilising all additional computation time to strive towards a near-optimal solution.
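The incremental sampling-based family the thesis builds on can be sketched in a few lines; below is a bare-bones 2-D RRT with a circular obstacle (illustrative only: the actual system uses an optimising variant with a local planner that enforces flyable, kinematically feasible paths).

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, n_samples=5000, goal_tol=0.5):
    """Minimal 2-D RRT: extend the tree's nearest node a fixed step
    towards each random sample, rejecting points inside obstacles."""
    def clear(p):
        return all(math.dist(p, c) > r for c, r in obstacles)
    nodes, parent = [start], {0: None}
    for _ in range(n_samples):
        sample = (random.uniform(0, 20), random.uniform(0, 20))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        d = math.dist(nodes[i], sample)
        new = sample if d <= step else tuple(
            a + step * (b - a) / d for a, b in zip(nodes[i], sample))
        if not clear(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:    # reached: walk back up the tree
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

random.seed(4)
path = rrt(start=(1, 1), goal=(18, 18), obstacles=[((10, 10), 4.0)])
print(len(path) if path else "no path found")   # waypoints skirting the obstacle
```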
859

De novo sequencing of heparan sulfate saccharides using high-resolution tandem mass spectrometry

Hu, Han 12 March 2016
Heparan sulfate (HS) is a class of linear, sulfated polysaccharides located on the cell surface, in secretory granules, and in extracellular matrices in all animal organ systems. It consists of alternating, repeating disaccharide units and is expressed in animal species ranging from hydra to higher vertebrates, including humans. HS binds and mediates the biological activities of over 300 proteins, including growth factors, enzymes, chemokines, cytokines, adhesion and structural proteins, lipoproteins and amyloid proteins. These binding events largely depend on the fine structure of HS chains - the arrangement of sulfate groups and other variations. With electron-activated dissociation (ExD) high-resolution tandem mass spectrometry, in which covalent bonds of the HS oligosaccharide ions are dissociated in the mass spectrometer, researchers can acquire rich structural information about an HS molecule. This information is complex, however, owing to the large number of product ions, and contains a degree of ambiguity due to overlapping product-ion masses and the lability of sulfate groups; as a result, manual interpretation of the spectra faces a serious barrier, and the interpretation of such data creates a serious bottleneck to understanding the biological roles of HS. To solve this problem, I designed HS-SEQ, the first HS sequencing algorithm for high-resolution tandem mass spectra. HS-SEQ allows rapid and confident sequencing of HS chains from millions of candidate structures, and I validated its performance using multiple known pure standards. In many cases, HS oligosaccharides exist as mixtures of sulfation positional isomers; I therefore designed MULTI-HS-SEQ, an extended version of HS-SEQ targeting spectra that come from more than one HS sequence. I also developed several pre-processing and post-processing modules to support the automatic identification of HS structure. These methods and tools demonstrate the capacity for large-scale HS sequencing, which should contribute to clarifying the rich information encoded by HS chains, as well as to developing tailored HS drugs that target a wide spectrum of diseases.
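The core sequencing step can be caricatured as scoring candidate sulfation patterns against observed fragment masses. The sketch below is purely illustrative: the residue and sulfate masses are placeholder values and the fragment model keeps only glycosidic prefix fragments, whereas HS-SEQ works with full monosaccharide chemistry and richer ExD fragmentation.

```python
import itertools

HEXA, SULF, H2O = 176.032, 79.957, 18.011   # placeholder monoisotopic masses (Da)

def prefix_masses(sulfates):
    """Glycosidic prefix-fragment masses for a chain whose i-th residue
    carries sulfates[i] sulfate groups."""
    masses, m = [], H2O
    for s in sulfates:
        m += HEXA + s * SULF
        masses.append(round(m, 3))
    return masses

def score(candidate, peaks, tol=0.02):
    """Number of candidate fragment masses matched by an observed peak."""
    return sum(any(abs(m - p) <= tol for p in peaks) for m in prefix_masses(candidate))

# rank all placements of 0-2 sulfates per residue on a tetrasaccharide
peaks = prefix_masses((2, 0, 1, 1))                   # simulated "spectrum"
ranked = sorted(itertools.product(range(3), repeat=4),
                key=lambda c: -score(c, peaks))
print(ranked[0])   # (2, 0, 1, 1): the generating pattern scores best
```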
860

Applying high performance computing to profitability and solvency calculations for life assurance contracts

Tucker, Mark January 2018
Throughout Europe, the introduction of Solvency II is forcing companies in the life assurance and pensions provision markets to change how they estimate their liabilities. Historically, each solvency assessment required that the estimation of liabilities was performed once, using actuaries' views of economic and demographic trends. Solvency II requires that each assessment of solvency implies a 1-in-200 chance of not being able to meet the liabilities. The underlying stochastic nature of these requirements has introduced significant challenges if the required calculations are to be performed correctly, without resorting to excessive approximations, within practical timescales. Currently, practitioners within UK pension provision companies consider the calculations required to meet new regulations to be outside the realms of anything which is achievable. This project brings the calculations within reach: this thesis shows that it is possible to perform the required calculations in manageable time scales, using entirely reasonable quantities of hardware. This is achieved through the use of several techniques: firstly, a new algorithm has been developed which reduces the computational complexity of the reserving algorithm from O(T²) to O(T) for T projection steps, and is sufficiently general to be applicable to a wide range of non unit-linked policies; secondly, efficient ab initio code, which may be tuned to optimise its performance on many current architectures, has been written; thirdly, approximations which do not change the result by a significant amount have been introduced; and, finally, high performance computers have been used to run the code. This project demonstrates that the calculations can be completed in under three minutes when using 12,000 cores of a supercomputer, or in under eight hours when using 80 cores of a moderately sized cluster.
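The flavour of an O(T²)-to-O(T) reduction can be shown with a toy deterministic reserving example (a hypothetical illustration, not the thesis's algorithm, which covers a wide range of non unit-linked policies): instead of re-summing discounted future cashflows at every projection step, a single backward recursion carries the reserve along.

```python
def reserves_naive(cashflows, v):
    """O(T^2): at each time t, re-sum the discounted future cashflows."""
    T = len(cashflows)
    return [sum(cf * v ** (s - t) for s, cf in enumerate(cashflows) if s > t)
            for t in range(T)]

def reserves_linear(cashflows, v):
    """O(T): one backward pass using V_t = v * (CF_{t+1} + V_{t+1})."""
    T = len(cashflows)
    res = [0.0] * T
    for t in range(T - 2, -1, -1):
        res[t] = v * (cashflows[t + 1] + res[t + 1])
    return res

cfs = [100.0] * 10          # level annual cashflows
v = 1 / 1.04                # discount factor at 4% p.a.
print(max(abs(a - b) for a, b in
          zip(reserves_naive(cfs, v), reserves_linear(cfs, v))))   # ~0
```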
