  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Efficient Algorithms for the Maximum Convex Sum Problem

Thaher, Mohammed Shaban Atieh January 2014 (has links)
This research is designed to develop and investigate two newly defined problems: the Maximum Convex Sum (MCS) and its generalisation, the K-Maximum Convex Sum (K-MCS), in a two-dimensional (2D) array, based on dynamic programming. The study centres on finding the most informative portion of an array, as defined by the parameters of the data; this thesis refers to this generic task as the Maximum Sum Problem (MSP). The concept originates in the Maximum Sub-Array (MSA) problem, which relies on rectangular regions to find the informative array portion; both MSA and MCS are therefore instances of the MSP. This research takes a new stand by using an alternative shape, the convex shape, in the MSP context. Since 1977 there has been substantial research on the MSA problem, aimed at finding informative sub-array portions in the best possible time complexity. Conventionally, the research norm has been to use the rectangular shape in the MSA framework, with no investigation of alternative shapes for the MSP. Theoretically, other shapes could improve the MSP outcome and its utility in applications, yet research has rarely discussed this. Advocating a different shape in the MSP context requires rigorous investigation, as well as a platform from which to launch a new exploratory research area that can then be developed further by considering the implications and practicality of the new approach. This thesis strives to open up a new research frontier based on using the convex shape in the MSP context.
This research defines the new MCS problem in 2D; develops and evaluates algorithms that solve the MCS problem in the best possible time complexity; incorporates techniques to advance the MCS algorithms; generalises the MCS problem to cover the K-Disjoint Maximum Convex Sums (K-DMCS) problem and the K-Overlapping Maximum Convex Sums (K-OMCS) problem; and eventually implements the MCS algorithmic framework on real data in an ecology application. Thus, this thesis provides a theoretical and practical framework that scientifically contributes to addressing some of the research gaps in the MSP and in the new research path, the MCS problem. The MCS and K-MCS algorithmic models depart from the rectangular shape used in MSA, yet retain a time complexity within the best known time complexities of the MSA algorithms. Future in-depth studies of the MCS problem can build on the algorithms developed in this thesis and improve their time complexity.
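As context for the departure from rectangles, the rectangular MSA baseline that the convex-shape work builds on can be sketched as follows: the standard O(rows² · cols) reduction of the 2D problem to Kadane's 1D algorithm. This is a generic illustration of the MSA problem, not the thesis's convex-shape algorithms.

```python
def max_subarray_1d(a):
    # Kadane's algorithm: best sum over all contiguous runs of a.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_2d(grid):
    # Fix a top and bottom row, collapse the strip into per-column sums,
    # then run Kadane's 1D algorithm on that column-sum vector.
    rows, cols = len(grid), len(grid[0])
    best = grid[0][0]
    for top in range(rows):
        col_sums = [0] * cols
        for bottom in range(top, rows):
            for c in range(cols):
                col_sums[c] += grid[bottom][c]
            best = max(best, max_subarray_1d(col_sums))
    return best
```

The triple loop gives the cubic behaviour that both the MSA literature and the MCS algorithms aim to match or beat while changing the region shape.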
22

Sub-cubic Time Algorithm for the k-disjoint Maximum subarray Problem

Lee, Sang Myung (Chris) January 2011 (has links)
The maximum subarray problem is to find the portion of an array that maximizes the sum of the elements within it. The problem was first introduced by Grenander and brought to computer science by Bentley in 1984. It has since branched into related problems: the k-overlapping maximum subarray problem, where overlapping solutions are allowed, and the k-disjoint maximum subarray problem, where all solutions must be disjoint from each other. For the k-overlapping maximum subarray problem, significant improvements have been made since the problem was first introduced. For the k-disjoint maximum subarray problem, Ruzzo and Tompa gave an O(n) time solution for one dimension. That solution is, however, difficult to extend to two dimensions. While a trivial O(kn^3) time solution is easily obtainable in two dimensions, little work has been done to improve on it. This thesis introduces a faster algorithm for the k-disjoint maximum subarray problem under the conventional RAM model, based on distance matrix multiplication (DMM). A DMM-reuse technique, based on recursion, is also introduced for space optimization in the maximum subarray problem.
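Under the sequential reading of the k-disjoint problem (each solution is the best subarray disjoint from those already found), a simple one-dimensional greedy baseline can be sketched as follows. This illustrates the problem statement only; it is not the sub-cubic DMM-based algorithm of the thesis.

```python
import math

def max_subarray_with_bounds(a):
    # Kadane's algorithm, also returning the (start, end) indices, inclusive.
    best, best_s, best_e = a[0], 0, 0
    cur, cur_s = a[0], 0
    for i in range(1, len(a)):
        if cur < 0:
            cur, cur_s = a[i], i
        else:
            cur += a[i]
        if cur > best:
            best, best_s, best_e = cur, cur_s, i
    return best, best_s, best_e

def k_disjoint_max_subarrays(a, k):
    # Greedy: repeatedly take the maximum subarray, then mask its cells
    # with -inf so later picks are disjoint from all earlier ones.
    a = list(a)
    out = []
    for _ in range(k):
        s, i, j = max_subarray_with_bounds(a)
        if s == -math.inf:
            break  # the whole array is already taken
        out.append((s, i, j))
        for t in range(i, j + 1):
            a[t] = -math.inf
    return out
```

Each pass costs O(n), for O(kn) total in one dimension; the analogous 2D masking approach yields the trivial O(kn^3) bound the thesis improves on.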
23

Development and applications of high performance computing

Cox, Simon J. January 1998 (has links)
No description available.
24

A maximum principle in function theory for unbounded domains (Un principe du maximum en théorie des fonctions pour des domaines non-bornés)

Piché, Richard January 2006 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
25

Das Optionswertmodell zur Erklärung der Rentenentscheidung / The Retirement Decision According To The Option Value Model

Kempf, Stefan January 2007 (has links) (PDF)
This paper empirically investigates the determinants of retirement decisions. It is based on the option value approach, used to assess the importance of financial considerations in delaying immediate retirement; the impact of institutional conditions is also considered. Newly available data from the statutory pension organization (Verband Deutscher Rentenversicherungsträger), providing exact information about income, pension claims, and unemployment spells, is used. The results indicate that unemployment and illness explain a large portion of early retirements. Additionally, the option value has substantial explanatory power.
26

Maximum Likelihood Estimation of Logistic Sinusoidal Regression Models

Weng, Yu 12 1900 (has links)
We consider the problem of maximum likelihood estimation of logistic sinusoidal regression models and develop some asymptotic theory, including the consistency and joint rates of convergence of the maximum likelihood estimators. The key techniques build upon a synthesis of the results of Walker and of Song and Li for the widely studied sinusoidal regression model, and on a connection to a result of Radchenko. Monte Carlo simulations are also presented to demonstrate the finite-sample performance of the estimators.
27

Seismotectonic models, earthquake recurrence and maximum possible earthquake magnitudes for South Africa

Bejaichund, Mayshree 31 March 2011 (has links)
No description available.
28

Mathematical modeling of the transmission dynamics of malaria in South Sudan

Mukhtar, Abdulaziz Yagoub Abdelrahman January 2019 (has links)
Philosophiae Doctor - PhD / Malaria is a common infection in tropical areas, transmitted between humans through the bites of female Anopheles mosquitoes as they seek blood meals to carry out egg production. The infection poses a direct threat to the lives of many people in South Sudan. Reports show that malaria causes a large proportion of morbidity and mortality in the fledgling nation, accounting for 20% to 40% of morbidity and 20% to 25% of mortality, with the majority of those affected being children and pregnant mothers. In this thesis, we construct and analyze mathematical models of malaria transmission in the South Sudan context, incorporating the national malaria control strategic plan. In addition, we investigate important factors, such as climatic conditions and population mobility, that may drive malaria in South Sudan. Furthermore, we study a stochastic version of the deterministic model by introducing white noise.
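A minimal deterministic sketch of this model class is the classic Ross-Macdonald system for the infected human and mosquito fractions, integrated here by Euler's method. The parameter values are purely illustrative, not fitted to South Sudan data, and this is not the thesis's model.

```python
def ross_macdonald(days=200, dt=0.01):
    # Ih = infected fraction of humans, Iv = infected fraction of mosquitoes.
    a, b, c = 0.3, 0.5, 0.5    # biting rate; mosquito->human and human->mosquito
                               # transmission probabilities (illustrative values)
    m, r, mu = 2.0, 0.05, 0.1  # mosquitoes per human, human recovery rate,
                               # mosquito death rate (illustrative values)
    Ih, Iv = 0.01, 0.0
    for _ in range(int(days / dt)):
        dIh = m * a * b * Iv * (1 - Ih) - r * Ih   # new human infections - recoveries
        dIv = a * c * Ih * (1 - Iv) - mu * Iv      # new mosquito infections - deaths
        Ih += dt * dIh
        Iv += dt * dIv
    return Ih, Iv
```

With these values the basic reproduction number m·a²·b·c/(r·mu) exceeds 1, so the trajectory settles at an endemic equilibrium rather than dying out.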
29

Caractérisation stochastique des sprays ultrasoniques : le formalisme de l'entropie maximale

Dobre, Miruna 09 May 2003 (has links)
The central axis of this research is to develop a complete theoretical characterization of a spray, based on knowledge of the droplet formation mechanism, that can be applied in a similar way whatever the type of spray. Since the main difficulty is understanding the physics of liquid-film break-up into droplets, the study focuses on the ultrasonic spray, which has the advantage of involving a well-studied surface-wave formation mechanism (Faraday waves).
The means used to find the theoretical droplet size distribution that best describes ultrasonic atomization are, first, an analysis of surface-wave instability, which determines the average characteristics of the spray, and second, a stochastic method, the maximum entropy formalism, which provides the most probable distribution based on those average characteristics and on the elementary conservation laws applicable to any type of atomization (conservation of mass and energy). The experimental validation of this new theoretical approach has, moreover, made it possible to develop new designs of high-performance ultrasonic atomizers.
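The core of the maximum entropy formalism can be sketched numerically: among all discrete droplet-size distributions with a prescribed mean diameter, the most probable (maximum entropy) one is exponential in the diameter, with the Lagrange multiplier fixed by the mean constraint. This is a generic illustration under a single moment constraint, not the thesis's full set of conservation laws.

```python
import math

def maxent_distribution(sizes, mean_size, iters=200):
    # Most probable distribution p_i ∝ exp(-lam * d_i) over size classes d_i,
    # subject to a prescribed mean diameter.  The Lagrange multiplier lam is
    # found by bisection, since the mean is monotonically decreasing in lam.
    def mean_for(lam):
        w = [math.exp(-lam * d) for d in sizes]
        z = sum(w)
        return sum(d * wi for d, wi in zip(sizes, w)) / z
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) > mean_size:
            lo = mid  # mean too large: increase lam
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * d) for d in sizes]
    z = sum(w)
    return [wi / z for wi in w]
```

Adding further constraints (e.g. mass and energy conservation) introduces one multiplier per constraint and yields the richer distribution shapes used for real sprays.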
30

Novel Turbo Equalization Methods for the Magnetic Recording Channel

Chesnutt, Elizabeth 12 April 2005 (has links)
Directed by Dr. John R. Barry. The topic of this dissertation is the derivation, development, and evaluation of novel turbo equalization techniques that address the colored noise problem on the magnetic recording channel. One new algorithm presented is the noise-predictive BCJR, a soft-output detection strategy that mitigates colored noise in partial-response equalized magnetic recording channels; it can be viewed as a combination of the traditional BCJR algorithm with the notions of survivors and noise prediction. Additionally, an alternative equalization architecture for magnetic recording is presented that addresses the shortcomings of the PRML approach, which dominates magnetic recording. Specifically, trellis-based equalizers are abandoned in favor of simple equalization strategies based on nonlinear filters whose complexity grows only linearly with their length. This research focuses on the linear-complexity SFE algorithm and on lowering the complexity of the SFE filter calculation. The results indicate that, using the proposed SFE method, it is possible to increase the information density on magnetic media without raising the complexity. The most important result is that partial-response equalization needs to be reconsidered because of the noise enhancement it adds to the overall system. These results are important for the magnetic recording industry, which is trying to attain a 1 Tb/cm2 information storage goal.
