101

The Maximum Principle for Cauchy-Riemann Functions and Hypocomplexity

Daghighi, Abtin January 2012 (has links)
This licentiate thesis contains results on the maximum principle for Cauchy–Riemann functions (CR functions) on weakly 1-concave CR manifolds and on hypocomplexity of locally integrable structures. The maximum principle does not hold true in general for smooth CR functions, and basic counterexamples can be constructed in the presence of strictly pseudoconvex points. We prove a maximum principle for continuous CR functions on smooth weakly 1-concave CR submanifolds. Because weak 1-concavity is also necessary for the maximum principle, a consequence is that a smooth generic CR submanifold of C^n obeys the maximum principle for continuous CR functions if and only if it is weakly 1-concave. The proof is then generalized to embedded weakly p-concave CR submanifolds of p-complete complex manifolds. The second part concerns hypocomplexity and hypoanalytic structures. We give a generalization of a known result regarding automatic smoothness of solutions to the homogeneous problem for the tangential CR vector fields given local holomorphic extension. This generalization ensures that a given locally integrable structure is hypocomplex at the origin if and only if it does not allow solutions near the origin which cannot be represented by a smooth function near the origin. / The thesis contains results on the maximum principle for continuous Cauchy–Riemann functions (CR functions) on weakly 1-concave CR manifolds, as well as hypocomplexity for locally integrable structures. The maximum principle does not hold in general for smooth CR functions, and counterexamples can be constructed given strictly pseudoconvex points. We prove a maximum principle for continuous CR functions on smooth embedded weakly 1-concave CR manifolds. Since weak 1-concavity is also necessary, it follows that for smooth generic embedded CR manifolds in C^n, the maximum principle for continuous CR functions holds if and only if the CR manifold is weakly 1-concave. We generalize the theorem to weakly p-concave CR manifolds in p-complete manifolds. The second part treats hypocomplexity and hypoanalytic structures. We generalize a known theorem on automatic smoothness of solutions to the tangential CR equations, given the existence of a local holomorphic extension. The generalization shows that a locally integrable structure is hypocomplex at the origin if and only if it does not admit solutions near the origin that are not smooth near the origin. / Research funded by the Graduate School in Mathematics and Computing Science (Forskarskolan i Matematik och Beräkningsvetenskap, FMB), based in Uppsala.
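For background, the classical maximum principle for holomorphic functions, of which the CR statement studied here is an analogue, can be stated as follows (a standard fact, not part of the abstract above):

```latex
% Classical maximum principle for holomorphic functions (background only):
% an interior local maximum of |f| forces f to be constant.
f \in \mathcal{O}(\Omega),\quad \Omega \subseteq \mathbb{C}^n \ \text{open and connected},\quad
|f(p)| = \max_{z \in \overline{B}(p,r)} |f(z)| \ \text{for some ball } \overline{B}(p,r) \subset \Omega
\;\Longrightarrow\; f \ \text{is constant on } \Omega.
```

The abstract above identifies weak 1-concavity as exactly the condition under which continuous CR functions on a smooth generic CR submanifold of C^n obey an analogous principle.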
102

A method to evaluate environmental enrichments for Asian elephants (Elephas maximus) in zoos

Holmgren, Mary January 2007 (has links)
Environmental enrichment (EE) is used to improve the life of captive animals by giving them more opportunities to express species-specific behaviours. Zoo elephants are among the species in great need of EE because their environment is often barren. Before making an EE permanent, however, it is wise to test first whether it works as intended, to save time and money. Maximum price paid is one measure that can be used to assess whether an animal has any interest in a resource at all. Food is often used as a comparator against EEs in these kinds of studies. The aim was to investigate whether the maximum price paid concept could be used to measure the value of EEs for the two female Asian elephants at Kolmården, and to find an operant test suitable for them for the experimental trials. Three series of food trials were done with each elephant, where they had to lift weights by pulling a rope with their mouth to get access to 5 kg of hay. The elephants paid a maximum price of 372 kg and 227 kg, respectively. However, the maximum price the elephants paid for access to the hay was not stable across the three series of trials. Hence it is recommended that the comparator trials be repeated close in time to the EEs to be tested. The readiness with which these elephants performed the task makes it worthwhile to further pursue this approach as one of the means to improve the well-being of zoo elephants.
103

A Strategy for Earthquake Catalog Relocations Using a Maximum Likelihood Method

Li, Ka Lok January 2012 (has links)
A strategy for relocating earthquakes in a catalog is presented. The strategy is based on the argument that the distribution of the earthquake events in a catalog provides reasonable a priori information for earthquake relocation in that region. This argument can be implemented using the method of maximum likelihood for arrival time data inversion, where the a priori probability distribution of the event locations is defined as the sum of the probability densities of all events in the catalog. This a priori distribution is then added to the standard misfit criterion in earthquake location to form the likelihood function. The probability density of an event in the catalog is described by a Gaussian probability density. The a priori probability distribution is, therefore, defined as the normalized sum of the Gaussian probability densities of all events in the catalog, excluding the event being relocated. For a linear problem, the likelihood function can be approximated by the joint probability density of the a priori distribution and the distribution of an unconstrained location due to the misfit alone. After relocating the events according to the maximum of the likelihood function, a modified distribution of events is generated. In general, this distribution should be more densely clustered than before, since the events are moved towards the maximum of the posterior distribution. The a priori distribution is updated and the process is iterated. The strategy is applied to the aftershock sequence in southwest Iceland after a pair of earthquakes on 29 May 2008; the relocated events reveal the fault systems in that area. Three synthetic data sets are used to test the general behaviour of the strategy. It is observed that the synthetic data give significantly different behaviour from the real data.
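As a rough illustration of the strategy described above (a minimal sketch only; the catalog coordinates, the constant-velocity travel-time model and the bandwidths below are invented for the example, not taken from the thesis), the objective combines an arrival-time misfit with a prior built from the other catalog events:

```python
import numpy as np

# Hypothetical catalog of epicentres (x, y in km); placeholder values.
catalog = np.array([[0.0, 0.0], [1.0, 0.2], [0.8, -0.1], [5.0, 4.0]])
sigma_prior = 0.5   # width of each Gaussian in the a priori mixture (km)
sigma_data = 0.1    # assumed arrival-time standard deviation (s)

def log_prior(x, exclude_idx):
    """Log a priori density: proportional to the normalized sum of Gaussian
    densities centred on every catalog event except the one being relocated
    (constant factors dropped, since only the argmax matters)."""
    others = np.delete(catalog, exclude_idx, axis=0)
    d2 = np.sum((others - x) ** 2, axis=1)
    dens = np.exp(-d2 / (2.0 * sigma_prior ** 2)).sum() / len(others)
    return np.log(dens + 1e-300)

def log_misfit(x, stations, observed_times, velocity=6.0):
    """Standard least-squares arrival-time misfit for a simple
    constant-velocity travel-time model (illustrative only)."""
    predicted = np.linalg.norm(stations - x, axis=1) / velocity
    r = observed_times - predicted
    r -= r.mean()          # remove the unknown origin time by demeaning
    return -0.5 * np.sum(r ** 2) / sigma_data ** 2

def relocate(idx, stations, observed_times, grid):
    """Pick the candidate location maximizing likelihood = misfit + prior."""
    scores = [log_misfit(x, stations, observed_times) + log_prior(x, idx)
              for x in grid]
    return grid[int(np.argmax(scores))]
```

In the actual strategy the relocation is iterated: once all events have been moved to their likelihood maxima, the mixture prior is rebuilt from the updated locations and the process is repeated.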
104

A Novel Sensorless Support Vector Regression Based Multi-Stage Algorithm to Track the Maximum Power Point for Photovoltaic Systems

Ibrahim, Ahmad Osman January 2012 (has links)
Solar energy is the energy derived from the sun in the form of solar radiation. Solar powered electrical generation relies on photovoltaic (PV) systems and heat engines. These two technologies are widely used today to provide power either to standalone loads or for connection to the power system grid. Maximum power point tracking (MPPT) is an essential part of a PV system. It is needed in order to extract maximum power output from a PV array under varying atmospheric conditions and to maximize the return on initial investments. As such, many MPPT methods have been developed and implemented, including perturb and observe (P&O), incremental conductance (IC) and neural network (NN) based algorithms. These techniques are judged by their speed in locating the maximum power point (MPP) of a PV array under given atmospheric conditions, besides the cost and complexity of implementing them. The P&O and IC algorithms have low implementation complexity, but their tracking speed is sluggish. NN based techniques are faster than P&O and IC; however, they may not provide the global optimal point since they are prone to multiple local minima. To overcome the demerits of the aforementioned methods, support vector regression (SVR) based strategies have been proposed for the estimation of solar irradiation (for MPPT). A significant advantage of SVR based strategies is that they can provide the global optimal point, unlike NN based methods. In the published literature on SVR based MPPT algorithms, however, researchers have assumed a constant temperature. The assumption is not plausible in practice, as the temperature can vary significantly during the day. The temperature variation, in turn, can markedly affect the effectiveness of the MPPT process, while including temperature measurements in the process would add to the cost and complexity of the overall PV system and reduce its reliability. The main goal of this thesis is to present a novel sensorless SVR based multi-stage algorithm (MSA) for MPPT in PV systems. The proposed algorithm avoids outdoor irradiation and temperature sensors. The proposed MSA consists of three stages: the first stage estimates the initial values of irradiation and temperature; the second stage instantaneously estimates the irradiation under the assumption that the temperature is constant over one-hour intervals; the third stage updates the estimated temperature once every hour. After estimating the irradiation and temperature, the voltage corresponding to the MPP is estimated as well. Then, the reference PV voltage is given to the power electronics interface. The proposed strategy is robust to rapid changes in solar irradiation and load, and it is also insensitive to ambient temperature variations. Simulation studies in PSCAD/EMTDC and Matlab demonstrate the effectiveness of the proposed technique.
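A minimal sketch of the sensorless idea follows (assuming scikit-learn's SVR; the training samples, feature choice and the mpp_voltage mapping are illustrative placeholders, not the thesis's actual model or data):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training set: terminal measurements (voltage, current, power)
# versus the irradiation that produced them, generated offline from a PV model.
X_train = np.array([[30.0, 2.1, 63.0],
                    [29.5, 4.0, 118.0],
                    [28.8, 6.2, 178.6],
                    [27.9, 8.1, 226.0]])
G_train = np.array([250.0, 500.0, 750.0, 1000.0])   # irradiation in W/m^2

# Second-stage idea: estimate irradiation from electrical measurements alone,
# holding the current temperature estimate fixed over the hour.
svr = SVR(kernel="rbf", C=100.0, epsilon=0.5)
svr.fit(X_train, G_train)

def mpp_voltage(irradiation, temperature):
    """Placeholder mapping from estimated (G, T) to the MPP voltage;
    in practice this comes from the PV model or another regressor."""
    return 26.0 + 0.004 * irradiation - 0.08 * (temperature - 25.0)

# Run time: measure V and I, estimate G, and command the reference voltage.
v_meas, i_meas, temp_est = 28.5, 7.0, 25.0
g_est = svr.predict([[v_meas, i_meas, v_meas * i_meas]])[0]
v_ref = mpp_voltage(g_est, temp_est)   # handed to the power-electronics interface
```

The point of the multi-stage arrangement is that only the cheap electrical measurements are taken continuously, while the temperature estimate is refreshed on a slow (hourly) schedule, so no outdoor irradiation or temperature sensors are required.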
105

Aerosol Characterization and Analytical Modeling of Concentric Pneumatic and Flow Focusing Nebulizers for Sample Introduction

Kashani, Arash 17 February 2011 (has links)
A concentric pneumatic nebulizer (CPN) and a custom designed flow focusing nebulizer (FFN) are characterized. As will be shown, the classical Nukiyama-Tanasawa and Rizk-Lefebvre models lead to erroneous size predictions for the concentric nebulizer under typical operating conditions due to its specific design, geometry, dimensions and different flow regimes. The models are then modified to improve the agreement with the experimental results. The size predictions of the modified models, together with the spray velocity characterization, are used to determine the overall nebulizer efficiency and are also employed as input to a new Maximum Entropy Principle (MEP) based model that predicts the joint size-velocity distribution analytically. The new MEP model is exploited to study the local variation of the size-velocity distribution, in contrast to the classical models where MEP is applied globally to the entire spray cross section. As will be demonstrated, the velocity distribution of the classical MEP models shows poor agreement with experiments for the cases under study. Modifications to the original MEP modeling are proposed to overcome this deficiency. In addition, the new joint size-velocity distribution agrees better with our general understanding of the drag law and yields realistic results. / PhD
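For context, the generic Maximum Entropy Principle construction behind such models can be stated schematically as follows; the particular constraint functions g_i used for the spray (mass, momentum and energy balances) are those of the thesis and the cited literature and are left unspecified here:

```latex
% Generic MEP setup (schematic): maximize the Shannon entropy of the joint
% droplet size-velocity density f(D,U) subject to normalization and a set
% of physical constraints g_i with prescribed values c_i.
\max_{f}\; S[f] = -\iint f(D,U)\,\ln f(D,U)\; dD\, dU
\qquad \text{subject to} \qquad
\iint f(D,U)\, g_i(D,U)\; dD\, dU = c_i, \quad i = 0,1,\dots,m.
% The stationary solution is the exponential-family density
f(D,U) = \exp\!\Big(-\lambda_0 - \sum_{i=1}^{m} \lambda_i\, g_i(D,U)\Big),
% with the Lagrange multipliers fixed by the constraints.
```

Applying this construction locally across the spray cross section, rather than once globally, is the distinguishing feature of the new model described above.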
106

Stochastic optimization and a financial application

Sob Tchuakem, Pandry Wilson 10 1900 (has links) (PDF)
Our work concerns continuous-time stochastic optimization and its application in finance. We first give a mathematical formulation of the problem, and then examine two approaches to solving the optimal control problem. The first, the stochastic maximum principle, in which the notion of backward stochastic differential equations (BSDEs) comes into play, provides a necessary condition for optimality. We also explore the case where the condition becomes sufficient. The second approach is dynamic programming. It proposes a candidate for the optimal solution through the resolution of a partial differential equation called the Hamilton-Jacobi-Bellman (HJB) equation. Thanks to the verification theorem, one can "verify" that the candidate is in fact the optimal solution. Finally, we apply both techniques to solve the mean-variance portfolio selection problem with or without a no-short-selling constraint. ______________________________________________________________________________ AUTHOR'S KEYWORDS: optimal control, maximum principle, BSDE, dynamic programming, HJB, verification theorem, mean-variance.
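For reference, the dynamic-programming approach mentioned above characterizes the value function through the Hamilton-Jacobi-Bellman equation; the generic one-dimensional form below is standard background, not the specific equations of this work:

```latex
% Generic HJB equation for a controlled diffusion
%   dX_t = b(X_t, u_t) dt + sigma(X_t, u_t) dW_t,
% with running reward f and terminal reward g (maximization form).
\partial_t V(t,x) + \sup_{u \in U}\Big\{\, b(x,u)\,\partial_x V(t,x)
  + \tfrac{1}{2}\,\sigma^2(x,u)\,\partial_{xx} V(t,x) + f(x,u) \Big\} = 0,
\qquad V(T,x) = g(x).
```

The verification theorem then states, roughly, that a sufficiently smooth solution of this equation satisfying the terminal condition is the value function, and that any control attaining the supremum pointwise is optimal.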
107

Improved estimation and prediction of the parameter of a binomial distribution

Nemiri, Ahmed 03 1900 (has links) (PDF)
In this thesis, we present a study of the estimation and prediction of the binomial parameter. Chapter 1 deals with point estimation and prediction of the binomial parameter. Following the approach of Brown (2008a), the chapter begins with a description of six estimators: the trivial estimator, the overall mean, parametric empirical Bayes with the method of moments, parametric empirical Bayes with the maximum likelihood method, nonparametric empirical Bayes, and James-Stein. These estimators are then evaluated on Brown's (2008b) 2005 baseball data set, and their performances are compared according to their normalized total squared errors. Chapter 2 deals with confidence interval estimation and prediction of the binomial parameter. In this chapter, five confidence intervals are studied, following the approach of Brown, Cai and DasGupta (1999, 2001): the standard interval IC_s, the Wilson interval IC_w, the Agresti-Coull interval IC_ac, the likelihood ratio interval IC_rv, and the two-sided Jeffreys interval IC_j. First, given the particular importance of the standard interval, we compute theoretically, for moderate n, the deviation of the bias, the variance and the skewness and kurtosis coefficients of the random variable $W_n = \sqrt{n}(\hat{p}-p)/\sqrt{pq} \xrightarrow{\mathcal{L}} N(0,1)$ from their corresponding asymptotic values 0, 1, 0 and 3. Next, the coverage probability and the expected length of each of the five confidence intervals mentioned above are approximated by Edgeworth expansions of orders 1 and 2. Finally, using the same 2005 baseball data set, we determine these intervals together with their coverage probabilities and expected lengths, and compare their performances according to their coverage probabilities and expected lengths. ______________________________________________________________________________ AUTHOR'S KEYWORDS: parametric empirical Bayes estimator, method of moments, maximum likelihood method, nonparametric empirical Bayes estimator, James-Stein estimator, Edgeworth expansions of orders 1 and 2, Wald (standard) interval, Wilson interval, Agresti-Coull interval, likelihood ratio interval, two-sided Jeffreys interval, R programs.
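As a concrete illustration of three of the intervals studied in Chapter 2 (a sketch based on their standard textbook formulas; the counts x = 12 and n = 45 are invented for the example), the Wald, Wilson and Agresti-Coull intervals can be computed as follows:

```python
import numpy as np
from scipy.stats import norm

def wald_interval(x, n, conf=0.95):
    """Standard (Wald) interval: p_hat +/- z * sqrt(p_hat*(1-p_hat)/n)."""
    z = norm.ppf(0.5 + conf / 2)
    p = x / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(x, n, conf=0.95):
    """Wilson score interval, obtained by inverting the score test."""
    z = norm.ppf(0.5 + conf / 2)
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def agresti_coull_interval(x, n, conf=0.95):
    """Agresti-Coull interval: a Wald interval after adding z^2/2
    pseudo-successes and z^2/2 pseudo-failures."""
    z = norm.ppf(0.5 + conf / 2)
    n_t = n + z**2
    p_t = (x + z**2 / 2) / n_t
    half = z * np.sqrt(p_t * (1 - p_t) / n_t)
    return p_t - half, p_t + half

# Hypothetical batting record: 12 hits in 45 at-bats.
for f in (wald_interval, wilson_interval, agresti_coull_interval):
    print(f.__name__, f(12, 45))
```

The comparison in the thesis goes further, approximating the coverage probability and expected length of each interval by Edgeworth expansions rather than simulating them.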
108

A Collapsing Method for Efficient Recovery of Optimal Edges

Hu, Mike January 2002 (has links)
In this thesis we present a novel algorithm, HyperCleaning*, for effectively inferring phylogenetic trees. The method is based on the quartet method paradigm and is guaranteed to recover the best supported edges of the underlying phylogeny based on the witness quartet set. This is performed efficiently using a collapsing mechanism that employs a memory/time tradeoff to ensure no loss of information. This enables HyperCleaning* to solve the relaxed version of the Maximum-Quartet-Consistency problem feasibly, thus providing a valuable tool for inferring phylogenies using quartet based analysis.
109

Bayesian Analysis of Intratumoural Oxygen Data

Tang, Herbert Hoi Chi January 2009 (has links)
There is now ample evidence to support the notion that a lack of oxygen (hypoxia) within the tumour adversely affects the outcome of radiotherapy and whether a patient is able to remain disease free. Thus, there is increasing interest in accurately determining oxygen concentration levels within a tumour. Hypoxic regions arise naturally in cancerous tumours because of their abnormal vasculature and it is believed that oxygen is necessary in order for radiation to be effective in killing cancer cells. One method of measuring oxygen concentration within a tumour is the Eppendorf polarographic needle electrode, a method that is favored by many clinical researchers because it is the only device that is inserted directly into the tumour, and reports its findings in terms of oxygen partial pressure (PO2). Unfortunately, there are often anomalous readings in the Eppendorf measurements (negative and extremely high values) and there is little consensus as to how best to interpret the data. In this thesis, Bayesian methods are applied to estimate two measures commonly used to quantify oxygen content within a tumour in the current literature: the median PO2, and Hypoxic Proportion (HP5), the percentage of readings less than 5 mmHg. The results will show that Bayesian methods of parameter estimation are able to reproduce the standard estimate for HP5 while providing an additional piece of information, the error bar, that quantifies how uncertain we believe our estimate to be. Furthermore, using the principle of Maximum Entropy, we will estimate the true median PO2 of the distribution instead of simply relying on the sample median, a value which may or may not be an accurate indication of the actual median PO2 inside the tumour. The advantage of the Bayesian method is that it takes advantage of probability theory and presents its results in the form of probability density functions. These probability density functions provide us with more information about the desired quantity than the single number that is produced in the current literature and allows us to make more accurate and informative statements about the measure of hypoxia that we are trying to estimate.
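A minimal sketch of the Bayesian estimate of HP5 described above follows, assuming a conjugate Beta prior on the hypoxic proportion; the Jeffreys prior, the placeholder PO2 readings and the handling of negative values are illustrative choices, not necessarily those used in the thesis:

```python
import numpy as np
from scipy.stats import beta

# Hypothetical Eppendorf track: PO2 readings in mmHg (placeholder values);
# anomalous negative readings are simply discarded in this sketch.
readings = np.array([1.2, 3.8, -0.5, 7.0, 4.4, 12.3, 2.1, 25.0, 0.9, 6.5])
readings = readings[readings >= 0]

k = int(np.sum(readings < 5.0))   # readings below the 5 mmHg hypoxia cut-off
n = len(readings)

# Jeffreys prior Beta(1/2, 1/2) on the hypoxic proportion; the posterior for
# a binomial count is then Beta(1/2 + k, 1/2 + n - k).
posterior = beta(0.5 + k, 0.5 + n - k)

hp5_point = posterior.mean()                # point estimate of HP5
hp5_lo, hp5_hi = posterior.interval(0.95)   # 95% credible "error bar"
print(f"HP5 ~ {hp5_point:.2f}  (95% interval {hp5_lo:.2f}-{hp5_hi:.2f})")
```

The output is a full posterior density rather than a single number, which is exactly the extra information (the error bar) that the abstract argues the standard HP5 estimate lacks.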
110

Target oriented branch & bound method for global optimization

Stix, Volker January 2002 (has links) (PDF)
We introduce a very simple but efficient idea for branch & bound (B&B) algorithms in global optimization (GO). As input for our generic algorithm, we need an upper bound algorithm for the GO maximization problem and a branching rule. The latter splits the problem into several smaller subproblems of the same type. The new B&B approach delivers one global optimizer or, if stopped before finishing, improved upper and lower bounds for the problem. Its main difference from commonly used B&B techniques is its ability to approximate the problem from above and from below while traversing the problem tree. It needs no supplementary information about the system being optimized and does not consume more time than classical B&B techniques. Experimental results with the maximum clique problem illustrate the benefit of this new method. (author's abstract) / Series: Working Papers on Information Systems, Information Business and Operations
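A minimal sketch of a generic B&B maximization loop in this spirit is shown below; the upper-bound routine, branching rule and feasible-value evaluator are caller-supplied inputs, as the abstract requires, while the toy univariate problem at the end is an invented illustration and the paper's specific target-oriented refinements are not reproduced:

```python
import heapq

def branch_and_bound(root, upper_bound, branch, evaluate, tol=1e-6):
    """Generic B&B for maximization.  `upper_bound(sub)` overestimates the
    best value in a subproblem, `branch(sub)` splits it into smaller
    subproblems, `evaluate(sub)` returns a feasible value (a lower bound)."""
    best_val, best_sub = evaluate(root), root
    # Max-heap on the upper bound (negated, since heapq is a min-heap).
    heap = [(-upper_bound(root), 0, root)]
    counter = 1
    while heap:
        neg_ub, _, sub = heapq.heappop(heap)
        if -neg_ub <= best_val + tol:             # global upper bound reached
            break
        for child in branch(sub):
            val = evaluate(child)
            if val > best_val:
                best_val, best_sub = val, child   # improved lower bound
            cub = upper_bound(child)
            if cub > best_val + tol:              # keep only promising nodes
                heapq.heappush(heap, (-cub, counter, child))
                counter += 1
    return best_val, best_sub

# Toy usage: maximize f(x) = x * (2 - x) over the interval [0, 2].
f = lambda x: x * (2 - x)
ub = lambda iv: f(min(max(1.0, iv[0]), iv[1]))   # f is concave: bound at clipped vertex
br = lambda iv: [(iv[0], sum(iv) / 2), (sum(iv) / 2, iv[1])]
ev = lambda iv: max(f(iv[0]), f(iv[1]))          # feasible values at the endpoints
print(branch_and_bound((0.0, 2.0), ub, br, ev))  # -> value close to 1.0
```

Because the incumbent (lower bound) and the heap's best upper bound are both maintained throughout, stopping the loop early still yields the improved two-sided bounds mentioned in the abstract.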
