511.
Improving flood frequency analysis by integration of empirical and probabilistic regional envelope curves. Guse, Björn Felix, January 2010
Flood design requires discharge estimates for large recurrence intervals. In a flood frequency analysis, however, the uncertainty of discharge estimates increases with higher recurrence intervals, particularly because of the small number of available flood observations. Furthermore, traditional distribution functions increase without bound and do not account for an upper limit on discharge. Hence, additional information that is representative of high recurrence intervals needs to be considered.
Envelope curves which bound the maximum observed discharges of a region are an adequate regionalisation method to provide additional spatial information for the upper tail of a distribution function. Probabilistic regional envelope curves (PRECs) are an extension of the traditional empirical envelope curve approach, in which a recurrence interval is estimated for a regional envelope curve (REC). The REC is constructed for a homogeneous pooling group of sites. The estimation of this recurrence interval is based on the effective sample years of data considering the intersite dependence among all sites of the pooling group.
The core idea of this thesis was to improve discharge estimates for high recurrence intervals by integrating empirical and probabilistic regional envelope curves into the flood frequency analysis. To this end, the method of probabilistic regional envelope curves was investigated in detail. Several pooling groups were derived by modifying candidate sets of catchment descriptors and the settings of two different pooling methods, and these were used to construct PRECs. A sensitivity analysis shows the variability of the discharges and recurrence intervals for a given site under the different assumptions. The unit flood of record, which governs the intercept of the PREC, was found to be the most influential factor.
By separating the catchments into nested and unnested pairs, the calculation algorithm for the effective sample years of data was refined. In this way, the estimation of the recurrence intervals was improved, and therefore the use of different parameter sets for nested and unnested pairs of catchments is recommended.
In the second part of this thesis, PRECs were introduced into a distribution function. Whereas in the traditional approach only discharge values are used, PRECs provide a discharge and its corresponding recurrence interval. Hence, a novel approach was developed, which allows a combination of the PREC results with the traditional systematic flood series while taking the PREC recurrence interval into consideration. An adequate mixed bounded distribution function was presented, which in addition to the PREC results also uses an upper bound discharge derived from an empirical envelope curve. In this way, two types of additional information which are representative of the upper tail of a distribution function were included in the flood frequency analysis. The integration of both types of additional information leads to an improved discharge estimation for recurrence intervals between 100 and 1000 years. / Discharge estimates for high recurrence intervals are needed above all for the design of extreme flood protection. In flood frequency analysis, large uncertainties exist particularly for high recurrence intervals, since only a small number of flood observations is available. Moreover, distribution functions without an upper bound are usually applied. Additional information beyond the local gauge records is therefore required to cover the upper tail of a distribution function.
Envelope curves determine an upper bound of flood discharges based on observed maximum discharge values and are therefore a suitable regionalisation method. Probabilistic regional envelope curves are a further development of the traditional empirical envelope curve approach, in which a recurrence interval is assigned to the envelope curve of a homogeneous region of gauges. The calculation of this recurrence interval is based on the effective sample size and accounts for the correlations between the gauges of a region.
The aim of this work is to improve the estimation of discharges with large recurrence intervals by integrating empirical and probabilistic envelope curves into flood frequency analysis. To this end, probabilistic envelope curves were investigated in detail and constructed for a large number of homogeneous regions, using different combinations of catchment descriptors and variations of two pooling methods. A sensitivity analysis shows the variability of discharge and recurrence interval across the realisations as a consequence of the different assumptions. The most influential quantity is the maximum discharge, which determines the height of the envelope curve.
A separation into nested and unnested catchments leads to a more accurate determination of the effective sample and thus to an improved estimation of the recurrence interval. The use of two separate parameter sets for the correlation function in the estimation of the recurrence interval is therefore recommended.
In a second step, the probabilistic envelope curves were integrated into flood frequency analysis. Since traditional approaches use only discharge values, a new method is presented that, in addition to the measured discharges, takes the results of the probabilistic envelope curve, i.e. a discharge and its associated recurrence interval, into account. A mixed bounded distribution function was chosen, which besides the probabilistic envelope curves also includes an absolute upper bound determined with an empirical envelope curve. Two types of additional information describing the upper tail of a distribution function are thereby used. The integration of both leads to an improved estimation of discharges with recurrence intervals between 100 and 1000 years.
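To make the setting concrete, the sketch below is a minimal at-site flood frequency analysis of the kind the thesis builds on: a generalised extreme value (GEV) distribution is fitted to an annual maximum series and design discharges are read off for large recurrence intervals. The synthetic record, the GEV family and the plain maximum-likelihood fit are illustrative assumptions; the PREC information and the mixed bounded distribution of the thesis are not reproduced here.

```python
# Sketch: at-site flood frequency analysis with a GEV fit (illustrative only).
# Assumptions: synthetic annual-maximum discharges; plain ML fit via scipy;
# the PREC-based mixed bounded distribution of the thesis is not implemented.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic annual maximum discharges in m^3/s (placeholder for a gauge record).
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=40, random_state=rng)

# Fit the three GEV parameters by maximum likelihood.
shape, loc, scale = genextreme.fit(annual_maxima)

# Design discharges for selected recurrence intervals T (exceedance probability 1/T).
for T in (100, 500, 1000):
    q_T = genextreme.isf(1.0 / T, shape, loc, scale)
    print(f"T = {T:5d} a  ->  estimated design discharge {q_T:8.1f} m^3/s")
```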
512.
Electromagnetic Modelling for the Estimation of Wood Parameters. Sjödén, Therese, January 2008
Spiral grain in trees causes trouble to the wood industry, since boards sawn from trees with large grain angle have severe problems with form stability. Measurements of the grain angle under bark enable the optimisation of the refining process. The main objective of this thesis is to study the potential of estimating the grain angle by using microwaves. To do this, electromagnetic modelling and sensitivity analysis are combined. The dielectric properties of wood are different along and perpendicular to the wood fibres. This anisotropy is central for the estimation of the grain angle by means of microwaves. To estimate the grain angle, measurements are used together with electromagnetic modelling for the scattering from plane surfaces and cylinders. Measurement set-ups are proposed to determine the material parameters, such as the grain angle, for plane boards and cylindrical logs. For cylindrical logs both near-field and far-field measurements are investigated. In general, methods for determining material parameters exhibit large errors in the presence of noise. In this case, acceptable levels of these errors are achieved through using few material parameters in the model: the grain angle and two dielectric parameters, characterising the electrical properties parallel and perpendicular to the fibres. From the case with plane boards, it is concluded that it is possible to make use of the anisotropy of wood to estimate the grain angle from the reflected electromagnetic field. This property then forms the basis of the proposed methods for the estimation of the grain angle in cylindrical logs. For the proposed methods, a priori knowledge of the moisture content or temperature of the wood is not needed. Furthermore, since the anisotropy persists also for frozen wood, the method is valid for temperatures below zero degrees Celsius. For the case with cylindrical logs, sensitivity analysis is applied to the near-field as well as the far-field methods, to analyse the parameter dependence with respect to the measurement model and the errors introduced by noise. In this sensitivity analysis, the Cramér-Rao bound is used, giving the best possible variance for estimating the parameters. The levels of the error bounds are high, indicating a difficult estimation problem. However, the feasibility of accurate estimation will be improved through higher signal-to-noise ratios, repeated measurements, and better antenna gain. The sensitivity analysis is also useful as an analytical tool to understand the difficulties and remedies related to the method used for determining material parameters, as well as a practical aid in the design of a measurement set-up. According to the thesis, grain angle estimation is possible with microwaves. The proposed methods are fast and suitable for further development for in-field use in the forest or in saw mills. / Trees with spiral grain cause problems in the wood industry, since boards sawn from trees with a large grain angle have poor form stability and twist as they dry. Measuring the grain angle under the bark enables the refining process to be optimised. In this thesis, electromagnetic modelling and sensitivity analysis are combined to investigate the possibilities of determining the grain angle with microwaves. The electrical properties of wood differ along and perpendicular to the fibres, and this anisotropy is the starting point for determining the grain angle by means of microwaves.
To estimate the grain angle, measurements are used together with electromagnetic modelling of the scattering from plane surfaces and cylinders. Measurement set-ups are proposed for the problem of estimating material parameters, such as the grain angle, in plane boards and cylindrical logs. For cylindrical logs, both near-field and far-field measurements are investigated. In general, methods for estimating material parameters have large errors when the system contains noise. Here, acceptable errors are obtained by using few material parameters in the modelling: the grain angle and two dielectric parameters that characterise the electrical properties along and perpendicular to the wood fibre. The conclusion from the case with plane boards is that it is possible to use the anisotropy of wood and its effect on a reflected electromagnetic field to estimate the grain angle. This is the basis of the methods proposed for cylindrical logs. For all of the methods, neither moisture content nor temperature needs to be known in advance. Since the anisotropy persists also for frozen wood, the methods can be used even at temperatures below zero degrees Celsius. For the case with cylindrical logs, sensitivity analysis is applied to both the near-field and the far-field methods to analyse the parameter dependence in the measured data and the errors introduced by noise. In this sensitivity analysis, the Cramér-Rao bound is used, which gives the best possible variance for estimating the parameters. The levels of the bounds are high, indicating a difficult estimation problem. The prospects of estimating the parameters accurately are improved by a better signal-to-noise ratio, repeated measurements and higher antenna gain. The sensitivity analysis is also useful as an analytical tool for a better understanding of the problems and possibilities related to the method for estimating the parameters, and as a practical aid in the design of a measurement set-up. According to the thesis, grain angle estimation is possible with microwaves. The proposed methods are fast and suitable for further development for use in the forest or at sawmills.
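The Cramér-Rao bound referred to above can be sketched numerically: for a measurement model with additive white Gaussian noise, the Fisher information is J^T J / sigma^2, where J is the Jacobian of the model with respect to the parameters, and the bound is its inverse. The toy two-parameter response below is an invented stand-in, not the thesis' electromagnetic scattering model.

```python
# Sketch: Cramér-Rao bound for a toy two-parameter model with Gaussian noise.
# The "measurement model" below is an invented stand-in, not the thesis' model.
import numpy as np

def model(theta, points):
    """Toy response: amplitude a times a sinusoid shifted by the grain angle phi."""
    a, phi = theta
    return a * np.cos(2.0 * np.pi * points + 2.0 * phi)

def numerical_jacobian(theta, points, eps=1e-6):
    """Central-difference Jacobian of the model with respect to the parameters."""
    theta = np.asarray(theta, dtype=float)
    J = np.zeros((points.size, theta.size))
    for k in range(theta.size):
        dt = np.zeros_like(theta)
        dt[k] = eps
        J[:, k] = (model(theta + dt, points) - model(theta - dt, points)) / (2 * eps)
    return J

points = np.linspace(0.0, 1.0, 50)      # normalised measurement points (assumed)
theta_true = np.array([1.0, 0.35])      # amplitude and grain angle in radians (assumed)
sigma = 0.05                            # noise standard deviation (assumed)

J = numerical_jacobian(theta_true, points)
fisher = J.T @ J / sigma**2             # Fisher information for Gaussian noise
crb = np.linalg.inv(fisher)             # CRB: lower bound on the estimator covariance
print("standard-deviation lower bounds:", np.sqrt(np.diag(crb)))
```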
513.
On Optimization in Design of Telecommunications Networks with Multicast and Unicast Traffic. Prytz, Mikael, January 2002
No description available.
514.
Algebraic Curves over Finite Fields. Rovi, Carmen, January 2010
This thesis surveys the issue of finding rational points on algebraic curves over finite fields. Since Goppa's construction of algebraic geometric codes, there has been great interest in finding curves with many rational points. Here we explain the main tools for finding rational points on a curve over a finite field and provide the necessary background on ring and field theory. Four different articles are analyzed: the first of these articles gives a complete set of tables showing the numbers of rational points for curves with genus up to 50. The other articles provide interesting constructions of covering curves: covers by the Hermitian curve, Kummer extensions and Artin-Schreier extensions. With these articles the great difficulty of finding explicit equations for curves with many rational points is overcome. With the method given by Arnaldo García in [6] we have been able to find examples that can be used to define the lower bounds for the corresponding entries in the tables given at http://wins.uva.nl/~geer, which at the time of writing this thesis appear as "no information available". In fact, as the curves found are maximal, these entries no longer need a bound; they can be given by a unique entry, since the exact value of Nq(g) is now known. At the end of the thesis an outline of the construction of Goppa codes is given and the NXL and XNL codes are presented.
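As a concrete illustration of counting rational points, the brute-force sketch below counts the affine points of a small plane curve over F_p and adds the single point at infinity. The particular curve y^2 = x^3 + x + 1 and the primes chosen are only examples and are unrelated to the tables discussed above.

```python
# Sketch: brute-force count of rational points on y^2 = x^3 + x + 1 over F_p.
# The curve and the primes are illustrative choices, not taken from the thesis.
def count_points(p):
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x * x * x + x + 1)) % p == 0)
    return affine + 1  # one point at infinity on this smooth genus-1 curve

for p in (7, 11, 13):
    n = count_points(p)
    print(f"#C(F_{p}) = {n}  (Hasse bound: |{n} - ({p}+1)| <= {2 * p**0.5:.2f})")
```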
515.
Target Localization Methods for Frequency-Only MIMO Radar. Kalkan, Yilmaz, 01 September 2012
This dissertation is focused on developing new target localization and target velocity estimation methods for frequency-only multi-input, multi-output (MIMO) radar systems with widely separated antennas. If the frequency resolution of the transmitted signals is sufficient, the received frequencies and Doppler shifts alone can be used to find the position of the target.
In order to estimate the position and the velocity of the target, most multistatic radars or radar networks use multiple independent measurements from the target, such as time-of-arrival (TOA), angle-of-arrival (AOA) and frequency-of-arrival (FOA). Although frequency-based systems have many advantages, frequency-based target localization methods are very limited in the literature because the solutions involve highly non-linear equations. In this thesis, alternative target localization and target velocity estimation methods with low complexity are proposed for frequency-only systems.
One of the proposed methods is able to estimate the target position and the target velocity from measurements of the Doppler frequencies. Moreover, the target movement direction can be estimated efficiently. This method is referred to as "Target Localization via Doppler Frequencies" (TLDF), and it can be used not only for radar but for all frequency-based localization systems, such as sonar or wireless sensor networks.
Besides the TLDF method, two alternative target position estimation methods are proposed as well. These methods are based on the Doppler frequencies, but they require the target velocity vector to be known. They are referred to as "Target Localization via Doppler Frequencies and Target Velocity" (TLD&V) methods and can be divided into two sub-methods. The first is based on the derivatives of the Doppler frequencies and is therefore called the "Derivated Doppler" (TLD&V-DD) method. The second uses the Maximum Likelihood (ML) principle with a grid search and is therefore referred to as the "Sub-ML" (TLD&V-subML) method.
A more realistic signal model for ground-based, widely separated MIMO radar is formulated, including Swerling target fluctuations and the Doppler frequencies. The Cramér-Rao Bounds (CRB) are derived for the target position and target velocity estimates under this signal model. After the received signal is constructed, the Doppler frequencies are estimated by using the DFT-based periodogram spectral estimator. The estimated Doppler frequencies are then collected in a fusion center to localize the target.
Finally, the multiple-target localization problem is investigated for frequency-only MIMO radar and a new data association method is proposed. Using the TLDF method, the validity of the approach is demonstrated by simulation not only for targets moving linearly but also for maneuvering targets.
The proposed methods can localize the target and estimate its velocity with smaller error than the traditional isodoppler-based method. Moreover, they are superior to the traditional method with respect to computational complexity. Simulations in MATLAB demonstrate the advantages of the proposed methods over the traditional method.
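A hedged sketch of the Doppler-frequency estimation step mentioned above: the received narrowband signal is modelled as a noisy complex exponential, and its Doppler shift is taken as the peak of a zero-padded DFT periodogram. The sampling rate, signal parameters and simple peak-picking are illustrative; the fusion and localization steps of the thesis are not shown.

```python
# Sketch: DFT-based periodogram estimate of a Doppler shift (illustrative values).
import numpy as np

fs = 1000.0                       # sampling frequency in Hz (assumed)
n = np.arange(256)
f_doppler = 123.4                 # true Doppler shift in Hz (assumed)
rng = np.random.default_rng(0)
noise = (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)) / np.sqrt(2)
x = np.exp(2j * np.pi * f_doppler * n / fs) + 0.1 * noise

# Zero-padded periodogram; the Doppler estimate is the frequency of the peak.
nfft = 8192
spectrum = np.abs(np.fft.fft(x, nfft)) ** 2
freqs = np.fft.fftfreq(nfft, d=1.0 / fs)
f_hat = freqs[np.argmax(spectrum)]
print(f"true Doppler {f_doppler:.1f} Hz, estimate {f_hat:.1f} Hz")
```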
516.
Short Proofs May Be Spacious: Understanding Space in Resolution. Nordström, Jakob, January 2008
Most of the best currently known algorithms for deciding the satisfiability of logical formulas are based on the so-called DPLL method extended with clause learning. The two most important limiting factors for such algorithms are how much time and memory they use, and understanding this is therefore a question of great practical importance. In the field of proof complexity, time and memory consumption correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF formulas). A long line of work has studied these measures and also compared them with the width of proofs, another measure that has turned out to be closely related to both length and space. More formally, the length of a proof is the number of lines, i.e. clauses, the width is the size of the largest clause, and the space is the maximal number of clauses that need to be kept in memory simultaneously if, during the course of the proof, new conclusions may only be drawn from clauses currently stored. For length and width a number of strong results have been established, but the understanding of the space measure has left much to be desired. For example, it is known that the space needed to prove a formula is at least as large as the necessary width, but it has been an open question whether space and width can be separated or whether the two measures measure "the same thing" in the sense that they are always asymptotically equal for a formula. It has also been unknown whether the existence of a short proof for a formula implies that the formula can also be proven in small space (the corresponding statement is true for length versus width), or whether, on the contrary, length and space can be "completely unrelated" in the sense that even short proofs may require a maximal amount of memory. In this thesis, we first present a simplified proof of the length-space trade-off result in (Hertel and Pitassi 2007) and show how the same ideas can be used to prove a couple of other exponential trade-offs in the relations between different proof complexity measures for resolution. We then show that there are formulas that can be proven in linear length and constant width but require an amount of space that grows logarithmically in the formula size, which we later improve to the square root of the formula size. These results thus separate space and width. Using different but related ideas, we then answer the question of how space and length relate to each other by separating them in the strongest possible way. More precisely, we show that there are CNF formulas of size O(n) that have resolution proofs of length O(n) and width O(1) but require space at least Omega(n/log n). The common theme of these results is that we study formulas describing pebble games on directed acyclic graphs. We prove lower bounds on the space needed for the so-called pebbling formula over a graph in terms of the black-white pebbling price of the graph in question. Finally, we observe that our optimal separation of space and length is in fact a special case of a more general theorem. Let F be a CNF formula and f:{0,1}^d->{0,1} a Boolean function. Replace every variable x in F by f(x_1, ..., x_d) and rewrite this new formula in the natural way as a CNF formula F[f]. Then, provided that F and f have the right properties, F[f] can be proven in resolution in essentially the same length and width as F, but the minimal amount of space needed for F[f] is at least as large as the minimal number of variables that must occur simultaneously in any proof for F.
/ Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning. The two main bottlenecks for such algorithms are the amounts of time and memory used. Thus, understanding time and memory requirements for clause learning algorithms, and how these requirements are related to one another, is a question of considerable practical importance. In the field of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures and relating them to the width of proofs, another measure which has turned out to be intimately connected with both length and space. Formally, the length of a resolution proof is the number of lines, i.e., clauses, the width of a proof is the maximal size of any clause in it, and the space is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. While strong results have been established for length and width, our understanding of space has been quite poor. For instance, the space required to prove a formula is known to be at least as large as the needed width, but it has remained open whether space can be separated from width or whether the two measures coincide asymptotically. It has also been unknown whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are "completely unrelated" in the sense that short proofs can be maximally complex with respect to space. In this thesis, as an easy first observation we present a simplified proof of the recent length-space trade-off result for resolution in (Hertel and Pitassi 2007) and show how our ideas can be used to prove a couple of other exponential trade-offs in resolution. Next, we prove that there are families of CNF formulas that can be proven in linear length and constant width but require space growing logarithmically in the formula size, later improving this exponentially to the square root of the size. These results thus separate space and width. Using a related but different approach, we then resolve the question about the relation between space and length by proving an optimal separation between them. More precisely, we show that there are families of CNF formulas of size O(n) that have resolution proofs of length O(n) and width O(1) but for which any proof requires space Omega(n/log n). All of these results are achieved by studying so-called pebbling formulas defined in terms of pebble games over directed acyclic graphs (DAGs) and proving lower bounds on the space requirements for such formulas in terms of the black-white pebbling price of the underlying DAGs. Finally, we observe that our optimal separation of space and length is in fact a special case of a more general phenomenon. Namely, for any CNF formula F and any Boolean function f:{0,1}^d->{0,1}, replace every variable x in F by f(x_1, ..., x_d) and rewrite this new formula in CNF in the natural way, denoting the resulting formula F[f]. Then if F and f have the right properties, F[f] can be proven in resolution in essentially the same length and width as F but the minimal space needed for F[f] is lower-bounded by the number of variables that have to be mentioned simultaneously in any proof for F.
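The pebbling formulas described above can be made concrete with a small generator: for a DAG, the pebbling contradiction asserts that every source vertex is true, truth propagates from predecessors to successors, and the sink is false. The sketch below builds this CNF (with one variable per vertex, i.e. without the substitution by f) for a tiny two-level pyramid; the graph and the ad-hoc clause encoding are illustrative choices, not taken from the thesis.

```python
# Sketch: pebbling contradiction for a small DAG (one variable per vertex).
# The two-level pyramid used here is only an illustrative example.
# A clause is a list of (variable, polarity) literals.
predecessors = {          # non-source vertex -> its two predecessors
    "z": ("u", "v"),
    "u": ("a", "b"),
    "v": ("b", "c"),
}
sources, sink = ["a", "b", "c"], "z"

clauses = [[(s, True)] for s in sources]                 # every source is true
for w, (p, q) in predecessors.items():                   # propagation clauses:
    clauses.append([(p, False), (q, False), (w, True)])  # p and q imply w
clauses.append([(sink, False)])                          # the sink is false

for clause in clauses:
    print(" v ".join(("" if positive else "~") + var for var, positive in clause))
```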
517.
Transmitting Quantum Information Reliably across Various Quantum Channels. Ouyang, Yingkai, January 2013
Transmitting quantum information across quantum channels is an important task. However, quantum information is delicate and is easily corrupted. We address the task of protecting quantum information from an information theoretic perspective -- we encode some message qudits into a quantum code, send the encoded quantum information across the noisy quantum channel, then recover the message qudits by decoding. In this dissertation, we discuss the coding problem from several perspectives.
The noisy quantum channel is one of the central aspects of the quantum coding problem, and hence quantifying the noisy quantum channel from the physical model is an important problem.
We work with an explicit physical model -- a pair of initially decoupled quantum harmonic oscillators interacting with a spring-like coupling, where the bath oscillator is initially in a thermal-like state. In particular, we treat the completely positive and trace preserving map on the system as a quantum channel, and study the truncation of the channel by truncating its Kraus set. We thereby derive the matrix elements of the Choi-Jamiolkowski operator of the corresponding truncated channel, which are truncated transition amplitudes. Finally, we give a computable approximation for these truncated transition amplitudes with explicit error bounds, and perform a case study of the oscillators in the off-resonant and weakly-coupled regime numerically.
In the context of truncated noisy channels, we revisit the notion of approximate error correction of finite dimension codes. We derive a computationally simple lower bound on the worst case entanglement fidelity of a quantum code, when the truncated recovery map of Leung et al. is rescaled. As an application, we apply our bound to construct a family of multi-error-correcting amplitude damping codes that are permutation-invariant. This demonstrates an explicit example where the specific structure of the noisy channel allows code design outside the stabilizer formalism via purely algebraic means.
We study lower bounds on the quantum capacity of adversarial channels, where we restrict the selection of quantum codes to the set of concatenated quantum codes.
The adversarial channel is a quantum channel where an adversary corrupts a fixed fraction of qudits sent across a quantum channel in the most malicious way possible. The best known rates of communicating over adversarial channels are given by the quantum Gilbert-Varshamov (GV) bound, that is known to be attainable with random quantum codes. We generalize the classical result of Thommesen to the quantum case, thereby demonstrating the existence of concatenated quantum codes that can asymptotically attain the quantum GV bound. The outer codes are quantum generalized Reed-Solomon codes, and the inner codes are random independently chosen stabilizer codes, where the rates of the inner and outer codes lie in a specified feasible region.
We next study upper bounds on the quantum capacity of some low-dimension quantum channels. The quantum capacity of a quantum channel is the maximum rate at which quantum information can be transmitted reliably across it, given arbitrarily many uses of it. While it is known that random quantum codes can be used to attain the quantum capacity, the quantum capacity of many classes of channels is undetermined, even for channels of low input and output dimension. For example, depolarizing channels are important quantum channels, but do not have tight numerical bounds.
We obtain upper bounds on the quantum capacity of some unital and non-unital channels -- two-qubit Pauli channels, two-qubit depolarizing channels, two-qubit locally symmetric channels, shifted qubit depolarizing channels, and shifted two-qubit Pauli channels -- using the coherent information of some degradable channels. We make extensive use of the notion of twirling quantum channels and of Smith and Smolin's method of constructing degradable extensions of quantum channels. The degradable channels we introduce, study and use are two-qubit amplitude damping channels. Exploiting the notion of covariant quantum channels, we give sufficient conditions for the quantum capacity of a degradable channel to be the optimal value of a concave program with linear constraints, and show that our two-qubit degradable amplitude damping channels have this property.
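As a small illustration of the coherent-information computations mentioned above, the sketch below evaluates the single-letter coherent information of the standard one-qubit amplitude damping channel, whose degradability for damping parameter gamma <= 1/2 makes the maximised value equal to its quantum capacity. The grid search over diagonal inputs is an illustrative shortcut; the two-qubit channels and degradable extensions studied in the thesis are not reproduced.

```python
# Sketch: coherent information of the one-qubit amplitude damping channel.
# For damping gamma <= 1/2 the channel is degradable, so maximising over the
# diagonal input parameter p gives its quantum capacity (known single-letter form).
import numpy as np

def h2(x):
    """Binary entropy in bits, with h2(0) = h2(1) = 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def coherent_information(gamma, p):
    # Output entropy minus environment entropy for input diag(1 - p, p).
    return h2((1 - gamma) * p) - h2(gamma * p)

for gamma in (0.1, 0.25, 0.4):
    p_grid = np.linspace(0.0, 1.0, 10001)
    q = coherent_information(gamma, p_grid).max()
    print(f"gamma = {gamma:.2f}:  capacity ~= {q:.4f} qubits per channel use")
```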
518.
TOA-Based Robust Wireless Geolocation and Cramér-Rao Lower Bound Analysis in Harsh LOS/NLOS Environments. Yin, Feng; Fritsche, Carsten; Gustafsson, Fredrik; Zoubir, Abdelhak M., January 2013
We consider time-of-arrival based robust geolocation in harsh line-of-sight/non-line-of-sight environments. Herein, we assume the probability density function (PDF) of the measurement error to be completely unknown and develop an iterative algorithm for robust position estimation. The iterative algorithm alternates between a PDF estimation step, which approximates the exact measurement error PDF (albeit unknown) under the current parameter estimate via adaptive kernel density estimation, and a parameter estimation step, which resolves a position estimate from the approximate log-likelihood function via a quasi-Newton method. Unless the convergence condition is satisfied, the resolved position estimate is then used to refine the PDF estimation in the next iteration. We also present the best achievable geolocation accuracy in terms of the Cramér-Rao lower bound. Experiments have been conducted in both real-world and simulated scenarios. When the number of received range measurements is large, the new proposed position estimator attains the performance of the maximum likelihood estimator (MLE). When the number of range measurements is small, it deviates from the MLE, but still outperforms several salient robust estimators in terms of geolocation accuracy, which comes at the cost of higher computational complexity.
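A hedged sketch of the alternating scheme described in the abstract: the residuals of the range measurements under the current position estimate are fed to a kernel density estimate of the error PDF, and the position is then re-estimated by maximising the resulting approximate log-likelihood with a quasi-Newton method. The fixed-bandwidth scipy KDE below stands in for the adaptive KDE of the paper, and the anchor layout, noise model and iteration count are invented.

```python
# Sketch of the alternating KDE / quasi-Newton position estimation (illustrative).
# scipy's fixed-bandwidth gaussian_kde replaces the paper's adaptive KDE.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
x_true = np.array([30.0, 55.0])
# Range measurements with a heavy right tail mimicking NLOS bias (assumed model).
errors = rng.normal(0.0, 1.0, len(anchors)) + rng.exponential(5.0, len(anchors))
ranges = np.linalg.norm(anchors - x_true, axis=1) + errors

def residuals(x):
    return ranges - np.linalg.norm(anchors - x, axis=1)

x_hat = np.array([50.0, 50.0])                      # initial position guess
for _ in range(10):                                 # alternate the two steps
    kde = gaussian_kde(residuals(x_hat))            # step 1: error-PDF estimate
    def neg_loglik(x, kde=kde):                     # step 2 objective under that PDF
        return -np.sum(np.log(kde(residuals(x)) + 1e-12))
    x_hat = minimize(neg_loglik, x_hat, method="BFGS").x   # quasi-Newton update

print("true position", x_true, "estimate", np.round(x_hat, 2))
```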
519.
Lost Tramps & Cherry Tigers. Bender, John Brett, 23 July 2009
These seven stories chronicle the author's apprenticeship as a fiction writer. Four are written in the first-person point of view, two are in a limited third person, and one is written in third-person objective. The stories vary considerably with respect to character, tone, and setting; however, all may be said to explore the abjection and isolation of growing up in the United States.
520.
Heterogeneous Parallel Branch and Bound Algorithms for Multi-Core and Multi-GPU Environments (Algorithmes Branch and Bound parallèles hétérogènes pour environnements multi-coeurs et multi-GPU). Chakroun, Imen, 28 June 2013
Branch and Bound (B&B) algorithms are attractive for the exact solution of combinatorial optimisation problems (COPs) through the exploration of a tree-shaped search space. However, these algorithms are extremely time-consuming for large problem instances (e.g. Taillard's benchmarks for the flow-shop problem, FSP), even when computing on grids [Mezmaz et al., IEEE IPDPS'2007]. The massively parallel computing power provided by today's heterogeneous platforms [TOP500] is required to handle such instances efficiently. The challenge is then to exploit all the underlying levels of parallelism and therefore to rethink the parallel models of B&B algorithms accordingly. In this thesis we revisit the design and implementation of these algorithms for solving large COPs on (large) multi-core and multi-GPU computing platforms. The Flow-Shop scheduling problem (FSP) is considered as a case study. A preliminary experimental study on some large FSP instances revealed that the search tree is highly irregular (in shape and size) and very large (billions of billions of nodes), and that the bounding operator is extremely expensive (about 97% of the B&B time). Our first contribution is therefore a GPU approach with a single CPU core (GB&B), in which only the bounding operator is executed on the GPU. The approach addresses two challenges: thread divergence and the optimisation of the GPU's hierarchical memory management. Compared to a sequential version, speed-ups of up to ×100 are obtained on an Nvidia Tesla C2050. The performance analysis of GB&B showed that the overhead induced by data transfers between the CPU and the GPU is high. The objective of the second contribution is therefore to extend the approach (LL-GB&B) in order to minimise the CPU-GPU communication latency. This objective is achieved through a fine-grained GPU parallelisation of the branching and pruning operators. The major challenge addressed here is the thread divergence caused by the highly irregular nature of the explored tree mentioned above. Compared to a sequential execution, LL-GB&B achieves speed-ups of up to ×160 for the largest instances. The third contribution studies the combined use of GPUs and multi-core processors. Two scenarios were explored, leading to two approaches: a concurrent one (RLL-GB&B) and a cooperative one (PLL-GB&B). In the first case, the exploration process is carried out simultaneously by the GPU and the CPU cores. In the cooperative approach, the CPU cores prepare and transfer the subproblems using CUDA streaming while the GPU performs the exploration. The combined use of multi-core and GPU showed that RLL-GB&B is not beneficial, whereas PLL-GB&B yields an improvement of up to 36% over LL-GB&B. Since computing grids such as Grid5000 have recently been equipped with GPUs at some sites, the fourth contribution of this thesis addresses the combination of GPU and multi-core computing with large-scale distributed computing.
To this end, the different proposed approaches were combined into a heterogeneous meta-algorithm that automatically selects the algorithm to deploy depending on the target hardware configuration. This meta-algorithm is coupled with the B&B@Grid approach proposed in [Mezmaz et al., IEEE IPDPS'2007]. B&B@Grid distributes the work units (search subspaces encoded as intervals) among the grid nodes, while the meta-algorithm selects and deploys locally a parallel B&B algorithm on the received intervals. The combined approach allowed us to solve Taillard's 20×20 instances to optimality and efficiently.
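A minimal serial sketch of the Branch and Bound principle the thesis parallelises: job permutations are built depth-first, partial schedules are bounded by the makespan of their machine completion front, and branches whose bound already exceeds the best known makespan are pruned. The tiny random instance and the simple bound are illustrative assumptions; none of the GPU/multi-core mechanisms (GB&B, LL-GB&B, PLL-GB&B, B&B@Grid) are reproduced.

```python
# Sketch: serial Branch and Bound for a tiny permutation flow-shop instance.
# Simple lower bound: makespan of the partial schedule (machine-front maximum).
# Illustrative only; the thesis' GPU/multi-core parallelisations are not shown.
import numpy as np

rng = np.random.default_rng(3)
n_jobs, n_machines = 6, 3
proc = rng.integers(1, 10, size=(n_jobs, n_machines))   # processing times

def completion_front(sequence):
    """Completion time of the last scheduled job on every machine."""
    front = np.zeros(n_machines, dtype=int)
    for j in sequence:
        for m in range(n_machines):
            start = max(front[m], front[m - 1] if m > 0 else 0)
            front[m] = start + proc[j, m]
    return front

# Initial upper bound from the identity sequence.
best = {"makespan": int(completion_front(range(n_jobs))[-1]), "seq": list(range(n_jobs))}

def branch(sequence, remaining):
    if not remaining:
        cmax = int(completion_front(sequence)[-1])
        if cmax < best["makespan"]:
            best.update(makespan=cmax, seq=list(sequence))
        return
    for j in sorted(remaining):
        child = sequence + [j]
        if int(completion_front(child)[-1]) < best["makespan"]:   # prune otherwise
            branch(child, remaining - {j})

branch([], set(range(n_jobs)))
print("best makespan", best["makespan"], "sequence", best["seq"])
```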