191

3D-mesh segmentation: automatic evaluation and a new learning-based method

Benhabiles, Halim 18 October 2011 (has links) (PDF)
In this thesis we address two main problems: the quantitative evaluation of mesh segmentation algorithms, and learning-based mesh segmentation that exploits the human factor. We propose the following contributions: (1) A benchmark dedicated to the evaluation of 3D-mesh segmentation algorithms. The benchmark includes a corpus of ground-truth segmentations created by volunteers, together with a new, relevant similarity metric that quantifies the consistency between these ground-truth segmentations and those produced automatically by a given algorithm on the same models. We also conduct a set of experiments, including a subjective experiment, to respectively demonstrate and validate the relevance of our benchmark. (2) A learning-based segmentation algorithm. A boundary-edge function is learned from a set of ground-truth segmentations using several geometric criteria; this function is then used, through a processing pipeline, to segment a new 3D mesh. Through a series of experiments on different benchmarks, we show the excellent performance of our algorithm compared with the state of the art. We also present an application of our segmentation algorithm to the extraction of kinematic skeletons for dynamic 3D meshes.
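The learning step can be sketched as follows (a minimal illustration, not the thesis's pipeline: synthetic edge features and labels stand in for the real geometric criteria and ground-truth segmentations, and the classifier choice is an assumption):

    # Minimal sketch: learn a boundary-edge function from per-edge geometric
    # features, assuming ground-truth boundary labels are available.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical per-edge features (e.g. dihedral angle, curvature,
    # shape-diameter difference); real values come from the mesh.
    n_edges = 5000
    X = rng.normal(size=(n_edges, 3))
    # Hypothetical ground-truth labels: 1 = segment-boundary edge, 0 = interior.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_edges) > 1.2).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)

    # The learned function scores each edge of a new mesh; a processing chain
    # (thinning, contour completion) would turn scores into closed boundaries.
    new_edges = rng.normal(size=(10, 3))
    boundary_prob = clf.predict_proba(new_edges)[:, 1]
    print(boundary_prob)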
192

DSP Platform Benchmarking

Xinyuan, Luo January 2009 (has links)
This thesis benchmarks DSP kernel algorithms on a DSP processor used for teaching in the course TESA26 at the Department of Electrical Engineering. The benchmarking covers cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and, accordingly, to provide suggestions for further improvement of the instruction set architecture. The scope of the thesis is limited to benchmarking the processor at the assembly level only; quality checks of the compiler are not included. The benchmarking method is the one proposed by BDTI (Berkeley Design Technology, Inc.), the general methodology used in the DSP industry worldwide. The proposed assembly instruction set improvements include enhancements for FFT and DCT. The cycle cost of the new FFT benchmark based on the proposal was XX% lower, showing that the proposal was sound. The results also show that the proposal improves the cycle-cost score for matrix computations, especially matrix multiplication. The benchmark results were compared with the general scores for single-MAC DSP processors published by BDTI.
193

Application of a heterogeneous coarse-mesh transport method (COMET) to radiation therapy problems

Satterfield, Megan E. 20 November 2006 (has links)
In recent years, radiation therapy delivery systems used in the treatment of cancer have improved considerably; to fully exploit this enhancement, however, the computational methodology associated with radiation therapy must improve as well. It is important to determine accurately where the radiation deposits its energy within the patient: the treatment should deliver the maximal dose to the tumor site while minimizing the radiation dose to the surrounding healthy tissue and structures. In the Computational Reactor and Medical Physics Group at Georgia Tech, a heterogeneous coarse-mesh transport method (COMET) has been developed for neutron transport to analyze whole-core criticality. COMET decomposes a large, heterogeneous global problem into a set of small fixed-source local problems. Response functions, i.e. detailed solutions, are obtained for each unique local problem; these response functions are all precomputed and stored in a library. The solution to the global problem is then found by a linear superposition of the local solutions. In this project, COMET is applied for the first time to the transport of photons in human tissue. The parameter of interest in this case is the amount of energy (dose) deposited in tissue. To determine the strengths and weaknesses of the current system, it is important to construct benchmark problems for comparison. This project encompasses a number of benchmarks: the first models a simple two-dimensional water phantom; a second uses a heterogeneous phantom composed of different tissues; a third involves transport through slabs of aluminum, water, and lung tissue; and a last, more clinically relevant benchmark uses data from a CT scan. For each of these cases the results from COMET are compared with the computational results obtained from EGSnrc, a Monte Carlo particle transport code. The study found that the COMET results were generally comparable with the Monte Carlo solutions of EGSnrc, and they were typically obtained thousands of times faster than the reference solution.
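The response-function superposition idea can be illustrated with a toy one-dimensional sketch (an illustration of the concept only, not the COMET code; the transmission, reflection and deposition numbers are made up):

    # Toy 1-D coarse-mesh response-function sketch: each cell has a
    # precomputed response mapping incoming partial currents to outgoing
    # currents and deposited energy; interface currents are swept to
    # convergence and the dose follows by superposition.
    import numpy as np

    n_cells = 4
    T, R, dep = 0.6, 0.1, 0.3   # hypothetical responses; T + R + dep = 1

    # j_plus[i]: right-going current entering cell i at its left face;
    # j_minus[i+1]: left-going current entering cell i at its right face.
    j_plus = np.zeros(n_cells + 1)
    j_minus = np.zeros(n_cells + 1)
    j_plus[0] = 1.0  # unit source entering from the left boundary

    for _ in range(200):  # sweep until the interface currents converge
        new_plus, new_minus = j_plus.copy(), j_minus.copy()
        for i in range(n_cells):
            new_plus[i + 1] = T * j_plus[i] + R * j_minus[i + 1]
            new_minus[i] = T * j_minus[i + 1] + R * j_plus[i]
        new_plus[0] = 1.0          # fixed boundary source
        new_minus[n_cells] = 0.0   # vacuum on the right
        if np.allclose(new_plus, j_plus) and np.allclose(new_minus, j_minus):
            break
        j_plus, j_minus = new_plus, new_minus

    # Per-cell dose tally by superposing the local responses.
    dose = dep * (j_plus[:-1] + j_minus[1:])
    print(dose)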
194

Studie av utvecklingsverktyg med inriktning mot PLC-system [A study of development tools with a focus on PLC systems]

Brax, Christoffer January 1999 (has links)
Computer use in society grows every day. It is therefore important that software is of high quality, since some software controls critical machines such as aircraft. One way to obtain high-quality software is to use good development tools. This work evaluates five development tools: GNAT (Ada), Microsoft Visual C++, Microsoft J++, Borland Delphi, and Active Perl. The evaluation is oriented towards the development of software for PLC systems. The aspects evaluated are the efficiency of the generated code, compilation time, the size of the generated code, code portability, and the range of ready-made components. The study was carried out by means of practical tests.
195

由國營企業轉成民營企業:由中華電信經驗對ONATAL之參考價值 [From state-owned to private enterprise: the reference value of the Chunghwa Telecom experience for ONATEL]

Mamadou Unknown Date (has links)
Around the world, countries are moving towards a market economy in order to integrate into the global marketplace. Telecommunications is among the industries most affected by this trend, as governments are significantly reducing their involvement in the industry through liberalization and/or partial or complete privatization of their national telecommunications corporations. Burkina Faso is no different: ONATEL, the national telecommunications company of Burkina Faso, has been caught in this trend. In December 1998, the government of Burkina Faso initiated a reform of its telecommunications sector with the overall goals of liberalizing telecommunications services and achieving mixed ownership of ONATEL. The objective of this study is to review the ongoing privatization of ONATEL in light of general practices in economic reform applied elsewhere, and then to make recommendations to both the government of Burkina Faso and ONATEL for a successful implementation of the process and of the national telecommunications policies. We pursue this objective through four research questions. The first correlates privatization and economic development, with the aim of seeing how the divestiture of ONATEL can foster telecommunications development in Burkina Faso. The second examines the government's chosen strategy for privatizing ONATEL, allowing for a review of alternative privatization methods and the rationale behind the government's choice. The third research question deals with ONATEL's strategies to sustain its development in an environment of increased competition; this question facilitates an assessment of the firm's preparation for competition and allows for the formulation of some recommendations in that regard. The last research question examines the privatization of Chunghwa Telecom, the formerly state-owned telecommunications company in Taiwan. This final aspect of the research draws lessons applicable to the government of Burkina Faso and ONATEL for their own privatization, by analyzing Chunghwa Telecom's privatization experience and its formulation of business strategies.
196

Instruction Timing Analysis for Linux/x86-based Embedded and Desktop Systems

John, Tobias 19 October 2005 (has links) (PDF)
Real-time aspects are becoming more important in standard desktop PC environments, and x86-based processors are being used in embedded systems more often. While these processors were not created for use in hard real-time systems, they are fast and inexpensive and can be used if it is possible to determine the worst-case execution time. Information on the CPU caches (L1, L2) and the branch prediction architecture is necessary to simulate best and worst cases in execution timing, but it is often not detailed enough and sometimes not published at all. This document describes how the underlying hardware can be analysed to obtain this information.
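One common way to recover such cache parameters experimentally is to time memory accesses over growing working sets and look for latency steps near the L1 and L2 capacities. A minimal sketch of the idea (not the thesis's tooling; Python's interpreter overhead blunts the effect, and a real analysis would read the x86 time-stamp counter from C or assembly):

    # Time strided accesses over growing buffers; a jump in ns/access
    # suggests the working set no longer fits in a cache level.
    import time
    import array

    def access_time_ns(n_bytes, reps=50):
        n = n_bytes // 8
        buf = array.array('q', range(n))      # 8-byte elements
        idx = list(range(0, n, 8))            # stride to reduce prefetch help
        t0 = time.perf_counter_ns()
        s = 0
        for _ in range(reps):
            for i in idx:
                s += buf[i]
        dt = time.perf_counter_ns() - t0
        return dt / (reps * len(idx)), s

    for kib in (8, 16, 32, 64, 128, 256, 512, 1024, 4096):
        ns, _ = access_time_ns(kib * 1024)
        print(f"{kib:5d} KiB: {ns:6.1f} ns/access")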
197

La mesure de performance dans les cartes à puce [Performance measurement in smart cards]

Cordry, Julien 30 November 2009 (has links)
Performance measurement is used in all computer systems to guarantee the best performance at the lowest possible cost, and the development of measurement tools and metrics has established a basis for comparison between computers. Although the smart card world is no exception, security issues take centre stage there, and efforts towards more open testing and performance measurement remain modest. The work presented here proposes a method for measuring performance on Java Card platforms, which account for a considerable share of today's smart card market, particularly in multi-application environments. We study in detail the work of other authors on performance measurement, and in particular performance measurement on smart cards. Many of these efforts remain embryonic or ignore certain aspects of measurement; one of their main shortcomings is the weak connection between the measurements performed and the applications typically used on smart cards. Smart cards also have strong security requirements, which make them difficult to analyze; the logical approach is to treat them as black boxes. After introducing performance-measurement methodologies for smart cards, we select the tools and the characteristics of the tests we want to run on the cards, and we analyze the confidence that can be placed in the data collected in this way. Finally, an original smart card application is proposed and used to validate some of the results obtained.
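Such black-box measurement reduces to timing repeated command round-trips and reporting robust statistics. A minimal host-side sketch (the `transmit` function is a hypothetical stub standing in for a real PC/SC call, e.g. pyscard's CardConnection.transmit; this is not the thesis's tool):

    # Time repeated APDU round-trips against a black-box card and report
    # median and spread, after discarding warm-up calls.
    import statistics
    import time
    import random

    def transmit(apdu: bytes) -> bytes:
        # Stub simulating a card round-trip with some jitter.
        time.sleep(max(0.0, 0.002 + random.gauss(0, 0.0002)))
        return b"\x90\x00"

    def benchmark(apdu: bytes, warmup=5, runs=50):
        for _ in range(warmup):          # discard first calls (setup effects)
            transmit(apdu)
        samples = []
        for _ in range(runs):
            t0 = time.perf_counter()
            transmit(apdu)
            samples.append(time.perf_counter() - t0)
        return statistics.median(samples), statistics.stdev(samples)

    med, sd = benchmark(bytes.fromhex("00A4040000"))
    print(f"median {med*1e3:.2f} ms, stdev {sd*1e3:.2f} ms")

The spread of the samples is what quantifies how much confidence the collected data deserves.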
198

Hierarchical Bayesian Benchmark Dose Analysis

Fang, Qijun January 2014 (has links)
An important objective in statistical risk assessment is the estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to hierarchical Bayesian modeling and credible limits for building BMDLs is far less developed, however; indeed, for the few existing forms of Bayesian BMDs, informative prior information is seldom incorporated. Here, a new method is developed using reparameterized quantal-response models that explicitly describe the BMD as a target parameter. This can improve BMD/BMDL estimation by combining elicited prior belief with the observed data in the Bayesian hierarchy. The large variety of candidate quantal-response models available for applying these methods, however, leads to questions of model adequacy and uncertainty. To address this issue, the Bayesian estimation technique is further enhanced by applying Bayesian model averaging to produce point estimates and (lower) credible bounds. Implementation is facilitated via a Monte Carlo-based adaptive Metropolis (AM) algorithm to approximate the posterior distribution. Performance of the method is evaluated via a simulation study, and an example from carcinogenicity testing illustrates the calculations.
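The core idea, sampling a posterior for a quantal-response model reparameterized so that the BMD itself is a parameter, can be sketched as follows (a simplified illustration, not the dissertation's hierarchical model: a one-hit model, illustrative data, and a fixed-step random-walk Metropolis rather than the adaptive AM sampler):

    # One-hit model P(d) = 1 - exp(-beta*d), reparameterized via the extra
    # risk equation 1 - exp(-beta*BMD) = BMR, i.e. beta = -log(1-BMR)/BMD.
    import numpy as np

    rng = np.random.default_rng(1)
    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    n = np.array([50, 50, 50, 50, 50])        # subjects per dose group
    y = np.array([1, 4, 9, 18, 33])           # responders (illustrative data)
    BMR = 0.10

    def log_post(log_bmd):
        beta = -np.log1p(-BMR) / np.exp(log_bmd)
        p = np.clip(1.0 - np.exp(-beta * dose), 1e-12, 1 - 1e-12)
        loglik = np.sum(y * np.log(p) + (n - y) * np.log1p(-p))
        logprior = -0.5 * (log_bmd / 2.0) ** 2  # lognormal prior on BMD
        return loglik + logprior

    samples, cur, cur_lp, step = [], 0.0, log_post(0.0), 0.3
    for it in range(20000):
        prop = cur + step * rng.normal()
        lp = log_post(prop)
        if np.log(rng.random()) < lp - cur_lp:   # Metropolis accept/reject
            cur, cur_lp = prop, lp
        if it >= 5000:                           # drop burn-in
            samples.append(cur)

    bmd = np.exp(np.array(samples))
    print("posterior median BMD:", np.median(bmd))
    print("BMDL (5th posterior percentile):", np.percentile(bmd, 5))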
199

A model for managing pension funds with benchmarking in an inflationary market

Nsuami, Mozart January 2011 (has links)
Aggressive fiscal and monetary policies by governments and central banks in developed markets could push inflation to very high levels in the long run. Owing to decreasing pension fund benefits and a rising inflation rate, pension companies are selling inflation-linked products to hedge against inflation risk. Such companies seriously consider the possible effects of inflation volatility on their investments, and some of them tend to include inflationary allowances in the pension payment plan. In this dissertation we study the management of pension funds of the defined-contribution type in the presence of inflation-recession. We study how the fund manager maximizes the fund's wealth when salaries and stocks are affected by inflation. In this regard, we consider the case of a pension company which invests in a stock, inflation-linked bonds and a money market account, while basing its investment on the contributions of the plan members. We use a benchmarking approach and martingale methods to compute an optimal strategy which maximizes the fund's wealth.
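For concreteness, a generic form of the wealth dynamics in such a defined-contribution problem (an illustrative sketch under standard assumptions, not necessarily the dissertation's exact model): with contribution rate $c(t)$ and wealth fractions $\pi_S(t)$, $\pi_I(t)$ invested in the stock and the inflation-linked bond (the remainder in the money market account at rate $r$),

\[
dX(t) = \bigl[\, rX(t) + c(t) + \pi_S(t)X(t)\,\sigma_S\lambda_S + \pi_I(t)X(t)\,\sigma_I\lambda_I \,\bigr]\,dt + \pi_S(t)X(t)\,\sigma_S\,dW_S(t) + \pi_I(t)X(t)\,\sigma_I\,dW_I(t),
\]

where $\lambda_S,\lambda_I$ are market prices of risk and $W_S,W_I$ are Brownian motions. In a benchmarking approach the manager maximizes expected utility of the ratio $X(T)/Y(T)$ against a benchmark process $Y$, and the martingale method delivers the optimal strategy.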
200

Application of translational addition theorems to the study of the magnetization of systems of ferromagnetic spheres

Anthonys, Gehan 26 August 2014 (has links)
The main objective of this research is the study of the magnetization of ferromagnetic spheres in the presence of external magnetic fields. The exact analytical solutions derived in this thesis are benchmark solutions, valuable for testing the correctness and accuracy of various approximate models and numerical methods. The total scalar magnetic potential outside the spheres, related to the magnetic field intensity, is obtained by superposing the potentials due to all the spheres and the potential corresponding to the external field. The translational addition theorems for scalar Laplacian functions are used to solve the boundary value problem by imposing exact boundary conditions. The scalar magnetic potential inside each sphere, related to the magnetic flux density, also satisfies the Laplace equation, which is solved by imposing the boundary conditions known from the solution for the outside field. Finally, the derived expressions are used to generate numerical results of controllable accuracy for the field quantities.
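The kind of expansion involved can be written in a standard form (illustrative notation, not necessarily the thesis's): outside sphere $j$, the potential is expanded in irregular solid harmonics about its center,

\[
\psi_j(r_j,\theta_j,\varphi_j) = \sum_{n=0}^{\infty}\sum_{m=-n}^{n} a^{j}_{nm}\, r_j^{-(n+1)}\, Y_n^m(\theta_j,\varphi_j),
\]

and the translational addition theorem re-expands these harmonics as regular solid harmonics about another sphere's center $O_k$,

\[
r_j^{-(n+1)}\, Y_n^m(\theta_j,\varphi_j) = \sum_{\nu=0}^{\infty}\sum_{\mu=-\nu}^{\nu} \alpha^{jk}_{nm,\nu\mu}\, r_k^{\nu}\, Y_\nu^\mu(\theta_k,\varphi_k), \qquad r_k < |O_j O_k|,
\]

where the translation coefficients $\alpha^{jk}_{nm,\nu\mu}$ depend only on the vector between the two centers. Imposing the boundary conditions on every sphere then yields a linear system for the coefficients $a^{j}_{nm}$.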
