441 |
Defence technological edge program management : a search for more reliable outcomes
McNally, Raymond Gordon, n/a January 2002 (has links)
During the early 1960s, the US Department of Defense, under Secretary Robert
McNamara, implemented for the first time in a national government a Planning-Programming and Budgeting System (PPBS) in order to improve effectiveness
and efficiency in defence program management. McNamara sought improved
effectiveness through a formal five-year program designed to reduce costs. He
also sought efficient methods of managing joint service strategy coordination,
requirements analysis and planning, and improved alignment between the
choice of requirements and the size and nature of the acquisition program. The
Australian Defence Organisation (ADO) and the UK Ministry of Defence later
sought to implement their own forms of PPBS. Recently, both have introduced
program management innovations that seek to achieve more reliably effective
and efficient outcomes.
The thesis has reviewed program management theory with a particular focus on
its implementation challenges relating to strategic management, program
review, personnel management and program coordination. It has sought to
answer the research problem: Which specific management designs could offer
better outcomes for Australian defence technological edge programs? The
thesis' central proposition is that the greatest opportunities for improving
defence program outcomes occur when classic PPBS concepts are
implemented within a Program Management System that incorporates Zero-Base
Budgeting (ZBB), Management by Objectives (MBO), and Matrix
structural systems. All of these systems, either alone or in combination, seek to
enhance program quality, scheduling, financial management and evaluation.
The research used an in-depth case study approach based on qualitative data found
within a selection of recent Australian National Audit Office reports, and other
public records. The central proposition is subjected to dynamic reliability-related
contingency analysis and evaluation. The thesis concludes with the
proposition that if managers were to implement a contingency-based, integrated
mixture of the above-mentioned systems, they could expect improved
technological edge program outcomes.
|
442 |
Link Quality Control (LQC) in GPRS/EGPRS
Seddigh, Sorosh January 2003 (has links)
This master thesis was done at Enea Epact AB. The purpose of the thesis is to develop and implement a Link Quality Control algorithm for GPRS/EGPRS in the current testing tool. A Link Quality Control (LQC) shall take quality values from mobile stations and base stations and decide on a coding scheme that optimizes the throughput of data.

The advantage of LQC is that it adapts the used coding scheme to the channel quality. If the channel quality is too bad for the used coding scheme, a slower coding scheme with more redundancy should be selected. On the other hand, if the channel quality is too good for the used coding scheme, LQC should recommend a faster coding scheme with less redundancy.

The testing tool currently uses a static coding scheme that does not change during a data session. An LQC is therefore necessary for better simulation of the traffic and to make the tests more realistic.
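As an illustration of the selection step described above, here is a minimal sketch of a link-quality-based coding scheme selector. The quality thresholds, the coding-scheme table and the use of a C/I estimate as the quality value are assumptions made for the example; they are not taken from the thesis or from the GPRS specifications.

```python
# Hedged sketch of a link-quality-based coding scheme selector.
# The thresholds and data rates below are illustrative assumptions only.

# (min_quality_dB, name, payload_kbit_per_s) -- assumed table
CODING_SCHEMES = [
    (0.0,  "CS-1", 9.05),
    (7.0,  "CS-2", 13.4),
    (11.0, "CS-3", 15.6),
    (15.0, "CS-4", 21.4),
]

def select_coding_scheme(cir_db: float) -> str:
    """Pick the fastest coding scheme whose quality threshold is met.

    cir_db: estimated carrier-to-interference ratio reported for the link.
    """
    best = CODING_SCHEMES[0][1]          # always fall back to the most robust scheme
    for threshold, name, _rate in CODING_SCHEMES:
        if cir_db >= threshold:
            best = name                   # schemes are ordered from slow to fast
    return best

if __name__ == "__main__":
    for quality in (3.0, 9.0, 18.0):
        print(quality, "->", select_coding_scheme(quality))
```

A real LQC would in addition smooth the reported quality values over time before switching schemes, to avoid oscillating between two neighbouring coding schemes.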
|
443 |
Three dimensional object recognition for robot conveyor picking
Wikander, Gustav January 2009 (has links)
Shape-based matching (SBM) is a method for matching objects in greyscale images. It extracts edges from search images and matches them to a model using a similarity measure. In this thesis we extend SBM to find the tilt and height position of the object in addition to the z-plane rotation and x-y position. The search is conducted using a scale pyramid to improve the search speed. A 3D matching can be done for small tilt angles by using SBM on height data and extending it with additional steps to calculate the tilt of the object. The full pose is useful for picking objects with an industrial robot.

The tilt of the object is calculated using a RANSAC plane estimator. After the 2D search, the differences in height between all corresponding points of the model and the live image are calculated. By fitting a plane to these differences, the tilt of the object can be calculated. Using the tilt, the model edges are tilted in order to improve the matching at the next scale level.

The problems that arise with occlusion and missing data have been studied. Missing data and erroneous data have been thresholded manually after conducting tests where automatic filling of missing data did not noticeably improve the matching. The automatic filling could introduce new false edges and remove true ones, thus lowering the score.

Experiments have been conducted where objects have been placed at increasing tilt angles. The results show that the matching algorithm is object dependent and correct matches are almost always found for tilt angles less than 10 degrees. This is very similar to the original 2D SBM because the model edges do not change much for such small angles. For tilt angles up to about 25 degrees most objects can be matched, and for well-behaved objects correct matches can be made at large tilt angles of up to 40 degrees.
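The tilt step can be sketched roughly as follows: fit a plane z = a*x + b*y + c to the per-point height differences with a simple RANSAC loop and read the tilt off the plane coefficients. This is an illustration under assumed parameter values (iteration count, inlier tolerance), not the thesis implementation.

```python
# Illustrative RANSAC plane fit on height differences between model and scene.
import numpy as np

def fit_plane_ransac(points, iters=200, inlier_tol=1.0, seed=None):
    """points: (N, 3) array of (x, y, dz) height differences; returns [a, b, c]."""
    rng = np.random.default_rng(seed)
    best_params, best_inliers = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            params = np.linalg.solve(A, sample[:, 2])   # [a, b, c]
        except np.linalg.LinAlgError:
            continue                                     # degenerate (collinear) sample
        residuals = np.abs(points[:, :2] @ params[:2] + params[2] - points[:, 2])
        inliers = int((residuals < inlier_tol).sum())
        if inliers > best_inliers:
            best_params, best_inliers = params, inliers
    return best_params

def tilt_angles_deg(params):
    """Tilt of the fitted plane along x and y, in degrees."""
    a, b, _ = params
    return np.degrees(np.arctan(a)), np.degrees(np.arctan(b))
```

In the thesis the estimated tilt is then fed back to re-project the model edges before matching at the next pyramid level.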
|
444 |
Output Power Calibration Methods for an EGPRS Mobile Platform / Metoder för uteffektskalibrering av en EGPRS mobilplattform
Eriksson, Hans January 2003 (has links)
This thesis deals with output power calibration of a mobile platform that supports EGPRS. Two different topics are examined. First, different measurement methods are compared concerning cost efficiency, accuracy, and speed; then measurements are carried out on a mobile platform.

The output power from the mobile platform is controlled by three parameters, and the influence on the output power when varying those parameters is investigated and presented. Furthermore, two methods of improving the speed of the calibration are presented.

The first one aims to decrease the number of bursts to average over as much as possible. The conclusion is that 10-20 bursts are enough for GMSK modulation and about five bursts for 8PSK modulation. The purpose of the second investigation is to examine the possibility of measuring the output power in one modulation and frequency band, and then calculating the output power in the other bands. The conclusion in this case is that, based on the units investigated, it is possible for some values of the parameters and in some frequency bands. However, more units need to be included in the basic data for decision-making, and it is possible that the hardware variation is too large.
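As a rough illustration of the first speed-up, one plausible way to decide how many bursts are needed is to average the measured burst powers until the running mean stabilises. The stopping tolerance and minimum burst count below are assumptions for the example, not values from the thesis.

```python
# Minimal sketch, not the thesis procedure: average measured burst power
# and stop once consecutive running means differ by less than a tolerance.
import statistics

def average_output_power(burst_powers_dbm, tol_db=0.05, min_bursts=5):
    """Return (mean_dBm, bursts_used) once the running mean has stabilised."""
    prev_mean = None
    for n in range(1, len(burst_powers_dbm) + 1):
        mean = statistics.fmean(burst_powers_dbm[:n])
        if prev_mean is not None and n >= min_bursts and abs(mean - prev_mean) < tol_db:
            return mean, n
        prev_mean = mean
    return prev_mean, len(burst_powers_dbm)
```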
|
445 |
Automatic landmark detection on Trochanter Minor in x-ray images / Automatisk landmärkesdetektering på Trochanter Minor i röntgenbilder
Holm, Per January 2005 (has links)
During pre-operative planning for hip replacement, the choice of prosthesis can be aided by measurements in x-ray images of the hip. Some measurements can be done automatically, but this requires robust and precise image processing algorithms which can detect anatomical features. The trochanter minor is an important landmark on the femoral shaft. In this thesis, three different image processing algorithms are explained and tested for automatic landmark detection on the trochanter minor. The algorithms handled are Active Shape Models, a shortest-path algorithm and a segmentation technique based on cumulated cost maps. The results indicate that cumulated cost maps are an effective tool for rough segmentation of the trochanter minor. A snake algorithm was then applied which could find the edge of the trochanter minor in all images used in the test. The edge can be used to locate a curvature extremum which can be used as a landmark point.
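The final landmark step can be sketched as follows: given the ordered edge contour produced by the snake, estimate the discrete curvature along it and take the extremum as the landmark point. This is an illustrative sketch using a standard finite-difference curvature estimate, not the thesis implementation.

```python
# Hedged sketch of the landmark step only: landmark = curvature extremum
# of an ordered (x, y) edge contour.
import numpy as np

def curvature_landmark(contour: np.ndarray) -> np.ndarray:
    """contour: (N, 2) ordered edge points; returns the point of maximal |curvature|."""
    x, y = contour[:, 0].astype(float), contour[:, 1].astype(float)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Signed curvature of a planar parametric curve, with a small epsilon
    # to guard against zero-length tangents.
    kappa = (dx * ddy - dy * ddx) / (np.power(dx * dx + dy * dy, 1.5) + 1e-12)
    return contour[np.argmax(np.abs(kappa))]
```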
|
446 |
Automated Performance Optimization of GSM/EDGE Network Parameters / Automatiserad prestandaoptimering av GSM/EDGE-nätverksparametrar
Gustavsson, Jonas January 2009 (has links)
The GSM network technology has been developed and improved during several years, which has led to an increased complexity. The complexity results in more network parameters, and together with different scenarios and situations they form a complex set of configurations. The definition of the network parameters is generally a manual process using static values during test execution. This practice can be costly, difficult and laborious, and as the network complexity continues to increase, this problem will continue to grow. This thesis presents an implementation of an automated performance optimization algorithm that utilizes genetic algorithms for optimizing the network parameters. The implementation has been used for proving that the concept of automated optimization is working, and most of the work has been carried out in order to use it in practice. The implementation has been applied to the Link Quality Control algorithm and the Improved ACK/NACK feature, which is a part of GSM EDGE Evolution.
/
GSM network technology has been developed and improved over a long time, which has led to increased complexity. This increased complexity has resulted in more network parameters, states and standards. Together they constitute a complex set of different configurations. These network parameters have so far mainly been determined through a manual optimization process. This approach is costly, difficult and time-consuming, and as the complexity of the GSM networks increases the problem will grow. This thesis presents an implementation of an algorithm for automated performance optimization that mainly uses genetic algorithms to optimize the values of the network parameters. The implementation has been used to show that the concept of automated optimization works, and most of the work has been carried out in order to use it in practice. The implementation has been applied to the Link Quality Control algorithm and the Improved ACK/NACK functionality, which is a part of GSM EDGE Evolution.
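A minimal sketch of the genetic-algorithm idea is given below. The parameter ranges and the fitness function are placeholders: in the thesis the fitness would come from executing tests against the network features under study, which is not reproduced here.

```python
# Minimal genetic-algorithm sketch; parameter ranges and fitness are placeholders.
import random

PARAM_RANGES = [(0, 31), (0, 31), (0, 7)]   # assumed integer network parameters

def fitness(params):
    # Placeholder standing in for measured throughput from a test run.
    return -sum((p - hi // 2) ** 2 for p, (_lo, hi) in zip(params, PARAM_RANGES))

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.randint(lo, hi) for lo, hi in PARAM_RANGES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(PARAM_RANGES))
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < mutation_rate:           # occasional mutation
                i = random.randrange(len(PARAM_RANGES))
                child[i] = random.randint(*PARAM_RANGES[i])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```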
|
447 |
Direction estimation on 3D-tomography images of jawbones
Mazeyev, Yuri January 2008 (has links)
The present work describes a technique for estimating the optimal direction for placing a dental implant. A volumetric computed tomography (CT) scan is used to support the search. The work offers criteria for the optimal implant placement direction and methods for evaluating a direction's significance. The technique utilizes the structure tensor to find a normal to the jawbone surface. The direction of that normal is then used as the initial direction for the search for the optimal direction.

The technique described in the present work is aimed at supporting the doctor's decisions during dental implantation treatment.
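The structure-tensor step can be illustrated as follows: the eigenvector belonging to the largest eigenvalue of the local structure tensor is taken as the normal to the bone surface. The smoothing scales and the SciPy-based formulation are assumptions made for the example, not details from the thesis.

```python
# Hedged illustration: surface normal from the dominant eigenvector of
# the local 3D structure tensor of a CT volume.
import numpy as np
from scipy.ndimage import gaussian_filter

def surface_normal(volume: np.ndarray, point, grad_sigma=1.0, window_sigma=3.0):
    """volume: 3D CT array; point: (z, y, x) voxel index of interest."""
    grads = np.gradient(gaussian_filter(volume.astype(float), grad_sigma))
    # Structure tensor components, averaged over a Gaussian window.
    T = np.empty((3, 3) + volume.shape)
    for i in range(3):
        for j in range(3):
            T[i, j] = gaussian_filter(grads[i] * grads[j], window_sigma)
    tensor_at_point = T[(slice(None), slice(None)) + tuple(point)]
    eigvals, eigvecs = np.linalg.eigh(tensor_at_point)
    return eigvecs[:, np.argmax(eigvals)]     # dominant gradient direction = surface normal
```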
|
448 |
Online recruitment of cutting-edge users : A user experience study of Ericsson Labs developer portal
Abramowicz, Sara January 2010 (has links)
This thesis investigates how to reach and recruit cutting-edge users for user experience studies. The recruitment of cutting-edge users is difficult since these users usually are not registered in recruitment databases. Cutting-edge users are advanced, early adopters of technology and are sometimes referred to as opinion leaders. Telecom research projects performed at Ericsson Research involve products and services 2-3 years ahead of the market; early adopters and cutting-edge users are therefore an important user group.

To test recruitment methods, a user experience study was performed on the Ericsson Labs developer portal. Ericsson Labs offers Application Programming Interfaces for mobile and web application development. Internet marketing theories were used to form a recruitment method. Respondents were recruited from the Ericsson Labs user database and were contacted individually via email. The users were invited to share their thoughts and ideas about the portal to help improve, and possibly influence the direction of, the site.

This thesis also assessed different online qualitative research methods applied to user experience research. Online focus groups such as bulletin boards were used to interact with users, in addition to individual chat and voice interviews. Performing user experience research on the Internet is a cost-efficient way to interact with users in geographically dispersed areas.

The findings from the study show that recruitment is hard; it is especially difficult to recruit active and conversational respondents from a user database. Providing incentives and using personal communication were shown to be successful strategies for convincing users to participate in a study.
|
449 |
Finding Junctions Using the Image Gradient
Beymer, David J. 01 December 1991
Junctions are the intersection points of three or more intensity surfaces in an image. An analysis of zero crossings and the gradient near junctions demonstrates that gradient-based edge detection schemes fragment edges at junctions. This fragmentation is caused by the intrinsic pairing of zero crossings and a destructive interference of edge gradients at junctions. Using the previous gradient analysis, we propose a junction detector that finds junctions in edge maps by following gradient ridges and using the minimum direction of saddle points in the gradient. The junction detector is demonstrated on real imagery and previous approaches to junction detection are discussed.
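As a loose illustration of the saddle-point idea (not Beymer's ridge-following detector), one can flag pixels where the gradient-magnitude surface has a saddle, i.e. where its Hessian has eigenvalues of opposite sign and its first-order derivatives are small. The smoothing scale and flatness threshold below are assumptions for the example.

```python
# Illustrative saddle-point candidate detector on the gradient-magnitude surface.
import numpy as np
from scipy.ndimage import gaussian_filter

def saddle_candidates(image, sigma=2.0, flat_tol=0.05):
    g = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)                      # gradient-magnitude surface
    my, mx = np.gradient(mag)
    myy, myx = np.gradient(my)
    mxy, mxx = np.gradient(mx)
    det_hessian = mxx * myy - mxy * myx         # product of Hessian eigenvalues
    flat = np.hypot(mx, my) < flat_tol * (mag.max() + 1e-12)
    return flat & (det_hessian < 0)             # saddle: eigenvalues of opposite sign
```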
|
450 |
Approximation Algorithms for Covering Problems in Dense Graphs
Levy, Eythan 06 March 2009
We present a set of approximation results for several covering problems in dense graphs. These results show that for several problems, classical algorithms with constant approximation ratios can be analyzed in a finer way, and provide better constant approximation ratios under some density constraints. In particular, we show that the maximal matching heuristic approximates VERTEX COVER (VC) and MINIMUM MAXIMAL MATCHING (MMM) with a constant ratio strictly smaller than 2 when the proportion of edges present in the graph (weak density) is at least 3/4, or when the normalized minimum degree (strong density) is at least 1/2. We also show that this result can be improved by a greedy algorithm which provides a constant ratio smaller than 2 when the weak density is at least 1/2. We also provide tight families of graphs for all these approximation ratios. We then look at several algorithms from the literature for VC and SET COVER (SC). We present a unified and critical approach to the Karpinski/Zelikovsky, Imamura/Iwama and Bar-Yehuda/Kehat algorithms, identifying the general scheme underlying these algorithms.
Finally, we look at the CONNECTED VERTEX COVER (CVC) problem, for which we propose new approximation results in dense graphs. We first analyze Carla Savage's algorithm, then a new variant of the Karpinski-Zelikovsky algorithm. Our results show that these algorithms provide the same approximation ratios for CVC as the maximal matching heuristic and the Karpinski-Zelikovsky algorithm did for VC. We provide tight examples for the ratios guaranteed by both algorithms. We also introduce a new invariant, the "price of connectivity of VC", defined as the ratio between the optimal solutions of CVC and VC, and show a nearly tight upper bound on its value as a function of the weak density. Our last chapter discusses software aspects, and presents the use of the GRAPHEDRON software in the framework of approximation algorithms, as well as our contributions to the development of this system.
/
We present a set of approximation results for several covering problems in dense graphs. These results show that for several problems, classical algorithms with constant approximation factors can be analyzed in a finer way and guarantee better constant approximation factors under certain density constraints. In particular, we show that the maximal matching heuristic approximates the VERTEX COVER (VC) and MINIMUM MAXIMAL MATCHING (MMM) problems with a constant factor smaller than 2 when the proportion of edges present in the graph (weak density) is greater than 3/4, or when the normalized minimum degree (strong density) is greater than 1/2. We also show that this result can be improved by a GREEDY-type algorithm, which provides a constant factor smaller than 2 for weak densities greater than 1/2. We also give families of extremal graphs for our approximation factors. We then turned to several algorithms from the literature for the VC and SET COVER (SC) problems. We presented a unified and critical approach to the Karpinski-Zelikovsky, Imamura-Iwama and Bar-Yehuda-Kehat algorithms, identifying a general scheme into which these algorithms fit.
Finally, we studied the CONNECTED VERTEX COVER (CVC) problem, for which we proposed new approximation results in dense graphs, through Carla Savage's algorithm on the one hand, and a new variant of the Karpinski-Zelikovsky algorithm on the other. These results show that we can obtain for CVC the same approximation factors as those obtained for VC using the maximal matching heuristic and the Karpinski-Zelikovsky algorithm. We also give families of extremal graphs for the ratios guaranteed by these two algorithms. We also studied a new invariant, the price of connectivity of VC, defined as the ratio between the optimal solutions of CVC and VC, and showed an upper bound on its value as a function of the weak density. Our last chapter discusses software aspects, and presents the use of the GRAPHEDRON software in the context of approximation algorithms, as well as our contributions to the development of the software.
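For reference, the maximal matching heuristic analysed in the thesis is the classical 2-approximation for VERTEX COVER: take both endpoints of every edge of a greedily built maximal matching. The sketch below shows only this classical heuristic; the density-dependent analysis itself is not reproduced here.

```python
# Classical maximal-matching heuristic for VERTEX COVER (2-approximation).
def vertex_cover_matching(edges):
    """edges: iterable of (u, v) pairs; returns a vertex cover as a set."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered:
            cover.update((u, v))                # take both endpoints of the matching edge
    return cover

if __name__ == "__main__":
    example = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
    print(vertex_cover_matching(example))
```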
|