221 |
Feature based design for jigless assembly. Naing, Soe. January 2004.
The work presented in this thesis was undertaken as part of the three-year ‘Jigless Aerospace Manufacture’ (JAM) project, which was set up to investigate and address the significant scientific, technological and economic issues involved in enabling a new design, manufacture and assembly philosophy based on minimising product-specific jigs, fixtures and tooling. The main goal of the JAM project at Cranfield was the development of appropriate jigless methods and principles, and the subsequent redesign of the JAM project demonstrator structure – a section of the Airbus A320 aircraft Fixed Leading Edge – to fully investigate and realise the capabilities of jigless methodologies and principles. The particular focus of the research described in this thesis was the development of a methodology for designing for jigless assembly and a process for selecting assembly features to enable jigless assembly. A review of the literature showed that no methodologies exist that specifically address design for jigless assembly; however, relevant previous research has been built upon and extended with the incorporation of novel tools and techniques. To facilitate the assembly feature selection process for jigless assembly, an Assembly Feature Library was created that broadened and expanded the conventional definition and use of assembly features. The developed methodology, assembly feature selection process and Feature Library were applied and validated on the JAM project demonstrator structure, which serves as a case study for the tools and techniques developed by the research. Additionally, a costing analysis was carried out which suggests that the use of these tools and techniques to enable jigless assembly could have a considerable impact on both the non-recurring and recurring costs associated with the design, manufacture and assembly of aircraft.
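A minimal sketch of how one entry of such an Assembly Feature Library might be modelled in software; the field names and the filter rule below are illustrative assumptions, not the schema defined in the thesis:

```python
from dataclasses import dataclass

@dataclass
class AssemblyFeature:
    """One entry of a hypothetical Assembly Feature Library.

    Field names are illustrative; the thesis defines its own schema.
    """
    name: str            # e.g. "hole-pin", "slot-tab"
    locating_dofs: int   # degrees of freedom the feature constrains
    self_locating: bool  # can parts locate each other without a jig?
    process: str         # joining process, e.g. "riveting"
    unit_cost: float     # recurring cost contribution per use

def jigless_candidates(features: list[AssemblyFeature]) -> list[AssemblyFeature]:
    """Keep features that locate parts without product-specific tooling."""
    return [f for f in features if f.self_locating and f.locating_dofs >= 3]

library = [
    AssemblyFeature("hole-pin", 4, True, "riveting", 0.15),
    AssemblyFeature("butt-joint", 0, False, "bonding", 0.05),
]
print([f.name for f in jigless_candidates(library)])  # ['hole-pin']
```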
|
222 |
Multi-scale edge-guided image gap restoration. Langari, Bahareh. January 2016.
The focus of this research is the estimation of gaps (missing blocks) in digital images. Two main issues were identified: (1) the appropriate domains for image gap restoration and (2) the methodologies for gap interpolation. Multi-scale transforms provide an appropriate framework for gap restoration: their main advantages are the decomposition into a set of frequency bands and scales, and the ability to progressively reduce the size of the gap to one sample wide at the transform apex. Two types of multi-scale transform were considered for comparative evaluation: the 2-dimensional (2D) discrete cosine transform (DCT) pyramid and the 2D discrete wavelet transform (DWT). For image gap estimation, a family of conventional weighted interpolators and directional edge-guided interpolators was developed and evaluated. Two types of edges were considered: ‘local’ edges, or textures, and ‘global’ edges, such as the boundaries between objects or within/across patterns in the image. For local edge, or texture, modelling, a number of methods were explored which aim to reconstruct, across the restored gap, a set of gradients matching those computed from the known neighbourhood. These differential gradients are estimated along the vertical, horizontal and cross (diagonal) directions for each pixel of the gap. The edge-guided interpolators aim to operate on distinct regions confined within edge lines. For global edge-guided interpolation, the two main methods explored are the Sobel and Canny detectors, the latter providing improved edge detection. The combination and integration of different multi-scale domains, local edge interpolators, global edge-guided interpolators and iterative estimation of edges provided a variety of configurations that were comparatively explored and evaluated. For evaluation, a set of images commonly used in the literature was employed, together with simulated regular and random image gaps at a variety of loss rates. The performance measures used are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The results obtained are better than the state of the art reported in the literature.
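As an illustration of the directional interpolation idea, a minimal single-scale sketch (the thesis operates inside DCT/DWT pyramids with iterative edge estimation, which is omitted here); the function name and the four candidate directions are assumptions for the example:

```python
import numpy as np

def edge_guided_fill(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill masked pixels by averaging along the locally smoothest direction.

    A simplified sketch: for each gap pixel, pick the direction (vertical,
    horizontal or diagonal) with the smallest gradient between its two known
    neighbours and interpolate along it, so the fill follows local edges.
    """
    out = img.astype(float).copy()
    H, W = out.shape
    dirs = [((-1, 0), (1, 0)), ((0, -1), (0, 1)),
            ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    for y, x in zip(*np.nonzero(mask)):
        best, best_grad = None, np.inf
        for (dy1, dx1), (dy2, dx2) in dirs:
            y1, x1, y2, x2 = y + dy1, x + dx1, y + dy2, x + dx2
            if 0 <= y1 < H and 0 <= x1 < W and 0 <= y2 < H and 0 <= x2 < W \
                    and not mask[y1, x1] and not mask[y2, x2]:
                grad = abs(out[y1, x1] - out[y2, x2])  # small grad = along edge
                if grad < best_grad:
                    best_grad, best = grad, (out[y1, x1] + out[y2, x2]) / 2
        if best is not None:
            out[y, x] = best
    return out
```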
|
223 |
Development of an automated system for the spatial characterization of laser beams. Moisés Oliveira dos Santos. 24 August 2007.
The demand for quality in applications involving laser radiation has required improvements in laser performance; consequently, instruments that measure laser parameters faster and more precisely are indispensable. In the areas where lasers are employed, three parameters stand out: (1) power or energy, (2) frequency and (3) spatial extent, or beam width. The beam edges, i.e. the beam width, are defined at a percentage of the maximum energy. The beam diameter, together with the energy, determines the beam density; the divergence and the beam quality factor (M2) can also be determined. This work develops a two-dimensional translation system for the spatial characterization of laser beams. The beam profile is determined with the knife-edge method, which relates the displacement of a blade positioned transversally to the beam to the transmitted energy: blocking the beam with an opaque blade yields the variation of the transmitted energy as a function of the blade position, and this variation represents the integral of the Gaussian beam profile. The system was automated with the LabVIEW program (National Instruments). The prototype proved efficient in the characterization of laser beams, with instrumentation cheap enough for national commercialization; however, data acquisition was slow, making the characterization task more time-consuming. Factors such as the stepper-motor speed and the programming language contributed to the slow acquisition.
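Since the knife-edge curve is the integral of a Gaussian, the beam radius can be recovered by fitting an error function; a minimal sketch on simulated data (the parameterization with the 1/e² radius w is a common convention assumed here, not taken from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge(x, P0, x0, w):
    """Transmitted power vs blade position for a Gaussian beam.

    Integrating a Gaussian profile gives an error-function edge;
    w is the 1/e^2 beam radius, x0 the beam centre, P0 the total power.
    """
    return 0.5 * P0 * (1 - erf(np.sqrt(2) * (x - x0) / w))

# Simulated measurement: blade positions (mm) and transmitted power (a.u.)
x = np.linspace(-3, 3, 61)
true = knife_edge(x, P0=1.0, x0=0.2, w=1.1)
noisy = true + np.random.normal(0, 0.01, x.size)

popt, _ = curve_fit(knife_edge, x, noisy, p0=[1.0, 0.0, 1.0])
print(f"beam radius w = {popt[2]:.3f} mm, centre x0 = {popt[1]:.3f} mm")
```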
|
224 |
The use of EDGE in cellular systems toward the third generation (3G). Santana Aguiar, Elisangela. January 2002.
Currently, the mobile communications market is being driven by the demand for data services. Among the first- and second-generation (1G and 2G) systems, GSM (Global System for Mobile Communications) is indisputably the most widely used in the world for voice applications; however, it still offers data services at low transmission rates, which, together with the low capacity of the systems, are the main obstacles to the progress of mobile multimedia. Moreover, data services are characterized by the need for large bandwidths.

GSM, originally developed for voice transmission and low-rate data services, is therefore rapidly being upgraded to incorporate new multimedia services. In its intermediate 2.5G generation, with HSCSD (High Speed Circuit Switched Data) and GPRS (General Packet Radio Service), data rates will be tripled by the introduction of their enhanced versions, ECSD (Enhanced Circuit Switched Data) and EGPRS (Enhanced GPRS); together these two systems are known as EDGE (Enhanced Data Rates for Global Evolution).

EDGE will use a higher-order modulation, 8PSK (8 Phase Shift Keying), alongside the GMSK (Gaussian Minimum Shift Keying) used by GPRS. It will also use more efficient coding schemes and link quality control mechanisms, IR (Incremental Redundancy) and LA (Link Adaptation), which bring benefits when used under good propagation conditions.

This work deals with the evolution of data services, in particular 2.5G, concentrating on the study of EDGE, more specifically EGPRS, and covering its main aspects and characteristics. Studies were carried out considering single- and multislot allocations and their specifications, for the transmission of different traffic models (Funet, Railway and Mobitex) between the PCU (Packet Control Unit) and MSs (Mobile Stations). A prototype was developed to simulate this level of abstraction and to test an algorithm for optimizing the allocations, allowing a better study and analysis of system performance.
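The roughly threefold rate gain follows directly from the modulation: GMSK carries 1 bit per symbol while 8PSK carries 3, at the same GSM symbol rate of 270.833 ksymbol/s. A small sketch with indicative per-timeslot peak rates (the CS-4 and MCS-9 figures are nominal values from the specifications, used here only for illustration):

```python
# Indicative air-interface figures (approximate, for illustration)
SYMBOL_RATE = 270.833e3              # GSM symbol rate, symbols/s
BITS_PER_SYMBOL = {"GMSK": 1, "8PSK": 3}

for mod, b in BITS_PER_SYMBOL.items():
    print(f"{mod}: {SYMBOL_RATE * b / 1e3:.0f} kbit/s gross per carrier")

# Per-timeslot peak user rates (nominal values from the specifications)
gprs_cs4_kbps, egprs_mcs9_kbps = 21.4, 59.2
for slots in (1, 4, 8):              # single and multislot allocations
    print(f"{slots} slot(s): GPRS {slots * gprs_cs4_kbps:.1f} kbit/s, "
          f"EGPRS {slots * egprs_mcs9_kbps:.1f} kbit/s")
```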
|
225 |
Pedestrian Detection in Low Quality Moving Camera Videos. Hinduja, Saurabh. 25 October 2016.
Pedestrian detection is one of the most researched areas in computer vision and is rapidly gaining importance with the emergence of autonomous vehicles and steering assistance technology. Much work has been done in this field, ranging from the collection of extensive datasets to benchmarking of new technologies, but all the research depends on high-quality hardware such as high-resolution cameras, Light Detection and Ranging (LIDAR) and radar.
For detection in low-quality moving camera videos, we use image deblurring techniques to reconstruct image frames, apply existing pedestrian detection algorithms to the restored frames, and compare our results with the leading research in this area.
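A minimal sketch of such a pipeline, assuming a known horizontal motion-blur kernel (in practice the blur must be estimated) and using OpenCV's stock HOG pedestrian detector; the file name and PSF are placeholders:

```python
import cv2
import numpy as np

def wiener_deblur(img: np.ndarray, psf: np.ndarray, k: float = 0.01) -> np.ndarray:
    """Frequency-domain Wiener deconvolution with a known/estimated blur PSF."""
    img_f = np.fft.fft2(img.astype(float))
    psf_f = np.fft.fft2(psf, s=img.shape)
    restored = img_f * np.conj(psf_f) / (np.abs(psf_f) ** 2 + k)
    return np.abs(np.fft.ifft2(restored))

# Horizontal motion-blur PSF of length 9 (assumed; real blur must be estimated)
psf = np.zeros((9, 9))
psf[4, :] = 1.0 / 9

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
deblurred = cv2.normalize(wiener_deblur(frame, psf), None, 0, 255,
                          cv2.NORM_MINMAX).astype(np.uint8)

# Off-the-shelf HOG pedestrian detector on the restored frame
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, weights = hog.detectMultiScale(deblurred, winStride=(8, 8))
print(f"{len(boxes)} pedestrian(s) detected")
```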
|
226 |
Feature Extraction From Images of Buildings Using Edge Orientation and Length. Danhall, Viktor. January 2012.
Extracting information from a scene captured in digital images, where the information represents some kind of feature, is an important process in image analysis. Both the speed and the accuracy of this process are very important, since many analysis applications either require the analysis of very large data sets or require the data to be extracted in real time. Examples of such applications are 2-dimensional and 3-dimensional object recognition and motion detection. This work focuses on the extraction of salient features from scenes of buildings, using a joint histogram based on both edge orientation and edge length to aid the extraction of the relevant features. The results are promising but need further refinement work before they can be used successfully; the work therefore also includes a fair amount of reflection on the underlying theory.
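A minimal sketch of building such a joint edge orientation/length histogram with OpenCV, assuming connected-component size as a crude proxy for edge length (the thesis's exact feature definition may differ); the file name is a placeholder:

```python
import cv2
import numpy as np

img = cv2.imread("building.png", cv2.IMREAD_GRAYSCALE)

# Edge map and per-pixel orientation from Sobel gradients
edges = cv2.Canny(img, 100, 200)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
orientation = np.degrees(np.arctan2(gy, gx)) % 180   # 0-180 deg, undirected

# Approximate per-edge length as the pixel count of its connected component
n_labels, labels = cv2.connectedComponents((edges > 0).astype(np.uint8))
ori, length = [], []
for lab in range(1, n_labels):
    member = labels == lab
    ori.append(np.median(orientation[member]))   # dominant orientation
    length.append(member.sum())                  # crude length proxy

# Joint orientation/length histogram as the feature descriptor
hist, _, _ = np.histogram2d(ori, length, bins=[18, 8],
                            range=[[0, 180], [0, max(length) + 1]])
print(hist.astype(int))
```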
|
227 |
Software-Defined Computational Offloading for Mobile Edge Computing. Krishna, Nitesh. 03 May 2018.
Computational offloading advances the deployment of Mobile Edge Computing (MEC) in next-generation communication networks. However, the distributed nature of mobile users and the complexity of applications make it challenging to schedule tasks reasonably among multiple devices. Therefore, by leveraging the ideas of Software-Defined Networking (SDN) and Service Composition (SC), we propose a Software-Defined Service Composition (SDSC) model. In this model, the SDSC controller is deployed at the edge of the network and composes services in a centralized manner to reduce task-execution latency and traffic on the access links while satisfying user-specific requirements. We formulate low-latency service composition as a Constraint Satisfaction Problem (CSP), making the approach user-centric. With the advent of SDN, the global view and control of the entire network are made available to the network controller, which our SDSC approach further leverages.
Furthermore, service discovery and task offloading are designed for the MEC environment so that users can have a complex yet robust system. Moreover, this approach performs task execution in a distributed manner. We also define a QoS model that provides the composition rule forming the best possible service composition at the time of need.
Finally, we have extended our SDSC model to account for the constant mobility of mobile devices. To address the mobility issue, we propose a mobility model and a mobility-aware QoS approach enabled in the SDSC model. Simulation results demonstrate that our approach obtains better performance than an energy-saving greedy algorithm and a random offloading approach in a mobile environment.
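A minimal sketch of composing a service as a constraint satisfaction problem by brute force: every task is placed on one tier, placements violating a user-specific traffic constraint are discarded, and the lowest-latency composition wins. All tiers, latencies and the constraint below are illustrative assumptions, not values from the SDSC model:

```python
from itertools import product

# Hypothetical per-task execution latencies (ms) on each tier, incl. transfer
tasks = ["decode", "detect", "render"]
latency = {
    "device": {"decode": 30, "detect": 120, "render": 25},
    "edge":   {"decode": 15, "detect": 40,  "render": 20},
    "cloud":  {"decode": 12, "detect": 25,  "render": 18},
}
access_link_cost = {"device": 0, "edge": 1, "cloud": 3}  # traffic units/task
MAX_LINK_TRAFFIC = 5   # user-specific constraint (assumed)

best, best_total = None, float("inf")
for placement in product(latency, repeat=len(tasks)):    # all compositions
    traffic = sum(access_link_cost[p] for p in placement)
    if traffic > MAX_LINK_TRAFFIC:                       # CSP constraint
        continue
    total = sum(latency[p][t] for p, t in zip(placement, tasks))
    if total < best_total:
        best, best_total = placement, total

print(dict(zip(tasks, best)), f"total latency {best_total} ms")
```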
|
228 |
Decompositions of graphs into connected subgraphs. Musílek, Jan. January 2015.
In 2003, at the Eurocomb conference, J. Barát and C. Thomassen presented the definition of and basic results on edge partitioning of graphs. Edge partitioning is, in essence, the possibility of covering the edges of a graph with connected subgraphs of prescribed sizes; a graph has the edge partitioning property if and only if it can be covered for all prescribed subgraph sizes. Our work focuses on edge partitioning, where fewer results are known compared to vertex partitioning. We prove that edge partitioning is implied by the existence of an open dominating trail, and therefore by edge 4-connectivity. We also define a limited version of edge partitioning and the spectrum of partitioning, and prove some claims that hold for all graphs. Finally, we explore limited partitioning on some specific classes of graphs.
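A minimal brute-force sketch of the edge partitioning question: can the edge set be split into connected parts of prescribed sizes? (Exponential search, for small graphs only; not an algorithm from the thesis.)

```python
from itertools import combinations

def connected(edges):
    """Is the subgraph formed by this non-empty edge set connected?"""
    edges = list(edges)
    comp = {edges[0][0], edges[0][1]}
    remaining = edges[1:]
    grew = True
    while grew and remaining:
        grew = False
        for e in remaining[:]:
            if e[0] in comp or e[1] in comp:
                comp.update(e)
                remaining.remove(e)
                grew = True
    return not remaining

def edge_partition(edges, sizes):
    """Try to split `edges` into connected parts of the prescribed `sizes`."""
    if not sizes:
        return [] if not edges else None
    k, rest = sizes[0], sizes[1:]
    for part in combinations(edges, k):
        if connected(part):
            sub = edge_partition([e for e in edges if e not in part], rest)
            if sub is not None:
                return [list(part)] + sub
    return None

# Example: the 4-cycle split into two connected parts of 2 edges each
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_partition(c4, [2, 2]))
```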
|
229 |
Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information-Theoretic Analysis. Azimi, Seyyed; Simeone, Osvaldo; Tandon, Ravi. 18 July 2017.
The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered with by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting the cloud and the small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of online and offline caching is finally compared using numerical results.
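A hedged sketch of the normalization behind the DTB metric, with notation assumed rather than copied from the paper:

```latex
% Delivery time per bit (notation assumed): for files of L bits
% delivered within T(L) symbol periods,
\Delta \;=\; \limsup_{L \to \infty} \frac{\mathbb{E}[T(L)]}{L},
% and for online caching the long-term DTB averages over time slots t:
\bar{\Delta} \;=\; \limsup_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} \Delta_t .
```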
|
230 |
Adaptive Distributed Caching for Scalable Machine Learning Services. Drolia, Utsav. 01 August 2017.
Applications for Internet-enabled devices use machine learning to process captured data to make intelligent decisions or provide information to users. Typically, the computation to process the data is executed in cloud-based backends. The devices are used for sensing data, offloading it to the cloud, receiving responses and acting upon them. However, this approach leads to high end-to-end latency due to communication over the Internet. This dissertation proposes reducing this response time by minimizing offloading and pushing computation close to the source of the data, i.e. to edge servers and devices themselves. To adapt to the resource-constrained environment at the edge, it presents an approach that leverages spatiotemporal locality to push subparts of the model to the edge. This approach is embodied in a distributed caching framework, Cachier, which is built upon a novel caching model for recognition and is distributed across edge servers and devices. The analytical caching model for recognition provides a formulation for the expected latency of recognition requests in Cachier. The formulation incorporates the effects of compute time and accuracy, as well as network conditions, thus providing a method to compute expected response times under various conditions. This is utilized as a cost function by Cachier at edge servers and devices. By analyzing requests at the edge server, Cachier caches relevant parts of the trained model at edge servers, which are used to respond to requests, minimizing the number of requests that go to the cloud. Cachier then uses context-aware prediction to prefetch parts of the trained model onto devices, so that requests can be processed on the devices, minimizing the number of offloaded requests. Finally, Cachier enables cooperation between nearby devices to exchange prefetched data, reducing the dependence on remote servers even further. The efficacy of Cachier is evaluated with an art recognition application driven by real-world traces gathered at museums. Through a large-scale study with different control variables, we show that Cachier can lower latency, increase scalability and decrease infrastructure resource usage, while maintaining high accuracy.
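A minimal sketch of a Cachier-style expected-latency cost function used to size the edge cache; the Zipf popularity model and all timing constants below are assumed for illustration, not taken from the dissertation:

```python
import numpy as np

def expected_latency(k, zipf_s=0.8, n_items=10_000,
                     t_lookup_per_item=0.002, t_feature=15.0, t_cloud=250.0):
    """Expected recognition latency (ms) when caching the k most popular classes.

    Requests follow an assumed Zipf popularity distribution; a cache hit
    costs feature extraction plus a lookup that grows with cache size,
    while a miss additionally pays the cloud round trip.
    """
    ranks = np.arange(1, n_items + 1)
    popularity = ranks ** -zipf_s
    popularity /= popularity.sum()
    p_hit = popularity[:k].sum()          # spatiotemporal locality -> high hit rate
    t_edge = t_feature + t_lookup_per_item * k
    return t_edge + (1 - p_hit) * t_cloud

# Pick the cache size that minimizes expected latency under these assumptions
best_k = min(range(100, 10_001, 100), key=expected_latency)
print(best_k, f"{expected_latency(best_k):.1f} ms")
```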
|