41 |
Contributions to active visual estimation and control of robotic systems / Contributions à la perception active et à la commande de systèmes robotiques
Spica, Riccardo, 11 December 2015 (has links)
L'exécution d'une expérience scientifique est un processus qui nécessite une phase de préparation minutieuse et approfondie. Le but de cette phase est de s'assurer que l'expérience donne effectivement le plus de renseignements possibles sur le processus que l'on est en train d'observer, de manière à minimiser l'effort (en termes, par exemple, du nombre d'essais ou de la durée de chaque expérience) nécessaire pour parvenir à une conclusion digne de confiance. De manière similaire, la perception est un processus actif dans lequel l'agent percevant (que ce soit un humain, un animal ou un robot) fait de son mieux pour maximiser la quantité d'informations acquises sur l'environnement en utilisant ses capacités de détection et ses ressources limitées. Dans de nombreuses applications robotisées, l'état d'un robot peut être partiellement récupéré par ses capteurs embarqués. Des schémas d'estimation peuvent être exploités pour récupérer en ligne les «informations manquantes» et les fournir à des planificateurs/contrôleurs de mouvement, à la place des états réels non mesurables. Cependant, l'estimation doit souvent faire face aux relations non linéaires entre l'environnement et les mesures des capteurs qui font que la convergence et la précision de l'estimation sont fortement affectées par la trajectoire suivie par le robot/capteur. Par exemple, les techniques de commande basées sur la vision, telles que l'Asservissement Visuel Basé-Image (IBVS), exigent normalement une certaine connaissance de la structure 3-D de la scène qui ne peut pas être extraite directement à partir d'une seule image acquise par la caméra. On peut exploiter un processus d'estimation (“Structure from Motion - SfM”) pour reconstruire ces informations manquantes. 
Toutefois, les performances d'un estimateur SfM sont grandement affectées par la trajectoire suivie par la caméra pendant l'estimation, créant ainsi un fort couplage entre le mouvement de la caméra (nécessaire, par exemple, pour réaliser une tâche visuelle) et la performance/précision de l'estimation 3-D. À cet égard, une contribution de cette thèse est le développement d'une stratégie d'optimisation de trajectoire en ligne qui permet de maximiser le taux de convergence d'un estimateur SfM en affectant (activement) le mouvement de la caméra. L'optimisation est basée sur les conditions classiques de persistance d'excitation utilisées en commande adaptative pour caractériser le conditionnement d'un problème d'estimation. Cette mesure est aussi fortement liée à la matrice d'information de Fisher employée à des fins similaires dans les cadres d'estimation probabilistes. Nous montrons aussi comment cette technique peut être couplée à l'exécution simultanée d'une tâche d'asservissement visuel en utilisant des techniques de résolution et de maximisation de la redondance. Tous les résultats théoriques présentés dans cette thèse sont validés par une vaste campagne expérimentale utilisant un robot manipulateur équipé d'une caméra embarquée. / As every scientist and engineer knows, running an experiment requires a careful and thorough planning phase. The goal of such a phase is to ensure that the experiment will give the scientist as much information as possible about the process that she/he is observing, so as to minimize the experimental effort (in terms of, e.g., the number of trials, the duration of each experiment and so on) needed to reach a trustworthy conclusion. Similarly, perception is an active process in which the perceiving agent (be it a human, an animal or a robot) tries its best to maximize the amount of information acquired about the environment using its limited sensor capabilities and resources.
In many sensor-based robot applications, the state of a robot can only be partially retrieved from its on-board sensors. State estimation schemes can be exploited for recovering online the “missing information” then fed to any planner/motion controller in place of the actual unmeasurable states. When considering non-trivial cases, however, state estimation must often cope with the nonlinear sensor mappings from the observed environment to the sensor space, which cause the estimation convergence and accuracy to depend strongly on the particular trajectory followed by the robot/sensor. For instance, when relying on vision-based control techniques, such as Image-Based Visual Servoing (IBVS), some knowledge of the 3-D structure of the scene is needed for correct execution of the task. However, this 3-D information cannot, in general, be extracted from a single camera image without additional assumptions on the scene. One can exploit a Structure from Motion (SfM) estimation process to reconstruct this missing 3-D information. However, the performance of any SfM estimator is known to be highly affected by the trajectory followed by the camera during the estimation process, thus creating a tight coupling between camera motion (needed to, e.g., realize a visual task) and the performance/accuracy of the estimated 3-D structure. In this context, a main contribution of this thesis is the development of an online trajectory optimization strategy that allows maximization of the convergence rate of an SfM estimator by (actively) affecting the camera motion. The optimization is based on the classical persistence of excitation condition used in the adaptive control literature to characterize the well-posedness of an estimation problem. This metric is also strongly related to the Fisher information matrix employed in probabilistic estimation frameworks for similar purposes.
We also show how this technique can be coupled with the concurrent execution of an IBVS task using appropriate redundancy resolution and maximization techniques. All of the theoretical results presented in this thesis are validated by an extensive experimental campaign run using a real robotic manipulator equipped with an in-hand camera.
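The persistence-of-excitation condition that the abstract uses to characterize well-posedness can be illustrated with a toy numeric sketch. This is not from the thesis: the function name and the two-parameter regressor are illustrative assumptions, standing in for whatever regressor the SfM estimator actually produces along a camera trajectory.

```python
import numpy as np

def excitation_metric(regressors):
    """Smallest eigenvalue of the estimation Gramian sum_k phi_k phi_k^T.

    A larger value indicates a better-conditioned estimation problem,
    i.e. the trajectory excites every estimated parameter."""
    gram = sum(np.outer(phi, phi) for phi in regressors)
    return float(np.linalg.eigvalsh(gram)[0])

# Toy regressors for a two-parameter estimator: a trajectory whose
# second component varies excites both parameters, while a constant
# regressor leaves one direction unobserved (zero eigenvalue).
rich = [np.array([1.0, 0.1 * k]) for k in range(5)]
poor = [np.array([1.0, 0.0]) for _ in range(5)]
```

A trajectory optimizer in the spirit of the thesis would steer the camera so as to increase a metric of this kind (or the closely related Fisher information) while the visual servoing task is being executed.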
|
42 |
Spatial Multimedia Data Visualization
JAMONNAK, SUPHANUT, 30 November 2021 (has links)
No description available.
|
43 |
DEVELOPMENT OF MULTIMODAL FUSION-BASED VISUAL DATA ANALYTICS FOR ROBOTIC INSPECTION AND CONDITION ASSESSMENT
Tarutal Ghosh Mondal (11775980), 01 December 2021 (links)
This dissertation broadly focuses on the autonomous condition assessment of civil infrastructure using vision-based methods, which present a plausible alternative to existing manual techniques. A region-based convolutional neural network (Faster R-CNN) is exploited for the detection of various kinds of earthquake-induced damage in reinforced concrete buildings. Four damage categories are considered: surface cracks, spalling, spalling with exposed rebars, and severely buckled rebars. The performance of the model is evaluated on image data collected from buildings damaged in several past earthquakes in different parts of the world. The proposed algorithm can be integrated with inspection drones or mobile robotic platforms for the quick assessment of damaged buildings, leading to expeditious planning of retrofit operations, minimization of damage costs, and timely restoration of essential services.

In addition, a computer vision-based approach is presented to track the evolution of damage over time by analysing historical visual inspection data. Once a defect is detected in a recent inspection data set, its spatial correspondences in the data collected during previous rounds of inspection are identified using popular computer vision techniques. A single reconstructed view is then generated for each inspection round by synthesizing the candidate corresponding images. The chronology of damage thus established facilitates time-based quantification and lucid visual interpretation. This study is likely to enhance the efficiency of structural inspection by introducing the time dimension into the autonomous condition assessment pipeline.

Additionally, this dissertation incorporates depth fusion into a CNN-based semantic segmentation model. A 3D animation and visual effects package is used to generate a synthetic database of spatially aligned RGB and depth image pairs representing the damage categories commonly observed in reinforced concrete buildings. A number of encoding techniques are explored for representing the depth data, and various schemes for fusing the RGB and depth data are investigated to identify the best fusion strategy. It was observed that depth fusion significantly enhances the performance of deep learning-based damage segmentation algorithms. Furthermore, strategies are proposed to synthesize depth information from the corresponding RGB frames, which eliminates the need for depth sensing at deployment time without compromising segmentation performance. Overall, the scientific research presented in this dissertation will be a stepping stone towards realizing a fully autonomous structural condition assessment pipeline.
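The RGB-depth fusion idea can be illustrated with the simplest scheme a segmentation CNN could consume: channel-wise concatenation ("early fusion"). This is a minimal sketch under stated assumptions, not the dissertation's network; the choice of early fusion and the min-max depth scaling are illustrative.

```python
import numpy as np

def early_fusion(rgb, depth):
    """Concatenate an RGB image (H, W, 3) with a single-channel depth
    map (H, W) into one (H, W, 4) input tensor, after rescaling the
    depth values to the [0, 1] range of the colour channels."""
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    d = (d - d.min()) / (span if span > 0 else 1.0)
    return np.concatenate([rgb, d[..., None]], axis=-1)

rgb = np.random.rand(8, 8, 3)       # synthetic colour frame
depth = np.random.rand(8, 8) * 5.0  # synthetic depth map, e.g. in metres
fused = early_fusion(rgb, depth)
```

Alternatives the dissertation's comparison alludes to include late fusion (separate RGB and depth encoders merged at feature level) and richer depth encodings than a single normalised channel.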
|
44 |
Saliency grouped landmarks for use in vision-based simultaneous localisation and mapping
Joubert, Deon, January 2013 (has links)
The effective application of mobile robotics requires that robots be able to perform tasks with an
extended degree of autonomy. Simultaneous localisation and mapping (SLAM) aids automation by
providing a robot with the means of exploring an unknown environment while being able to position
itself within this environment. Vision-based SLAM benefits from the large amounts of data produced
by cameras but requires intensive processing of these data to obtain useful information. In this dissertation
it is proposed that, as the saliency content of an image distils a large amount of the information
present, it can be used to benefit vision-based SLAM implementations.
The proposal is investigated by developing a new landmark for use in SLAM. Image keypoints are
grouped together according to the saliency content of an image to form the new landmark. A SLAM
system utilising this new landmark is implemented in order to demonstrate the viability of using the
landmark. The landmark extraction, data filtering and data association routines necessary to make
use of the landmark are discussed in detail. A Microsoft Kinect is used to obtain video images as
well as 3D information of a viewed scene. The system is evaluated using computer simulations and
real-world datasets from indoor structured environments. The datasets used are both newly generated
and freely available benchmarking ones. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
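The grouping step that forms the new landmark can be caricatured in a few lines: keypoints are binned by the salient region of the image they fall inside. This is a hedged sketch, not the dissertation's extraction routine; the label-map representation of saliency and all names are assumptions.

```python
import numpy as np

def group_keypoints_by_saliency(keypoints, region_labels):
    """Group image keypoints by the salient region they fall inside.

    keypoints: (N, 2) integer array of (row, col) pixel coordinates.
    region_labels: (H, W) integer map, 0 = non-salient background,
                   k > 0 = salient region k.
    Returns {region_id: [keypoint indices]}; background keypoints are
    discarded, since they belong to no grouped landmark."""
    groups = {}
    for i, (r, c) in enumerate(np.asarray(keypoints, dtype=int)):
        label = int(region_labels[r, c])
        if label > 0:
            groups.setdefault(label, []).append(i)
    return groups

labels = np.zeros((10, 10), dtype=int)
labels[0:4, 0:4] = 1   # one salient blob
labels[6:9, 6:9] = 2   # another salient blob
kps = [(1, 1), (2, 3), (7, 7), (5, 5)]  # the last point lies on background
```

Each resulting group would then serve as a single SLAM landmark, with the Kinect's 3D data supplying its position.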
|
45 |
Obstacle Avoidance, Visual Automatic Target Tracking, and Task Allocation for Small Unmanned Air Vehicles
Saunders, Jeffery Brian, 10 July 2009 (has links) (PDF)
Recent developments in autopilot technology have increased the potential use of micro air vehicles (MAVs). As the number of applications increases, so does the demand for MAVs that operate autonomously in any scenario. Currently, MAVs cannot reliably fly in cluttered environments because of the difficulty of detecting and avoiding obstacles. The main contribution of this research is to offer obstacle detection and avoidance strategies using laser rangers and cameras coupled with computer vision processing. In addition, we explore methods of visual target tracking and task allocation. Utilizing a laser ranger, we develop a dynamic geometric guidance strategy to generate paths around detected obstacles. The strategy overrides a waypoint planner in the presence of pop-up obstacles. We develop a second guidance strategy that oscillates the MAV around the waypoint path and guarantees obstacle detection and avoidance. Both rely on a laser ranger for obstacle detection and are demonstrated in simulation and in flight tests. Utilizing EO/IR cameras, we develop two guidance strategies based on the movement of obstacles in the camera field-of-view to maneuver the MAV around pop-up obstacles. Vision processing available on a ground station provides range and bearing to nearby obstacles. The first guidance law is derived for single-obstacle avoidance and pushes the obstacle to the edge of the camera field-of-view, causing the vehicle to avoid a collision. The second guidance law is derived for two obstacles and balances the obstacles on opposite edges of the camera field-of-view, maneuvering between them. Both guidance strategies are demonstrated in simulation and flight tests. This research also addresses the problem of tracking a ground-based target with a fixed camera pointing out the wing of a MAV that is subjected to constant wind.
Rather than planning explicit trajectories for the vehicle, a visual feedback guidance strategy is developed that maintains the target in the field-of-view of the camera. We show that under ideal conditions, the resulting flight paths are optimal elliptical trajectories if the target is forced to the center of the image plane. Using simulation and flight tests, the resulting algorithm is shown to be robust with respect to gusts and vehicle modeling errors. Lastly, we develop a method of a priori collision avoidance in assigning multiple tasks to cooperative unmanned air vehicles (UAV). The problem is posed as a combinatorial optimization problem. A branch and bound tree search algorithm is implemented to find a feasible solution using a cost function integrating distance traveled and proximity to other UAVs. The results demonstrate that the resulting path is near optimal with respect to distance traveled and includes a significant increase in expected proximity distance to other UAVs. The algorithm runs in less than a tenth of a second allowing on-the-fly replanning.
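The branch-and-bound task allocation can be sketched on a toy instance: one task per UAV, straight-line distance as the cost, and partial assignments pruned once they exceed the best complete solution. This is an illustrative sketch, not the dissertation's algorithm; its cost function also weighed proximity to other UAVs, which is omitted here, and all names are assumptions.

```python
import math

def assign_tasks(uavs, tasks):
    """Branch-and-bound assignment of one task per UAV, minimising
    total straight-line travel distance.  Partial assignments whose
    accumulated cost already meets or exceeds the best complete
    solution found so far are pruned."""
    n = len(uavs)
    best = [math.inf, None]  # [best cost, best assignment]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def branch(i, used, cost, assignment):
        if cost >= best[0]:          # prune this subtree
            return
        if i == n:                   # complete assignment
            best[0], best[1] = cost, assignment[:]
            return
        for j in range(len(tasks)):
            if j not in used:
                assignment.append(j)
                branch(i + 1, used | {j},
                       cost + dist(uavs[i], tasks[j]), assignment)
                assignment.pop()

    branch(0, frozenset(), 0.0, [])
    return best[1], best[0]
```

On small fleets the pruning keeps the exhaustive search fast, consistent with the sub-tenth-of-a-second replanning times reported in the abstract.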
|
46 |
Vision-Based Obstacle Avoidance for Multiple Vehicles Performing Time-Critical Missions
Dippold, Amanda, 11 June 2009 (links)
This dissertation discusses vision-based static obstacle avoidance for a fleet of nonholonomic robots tasked to arrive at a final destination simultaneously. Path generation for each vehicle is computed using a single polynomial function that incorporates the vehicle constraints on velocity and acceleration and satisfies boundary conditions by construction. Furthermore, the arrival criterion and a preliminary obstacle avoidance scheme are incorporated into the path generation. Each robot is equipped with an inertial measurement unit that provides measurements of the vehicle's position and velocity, and a monocular camera that detects obstacles. The obstacle avoidance algorithm deforms the vehicle's original path around at most one obstacle per vehicle, in a direction that minimizes an obstacle avoidance potential function. Deconfliction of the vehicles during obstacle avoidance is achieved by imposing a separation condition at the path generation level. Two estimation schemes are applied to estimate the unknown obstacle parameters: the first is an existing method known in the literature as the Identifier-Based Observer, and the second is a recently developed fast estimator. It is shown that, compared to the Identifier-Based Observer, the performance of the fast estimator and its effect on the obstacle avoidance algorithm can be arbitrarily improved by an appropriate choice of parameters. Coordination in time of all vehicles is completed in an outer loop, which adjusts the desired velocity profile of each vehicle in order to meet the simultaneous arrival constraints. Simulation results illustrate the theoretical findings. / Ph. D.
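The claim that boundary conditions are satisfied "by construction" can be illustrated with the simplest such polynomial, a cubic Hermite segment whose coefficients are determined directly by the endpoint positions and velocities. This is an illustrative sketch, not the dissertation's exact parameterisation, which also encodes velocity and acceleration constraints.

```python
import numpy as np

def hermite_path(p0, p1, v0, v1):
    """Cubic polynomial p(t), t in [0, 1], with p(0) = p0, p(1) = p1,
    p'(0) = v0, p'(1) = v1 -- the boundary conditions hold for any
    choice of endpoints, i.e. by construction."""
    p0, p1, v0, v1 = map(np.asarray, (p0, p1, v0, v1))
    a = 2 * p0 - 2 * p1 + v0 + v1      # cubic coefficient
    b = -3 * p0 + 3 * p1 - 2 * v0 - v1  # quadratic coefficient

    def p(t):
        return a * t**3 + b * t**2 + v0 * t + p0

    return p
```

A path deformation scheme like the one in the abstract would then perturb such a segment around a detected obstacle while leaving the endpoint conditions untouched.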
|
47 |
Image texture analysis for inferential sensing in the process industries
Kistner, Melissa, 12 1900 (links)
Thesis (MScEng)-- Stellenbosch University, 2013. / ENGLISH ABSTRACT: The measurement of key process quality variables is important for the efficient and economical operation of many chemical and mineral processing systems, as these variables can be used in process monitoring and control systems to identify and maintain optimal process conditions. However, in many engineering processes the key quality variables cannot be measured directly with standard sensors. Inferential sensing is the real-time prediction of such variables from other, measurable process variables through some form of model.
In vision-based inferential sensing, visual process data in the form of images or video frames are used as input variables to the inferential sensor. This is a suitable approach when the desired process quality variable is correlated with the visual appearance of the process. The inferential sensor model is then based on analysis of the image data.
Texture feature extraction is an image analysis approach by which the texture or spatial organisation of pixels in an image can be described. Two texture feature extraction methods, namely the use of grey-level co-occurrence matrices (GLCMs) and wavelet analysis, have predominated in applications of texture analysis to engineering processes. While these two baseline methods are still widely considered to be the best available texture analysis methods, several newer and more advanced methods have since been developed, which have properties that should theoretically provide these methods with some advantages over the baseline methods. Specifically, three advanced texture analysis methods have received much attention in recent machine vision literature, but have not yet been applied extensively to process engineering applications: steerable pyramids, textons and local binary patterns (LBPs).
The purpose of this study was to compare the use of advanced image texture analysis methods to baseline texture analysis methods for the prediction of key process quality variables in specific process engineering applications. Three case studies, in which texture is thought to play an important role, were considered: (i) the prediction of platinum grade classes from images of platinum flotation froths, (ii) the prediction of fines fraction classes from images of coal particles on a conveyor belt, and (iii) the prediction of mean particle size classes from images of hydrocyclone underflows.
Each of the five texture feature sets was used as input to two different classifiers (K-nearest neighbours and discriminant analysis) to predict the output variable classes for each of the three case studies mentioned above. The quality of the features extracted with each method was assessed in a structured manner, based on their classification performance after optimisation of the hyperparameters associated with each method.
In the platinum froth flotation case study, steerable pyramids and LBPs significantly outperformed the GLCM, wavelet and texton methods. In the case study of coal fines fractions, the GLCM method was significantly outperformed by all four other methods. Finally, in the hydrocyclone underflow case study, steerable pyramids and LBPs significantly outperformed GLCM and wavelet methods, while the result for textons was inconclusive.
Considering all of these results together, the overall conclusion was drawn that two of the three advanced texture feature extraction methods, namely steerable pyramids and LBPs, can extract feature sets of superior quality, when compared to the baseline GLCM and wavelet methods in these three case studies. The application of steerable pyramids and LBPs to further image analysis data sets is therefore recommended as a viable alternative to the traditional GLCM and wavelet texture analysis methods. / AFRIKAANSE OPSOMMING: Die meting van sleutelproseskwaliteitsveranderlikes is belangrik vir die doeltreffende en ekono-miese werking van baie chemiese– en mineraalprosesseringsisteme, aangesien hierdie verander-likes gebruik kan word in prosesmonitering– en beheerstelsels om die optimale prosestoestande te identifiseer en te handhaaf. In baie ingenieursprosesse kan die sleutelproseskwaliteits-veranderlikes egter nie direk met standaard sensors gemeet word nie. Inferensiële waarneming is die intydse voorspelling van sulke veranderlikes vanaf ander, meetbare prosesveranderlikes deur van ‘n model gebruik te maak.
In beeldgebaseerde inferensiële waarneming word visuele prosesdata, in die vorm van beelde of videogrepe, gebruik as insetveranderlikes vir die inferensiële sensor. Hierdie is ‘n gepaste benadering wanneer die verlangde proseskwaliteitsveranderlike met die visuele voorkoms van die proses gekorreleer is. Die inferensiële sensormodel word dan gebaseer op die analise van die beelddata.
Tekstuurkenmerkekstraksie is ‘n beeldanalisebenadering waarmee die tekstuur of ruimtelike organisering van die beeldelemente beskryf kan word. Twee tekstuurkenmerkekstraksiemetodes, naamlik die gebruik van grysskaalmede-aanwesigheidsmatrikse (GSMMs) en golfie-analise, is sterk verteenwoordig in ingenieursprosestoepassings van tekstuuranalise. Alhoewel hierdie twee grondlynmetodes steeds algemeen as die beste beskikbare tekstuuranalisemetodes beskou word, is daar sedertdien verskeie nuwer en meer gevorderde metodes ontwikkel, wat beskik oor eienskappe wat teoreties voordele vir hierdie metodes teenoor die grondlynmetodes behoort te verskaf. Meer spesifiek is daar drie gevorderde tekstuuranalisemetodes wat baie aandag in onlangse masjienvisieliteratuur geniet het, maar wat nog nie baie op ingenieursprosesse toegepas is nie: stuurbare piramiedes, tekstons en lokale binêre patrone (LBPs).
Die doel van hierdie studie was om die gebruik van gevorderde tekstuuranalisemetodes te vergelyk met grondlyntekstuuranaliesemetodes vir die voorspelling van sleutelproseskwaliteits-veranderlikes in spesifieke prosesingenieurstoepassings. Drie gevallestudies, waarin tekstuur ‘n belangrike rol behoort te speel, is ondersoek: (i) die voorspelling van platinumgraadklasse vanaf beelde van platinumflottasieskuime, (ii) die voorspelling van fynfraksieklasse vanaf beelde van steenkoolpartikels op ‘n vervoerband, en (iii) die voorspelling van gemiddelde partikelgrootteklasse vanaf beelde van hidrosikloon ondervloeie.
Elk van die vyf tekstuurkenmerkstelle is as insette vir twee verskillende klassifiseerders (K-naaste bure en diskriminantanalise) gebruik om die klasse van die uitsetveranderlikes te voorspeel, vir elk van die drie gevallestudies hierbo genoem. Die kwaliteit van die kenmerke wat deur elke metode ge-ekstraheer is, is op ‘n gestruktureerde manier bepaal, gebaseer op hul klassifikasieprestasie na die optimering van die hiperparameters wat verbonde is aan elke metode. In die platinumskuimflottasiegevallestudie het stuurbare piramiedes en LBPs betekenisvol beter as die GSMM–, golfie– en tekstonmetodes presteer. In die steenkoolfynfraksiegevallestudie het die GSMM-metode betekenisvol slegter as al vier ander metodes presteer. Laastens, in die hidrosikloon ondervloeigevallestudie het stuurbare piramiedes en LBPs betekenisvol beter as die GSMM– en golfiemetodes presteer, terwyl die resultaat vir tekstons nie beslissend was nie.
Deur al hierdie resultate gesamentlik te beskou, is die oorkoepelende gevolgtrekking gemaak dat twee van die drie gevorderde tekstuurkenmerkekstraksiemetodes, naamlik stuurbare piramiedes en LBPs, hoër kwaliteit kenmerkstelle kan ekstraheer in vergelyking met die GSMM– en golfiemetodes, vir hierdie drie gevallestudies. Die toepassing van stuurbare piramiedes en LBPs op verdere beeldanalise-datastelle word dus aanbeveel as ‘n lewensvatbare alternatief tot die tradisionele GSMM– en golfietekstuuranalisemetodes.
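Of the texture methods compared in this study, the local binary pattern is the simplest to sketch. The following is a minimal 8-neighbour implementation for illustration only, without the rotation-invariant, uniform-pattern, or multi-scale refinements normally used in practice; the bit ordering is an arbitrary choice.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern (LBP) codes for the
    interior pixels of a grey-level image: each neighbour contributes
    one bit, set when its intensity is >= the centre pixel's."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_features(img, bins=256):
    """Normalised histogram of LBP codes, used as a texture feature
    vector for a classifier such as K-nearest neighbours."""
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Feature vectors of this kind (one histogram per image) are what the study's two classifiers would consume, alongside the GLCM, wavelet, steerable pyramid and texton alternatives.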
|
48 |
Change your Perspective: Exploration of a 3D Network created with Open Data in an Immersive Virtual Reality Environment using a Head-mounted Display and Vision-based Motion Controls
Reski, Nico, January 2015 (links)
Year after year, technologies evolve at an incredibly rapid pace, becoming faster, more complex, more accurate and more immersive. Looking back just a decade, interaction technologies in particular have made a major leap. In 2013, after being researched for quite some time, virtual reality (VR) aroused renewed enthusiasm and finally reached mainstream attention, as so-called head-mounted displays (HMDs), devices worn on the head that grant a visual peek into the virtual world, gained more and more acceptance with end-users. Currently, humans interact with computers in a very counter-intuitive, two-dimensional way. The ability to experience digital content in the most natural human manner, by simply looking around and perceiving information from the surroundings, has the potential to be a major game changer in how we perceive and eventually interact with digital information. However, this confronts designers and developers with new challenges in applying these exciting technologies and supporting interaction mechanisms that let users naturally explore digital information in the virtual world, ultimately overcoming real-world boundaries. Within the virtual world, the only limit is our imagination. This thesis investigates an approach to naturally interacting with and exploring information based on open data within an immersive virtual reality environment using a head-mounted display and vision-based motion controls. For this purpose, an immersive VR application visualizing information as a network of European capital cities has been implemented, offering interaction through gesture input. The application places a major focus on the exploration of the generated network and the consumption of the displayed information.
The conducted user interaction study with eleven participants investigated their acceptance of the developed prototype, estimating their workload and examining their exploratory behaviour, while an additional dialogue with five experts, in the form of explorative discussions, provided further feedback on the prototype's design and concept. The results indicate the participants' enthusiasm and excitement towards the novelty and intuitiveness of exploring information in a less traditional way than before, while challenging them with the applied interface and interaction design in a positive manner. The design and concept were also accepted by the experts, who valued the idea and implementation. They provided constructive feedback on the visualization of the information and encouraged being even bolder in making use of the available 3D environment. Finally, the thesis discusses these findings and proposes recommendations for future work.
|
49 |
Proposta de elaboração de um modelo de gestão estratégica para uma empresa de serviços de saúde: Irradial: Centro de Diagnóstico por Imagem
Chemale, Letícia Sbardelotto, 31 July 2013 (links)
O setor da saúde, no Brasil - vital para o desenvolvimento do país - tem se modificado drasticamente, nos últimos anos. Com as novas tecnologias, os novos conhecimentos médicos e o maior poder aquisitivo da população, a competição na área se tornou cada vez mais acirrada. O segmento de diagnóstico por imagem se caracteriza, em Porto Alegre/RS, pela concorrência entre clínicas particulares, muitas vezes, administradas por médicos ou por gestão familiar. A briga por uma fatia maior das clínicas particulares compete diretamente com grandes corporações hospitalares, o que contribui para a gestão profissional do setor, em que as empresas buscam a estratégia como uma ferramenta sustentável, para criar valor entre seus clientes. Esta dissertação tem por objetivo propor a elaboração de um modelo de gestão estratégica para a Clínica de Diagnóstico por Imagem Irradial. Inicialmente, foram realizadas quatro etapas, na empresa, por meio da pesquisa-ação. A primeira e a segunda buscaram identificar e explicitar o cenário em que a empresa estava inserida. Com este objetivo cumprido, através de seminários e de workshops com os diretores, foi possível esclarecer a visão e analisar a estratégia da empresa. A terceira etapa da pesquisa tinha por objetivo reconhecer as expectativas e as demandas dos stakeholders da empresa. Através do grupo de foco, com alguns médicos, clientes, funcionários e diretores, identificaram-se os fundamentos da cocriação de valor e formulou-se a estratégia da empresa, utilizando o Business Model. Em um segundo momento, ainda da terceira etapa, foi aplicado aos diretores um questionário semiestruturado, para a verificação da possibilidade de viabilizar a estratégia formulada, de acordo com os recursos reais da empresa. A quarta etapa visou definir um conjunto estruturado de objetivos e de indicadores interligados na relação causa-efeito e as perspectivas do balanced scorecard, que contemplassem a estratégia sugerida.
Por meio de workshops e de seminários com os diretores, traduziu-se a estratégia definida, através de objetivos e de indicadores, nas perspectivas do balanced scorecard. A partir do resultado de cada etapa, utilizando-se um conjunto de ferramentas tradicionais e já conhecidas pelas empresas - que propiciam e privilegiam os processos de inovação de valor - propôs-se o mapa estratégico da Irradial, que configura a estratégia escolhida pela empresa e a desdobra a todos os envolvidos, sustentando uma vantagem competitiva que a empresa busca, para atuar, no mercado. / The health sector in Brazil - vital to the country's development - has changed drastically over the past few years. Due to new technologies, new and improved medical knowledge and the greater buying power of the population, competition has become increasingly fierce. The image diagnosis segment in Porto Alegre/RS is characterized by competition among private clinics, often managed by doctors or run as family enterprises. In their struggle for a bigger market share, the private clinics compete directly with large hospital corporations, which contributes to professional management in the sector, and companies look to strategy as a sustainable tool to create value for their clients. The objective of this dissertation is to propose the development of a strategic management model for the Clínica de Diagnóstico por Imagem Irradial. Initially, four steps were carried out within the company through action research. The first and second steps sought to identify and describe the scenario of which the company was a part. After fulfilling this goal, through seminars and workshops with the directors, it was possible to clarify the vision and analyze the company's strategy. The third step of the research aimed to establish the expectations and demands of the company's stakeholders.
Through a focus group with some doctors, clients, employees and directors, the fundamentals of value co-creation were identified and the company's strategy was formulated using the Business Model. Later on, still during the third step, a semi-structured questionnaire was administered to the directors in order to check whether the formulated strategy was viable given the company's actual resources. The fourth step aimed to define a structured set of goals and indicators, interconnected in a cause-effect relation across the perspectives of the balanced scorecard, to realize the suggested strategy. Through workshops and seminars with the directors, the defined strategy was translated into goals and indicators in the perspectives of the balanced scorecard. From the results of each step, by utilizing a set of traditional tools already known to companies - tools which support and favour value innovation processes - Irradial's strategic map was proposed, which captures the strategy chosen by the company, deploys it to all involved, and sustains the competitive advantage that the company seeks in the marketplace.
|
50 |
Vision-Based Emergency Landing of Small Unmanned Aircraft Systems
Lusk, Parker Chase, 01 November 2018 (links)
Emergency landing is a critical safety mechanism for aerial vehicles. Commercial aircraft have triply-redundant systems that greatly increase the probability that the pilot will be able to land the aircraft at a designated airfield in the event of an emergency. In general aviation, the chances of always reaching a designated airfield are lower, but the successful pilot might use landmarks and other visual information to safely land in unprepared locations. For small unmanned aircraft systems (sUAS), triply- or even doubly-redundant systems are unlikely due to size, weight, and power constraints. Additionally, there is a growing demand for beyond visual line of sight (BVLOS) operations, where an sUAS operator would be unable to guide the vehicle safely to the ground. This thesis presents a machine vision-based approach to emergency landing for small unmanned aircraft systems. In the event of an emergency, the vehicle uses a pre-compiled database of potential landing sites to select the most accessible location to land based on vehicle health. Because it is impossible to know the current state of any ground environment, a camera is used for real-time visual feedback. Using the recently developed Recursive-RANSAC algorithm, an arbitrary number of moving ground obstacles can be visually detected and tracked. If obstacles are present in the selected ditch site, the emergency landing system chooses a new ditch site to mitigate risk. This system is called Safe2Ditch.
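The site-selection step described above can be caricatured in a few lines: among the pre-compiled candidate sites, pick the closest one whose area contains no tracked ground obstacle. This is a toy sketch, not the actual Safe2Ditch logic; the flat 2-D geometry, the fixed clearance radius, and all names are assumptions, and the real system would also weigh vehicle health and re-evaluate as Recursive-RANSAC updates the obstacle tracks.

```python
import math

def select_ditch_site(vehicle, sites, obstacle_tracks, clearance=5.0):
    """Pick the closest candidate landing site whose surroundings are
    free of tracked ground obstacles; return None if every site is
    blocked, signalling that a new site database query is needed."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    free = [s for s in sites
            if all(dist(s, o) > clearance for o in obstacle_tracks)]
    return min(free, key=lambda s: dist(vehicle, s), default=None)
```

Re-running this selection whenever the visual tracker reports a new obstacle reproduces, in miniature, the "choose a new ditch site to mitigate risk" behaviour of the abstract.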
|