41

Cosmologia usando aglomerados de galáxias no Dark Energy Survey / Cosmology with Galaxy Clusters in the Dark Energy Survey

Silva, Michel Aguena da 03 August 2017 (has links)
Galaxy clusters are the largest bound structures in the Universe. Their distribution maps the dark matter halos formed in the deep potential wells of the dark matter field. As a result, the abundance of galaxy clusters is highly sensitive to the expansion of the Universe as well as to the growth of dark matter perturbations, making cluster counts a powerful tool for cosmology. In the current era of large-scale surveys producing enormous volumes of data, the statistical properties of the surveyed objects (galaxies, clusters, supernovae, quasars, etc.) can be used to extract cosmological information. The main goal of this thesis is to explore the use of galaxy clusters to constrain cosmology. To that end, we study halo formation theory, the detection of halos and clusters, the statistical tools required to extract cosmological information from detected clusters, and finally the effects of optical detection. In constructing the theoretical prediction for halo number counts, we analyze how each cosmological parameter of interest affects the halo abundance, the importance of using the halo covariance, and the effectiveness of halos in constraining cosmology. The redshift range and the use of prior knowledge of the parameters are also investigated in detail. The theoretical prediction is tested on a dark matter simulation for which the cosmology is known and a dark matter halo catalog is available. In this analysis we find that good constraints can be obtained for some parameters (Omega_m, w, sigma_8, n_s), while others (h, Omega_b) require external priors from other cosmological probes. On the statistical side, we discuss the concepts of the likelihood, priors, and the posterior distribution. The Fisher matrix formalism and its application to galaxy clusters are presented and used to forecast constraints for ongoing and future surveys. For the analysis of data we introduce Markov Chain Monte Carlo (MCMC) methods, which, unlike the Fisher matrix, do not assume Gaussianity of the parameter distribution, but have a much higher computational cost. Observational effects are also studied in detail. Using the Fisher matrix approach, we thoroughly explore the effects of completeness and purity, determining in which cases it is worthwhile to include extra parameters in order to lower the mass threshold. One of the main findings is that including completeness and purity parameters along with the cosmological parameters does not degrade the dark energy constraints if other observational effects are already being considered. The use of priors on the nuisance parameters only affects the dark energy constraints if those priors are better than 1%.
The WaZp cluster finder was run on the simulation, producing a cluster catalog. Comparing the detected galaxy clusters with the dark matter halos, the observational effects were investigated and measured. Using these measurements, we were able to apply corrections to the predicted cluster counts, resulting in good agreement with the detected cluster abundance. The results and tools developed in this thesis provide a framework for the analysis of galaxy clusters for cosmological purposes. Several codes were created and tested during this work, among them an efficient code to compute theoretical predictions of halo abundance and covariance, a code to estimate the abundance and covariance of galaxy clusters including multiple observational effects, and a pipeline to match and compare halo/cluster catalogs. This pipeline has been integrated into the Science Portal of the Laboratório Interinstitucional de e-Astronomia (LIneA) and is being used to automatically assess the quality of cluster catalogs produced by the Dark Energy Survey (DES) collaboration; it will also be used in future surveys.
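As a rough illustration of the Fisher-matrix forecasting described above, the sketch below builds the standard Poisson-counts Fisher matrix, F_ab = sum_i (dN_i/dp_a)(dN_i/dp_b)/N_i, from finite-difference derivatives of a toy number-counts model. The model, fiducial values, and step sizes are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def fisher_poisson(counts_model, params_fid, steps):
    """Fisher matrix for Poisson-distributed number counts:
    F_ab = sum_i (dN_i/dp_a)(dN_i/dp_b) / N_i."""
    params_fid = np.asarray(params_fid, dtype=float)
    n_fid = counts_model(params_fid)
    derivs = []
    for a, step in enumerate(steps):
        up, dn = params_fid.copy(), params_fid.copy()
        up[a] += step
        dn[a] -= step
        # Central finite-difference derivative dN_i/dp_a
        derivs.append((counts_model(up) - counts_model(dn)) / (2.0 * step))
    derivs = np.array(derivs)
    return derivs @ np.diag(1.0 / n_fid) @ derivs.T

# Toy counts in four redshift bins as a function of (Omega_m, sigma_8);
# the scaling is purely illustrative, not a real mass function.
def toy_counts(p):
    om, s8 = p
    z = np.array([0.2, 0.4, 0.6, 0.8])
    return 5e4 * om * s8**3 * np.exp(-z / s8)

F = fisher_poisson(toy_counts, params_fid=[0.3, 0.8], steps=[0.01, 0.01])
cov = np.linalg.inv(F)  # Cramer-Rao bound on the parameter covariance
print("forecast 1-sigma errors:", np.sqrt(np.diag(cov)))
```

The same machinery extends to more parameters by adding entries to the fiducial vector; priors enter by simply adding their inverse variances to the diagonal of F before inversion.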
43

Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework

Wei, Lijun 17 July 2013 (has links) (PDF)
In dense urban environments (e.g., streets flanked by tall buildings), the vehicle localization provided by a Global Positioning System (GPS) receiver may be inaccurate or even unavailable due to signal reflection (multipath) or poor satellite visibility. In order to improve the accuracy and robustness of assisted navigation systems, and thus guarantee driving safety and service continuity on the road, this thesis presents a vehicle localization approach that exploits the redundancy and complementarity of multiple sensors. First, the GPS localization is complemented by onboard dead reckoning (DR: inertial measurement unit, odometer, gyroscope), stereovision-based visual odometry, horizontal laser range finder (LRF) based scan alignment, and map matching against a 2D GIS road-network map to provide a coarse vehicle pose estimate. A sensor selection step validates the coherence of the observations from the multiple sensors, and only information from the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, when the GPS receiver suffers long-term outages, the accumulated error of DR-only localization is bounded by adding a GIS building-map layer. Two onboard LRF systems (one horizontal and one vertical), mounted on the roof of the vehicle, are used to detect building facades in the urban environment. The detected facades are projected onto the 2D ground plane and associated with the GIS building-map layer to correct the vehicle pose error, especially the lateral component. The facade landmarks extracted from the vertical LRF scan are stored in a new GIS map layer. The proposed approach is tested and evaluated on real data sequences. Experimental results show that fusing the stereoscopic system and the LRF keeps the vehicle localized during short GPS outages and corrects GPS positioning errors such as GPS jumps; that the road map helps obtain an approximate estimate of the vehicle position by projecting it onto the corresponding road segment; and that integrating the building information helps refine the initial pose estimate when GPS signals are lost for a long time.
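The loosely coupled fusion step described above can be sketched as an additive update in information space, where each validated sensor contributes its pose estimate weighted by its inverse covariance. The sensors, noise values, and identity observation model below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def information_fusion(prior_mean, prior_cov, observations):
    """Loosely coupled fusion of direct pose observations with an
    information filter. Each observation is (z, R): a pose measurement
    and its covariance. In information space the update is additive:
        Y = P^-1 + sum_k R_k^-1,   y = P^-1 x + sum_k R_k^-1 z_k
    (identity observation model, as in a loosely coupled scheme)."""
    Y = np.linalg.inv(prior_cov)   # prior information matrix
    y = Y @ prior_mean             # prior information vector
    for z, R in observations:
        R_inv = np.linalg.inv(R)
        Y += R_inv
        y += R_inv @ z
    cov = np.linalg.inv(Y)
    return cov @ y, cov

# Hypothetical 2D position (x, y) estimates from three validated sources
prior = (np.array([10.0, 5.0]), np.diag([4.0, 4.0]))
obs = [
    (np.array([10.8, 5.1]), np.diag([2.0, 2.0])),  # GPS
    (np.array([10.2, 4.9]), np.diag([0.5, 0.5])),  # visual odometry
    (np.array([10.3, 5.0]), np.diag([0.3, 0.3])),  # LRF scan alignment
]
mean, cov = information_fusion(*prior, obs)
print("fused pose:", mean)
```

The sensor selection step mentioned in the abstract would sit before this loop, discarding any (z, R) pair whose innovation fails a consistency gate so that a faulty sensor cannot corrupt the fused estimate.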
44

Simulation And Performance Evaluation Of A Fast And High Power Pulsed Laser Diode Driver For Laser Range Finder

Altinok, Yahya Kemal 01 June 2012 (has links) (PDF)
Laser diodes (LDs) are semiconductor coherent light sources widely used in fields such as defence, industry, medicine, and optical communications. Compared to flash lamps, they offer higher electrical-to-optical and optical-to-optical conversion efficiencies from the pump source to useful output power, which makes them well suited to range-finding applications. The optical output power of a laser depends on the current through the LD, so the operating life and performance of LDs are closely tied to the performance of the drive power supply: even small fluctuations of the drive current can produce much larger fluctuations of the optical output power and device parameters, reducing LD reliability. In this thesis, hardware for a fast, high-power pulsed LD driver for a laser range finder is designed, based on a linear current-source topology. The driver is capable of providing pulses of up to 120 A with 250 μs pulse width at repetition rates from 20 Hz to 40 Hz. It supplies current pulses to two LD arrays under proportional-integral (PI) control and protects the LDs against overcurrent and overvoltage. The proposed current control reduces the current-regulation error to less than 1% and limits overshoot and undershoot to less than 1% of the steady-state value, improving the safe operation of the LDs. Moreover, the proposed protection functions detect any failure in the driver and interrupt LD firing immediately, guaranteeing safe operation of the LDs.
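A minimal sketch of the PI current regulation described above, assuming a crude first-order plant model and illustrative gains; apart from the 120 A / 250 μs pulse specification, none of the numbers come from the thesis.

```python
# Discrete PI current-control loop for one pulse of a laser diode driver.
# Plant model, gains, and clamp level are illustrative assumptions.

def simulate_pi_pulse(i_set=120.0, n_steps=250, kp=0.4, ki=0.05):
    """Simulate a 250-step (~250 us at 1 us sampling) current pulse
    regulated by a PI controller. The 'plant' is a crude first-order
    model: the drive current moves toward the command with a lag."""
    current, integral = 0.0, 0.0
    history = []
    for _ in range(n_steps):
        error = i_set - current
        integral += error
        command = kp * error + ki * integral      # PI control law
        command = max(0.0, min(command, 150.0))   # overcurrent clamp
        current += 0.5 * (command - current)      # first-order response
        history.append(current)
    return history

pulse = simulate_pi_pulse()
print(f"steady-state current: {pulse[-1]:.2f} A")
```

In this toy loop the integral term removes the steady-state error while the clamp stands in for the overcurrent protection; a real driver would also implement the overvoltage and failure-interrupt logic the abstract describes.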
45

[en] MOBILE ROBOT SIMULTANEOUS LOCALIZATION AND MAPPING USING DP-SLAM WITH A SINGLE LASER RANGE FINDER / [pt] MAPEAMENTO E LOCALIZAÇÃO SIMULTÂNEA DE ROBÔS MÓVEIS USANDO DP-SLAM E UM ÚNICO MEDIDOR LASER POR VARREDURA

LUIS ERNESTO YNOQUIO HERRERA 31 July 2018 (has links)
Simultaneous Localization and Mapping (SLAM) is one of the most widely researched areas of Robotics. It addresses the mobile robot problem of generating a map without prior knowledge of the environment while keeping track of the robot's position within it. Although technology offers increasingly accurate position sensors, even small measurement errors can accumulate and compromise the localization accuracy. This becomes evident when programming a robot to return to its original position after traveling a long distance, based only on its sensor readings. Thus, to improve SLAM's performance it is necessary to represent its formulation using probability theory. The Extended Kalman Filter SLAM (EKF-SLAM) is a basic solution and, despite its shortcomings, it is by far the most popular technique. FastSLAM, on the other hand, overcomes some limitations of EKF-SLAM using an instance of the Rao-Blackwellized particle filter. Another successful solution is the DP-SLAM approach, which uses an occupancy-grid representation and a hierarchical algorithm to build accurate 2D maps. All SLAM solutions require two types of sensor information: odometry and range measurement.
Laser Range Finders (LRF) are popular range measurement sensors and, because of their accuracy, are well suited for odometry error correction. Furthermore, the odometer may even be eliminated from the system if multiple consecutive LRF scans are matched. This work presents a detailed implementation of these three SLAM solutions, focused on structured indoor environments. The implementation is able to map 2D environments, as well as 3D environments with planar terrain, such as in a typical indoor application. The 2D application automatically generates a stochastic grid map. The 3D problem, on the other hand, uses a point-cloud representation of the map instead of a 3D grid, to reduce the SLAM computational effort. The mobile robot considered uses only a single LRF, without any odometry information. A Genetic Algorithm is presented to optimize the matching of LRF scans taken at different instants. Such matching is able not only to map the environment but also to localize the robot, without the need for odometers or other sensors. A simulation program is implemented in Matlab to generate virtual LRF readings of a mobile robot in a 3D environment. Both simulated readings and experimental data from the literature are independently used to validate the proposed methodology, automatically generating 3D maps using just a single LRF.
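The genetic-algorithm scan matching described above can be sketched as a search over the rigid transform (dx, dy, dθ) that best aligns two 2D scans. The fitness function, population size, and mutation parameters below are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(points, pose):
    """Apply a 2D rigid transform (dx, dy, dtheta) to an N x 2 point set."""
    dx, dy, th = pose
    c, s = np.cos(th), np.sin(th)
    return points @ np.array([[c, -s], [s, c]]).T + np.array([dx, dy])

def fitness(pose, scan_a, scan_b):
    """Negative mean nearest-neighbour distance between the aligned scans."""
    moved = transform(scan_b, pose)
    d = np.linalg.norm(moved[:, None, :] - scan_a[None, :, :], axis=2)
    return -d.min(axis=1).mean()

def ga_align(scan_a, scan_b, pop=60, gens=40, sigma=(0.05, 0.05, 0.01)):
    """Toy GA: keep the best quarter, refill with mutated copies."""
    population = rng.normal(0.0, [0.5, 0.5, 0.1], size=(pop, 3))
    for _ in range(gens):
        scores = np.array([fitness(p, scan_a, scan_b) for p in population])
        elite = population[np.argsort(scores)[-pop // 4:]]
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(0.0, sigma, children.shape)
        population = np.vstack([elite, children])
    scores = np.array([fitness(p, scan_a, scan_b) for p in population])
    return population[scores.argmax()]

# Hypothetical test: scan_b is scan_a displaced by (0.3, -0.2, 0.1 rad)
scan_a = rng.uniform(-5.0, 5.0, size=(80, 2))
c, s = np.cos(0.1), np.sin(0.1)
scan_b = (scan_a - np.array([0.3, -0.2])) @ np.array([[c, -s], [s, c]])
best = ga_align(scan_a, scan_b)
print("recovered (dx, dy, dtheta):", np.round(best, 2))  # near (0.3, -0.2, 0.1)
```

Chaining such scan-to-scan alignments yields the odometry-free trajectory the abstract mentions, with each recovered transform serving as the pose increment between consecutive scans.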
46

Radio frequency spectrum monitoring: Officers' acceptance of monitoring technologies such as fixed direction finders

Phoshoko, Silas M. January 2006 (has links)
Magister Commercii - MCom / The research focuses on the acceptance of new technologies within the telecommunications industry. The study examines three models, namely Innovation theory, the Theory of Reasoned Action (TRA), and the Technology Acceptance Model (TAM), in order to explain why certain monitoring officers at ICASA would prefer specific technologies over others. After reviewing these models, the author examines TAM in detail as the model of interest in this study. In turn, this model is expected to help explain why monitoring officers at ICASA would prefer a particular frequency-monitoring technology over another. / South Africa
47

Řízení invalidního vozíku / Control of a wheelchair

Vožda, Ondřej January 2013 (has links)
This thesis describes the development of a control algorithm for a wheelchair. The wheelchair should be capable of tracking and following a wall or a similar flat surface. The thesis extends a previous concept whose purpose was to allow remote telepresence control of the wheelchair. SRF08 ultrasonic range finders are used to measure the distance from the wall. Furthermore, image processing for mark detection is discussed; the purpose of these marks is to increase precision during the final phase of parking.
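A minimal sketch of the wall-following idea, assuming two side-mounted SRF08 readings and proportional control on distance and heading; the gains, geometry, and sign conventions below are hypothetical, not the thesis's controller.

```python
def wall_follow_command(d_front, d_rear, d_target=0.6,
                        k_dist=1.2, k_angle=2.0, v_forward=0.4):
    """P-control on wall distance and relative heading.

    d_front, d_rear: side distances (m) from two SRF08 sensors, wall on
    the left; positive angular velocity turns toward the wall (CCW).
    Returns (linear velocity, angular velocity) for the wheelchair base."""
    distance = 0.5 * (d_front + d_rear)  # mean distance to the wall
    heading = d_front - d_rear           # >0: nose turned away from the wall
    # Too far from the wall -> turn toward it; nose drifting away -> straighten.
    omega = k_dist * (distance - d_target) + k_angle * heading
    return v_forward, omega

v, w = wall_follow_command(d_front=0.72, d_rear=0.65)
print(f"v = {v:.2f} m/s, omega = {w:.2f} rad/s")
```

The heading term acts like a damping (derivative-like) term on the distance error, so the chair settles parallel to the wall instead of oscillating across the target distance.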
48

Senzorika a řízení pohonů 4 kolového mobilního robotu / Sensors and motor control of mobile robot

Zatloukal, Jiří January 2013 (has links)
This diploma thesis deals with the design and realization of the sensor and drive system of a four-wheel mobile robot. The control unit is a miniature computer, the Raspberry Pi. The robot will be used in the future for environment mapping and localization, and for this purpose it carries several types of sensors, whose data are processed by an Xmega microcontroller. Another microcontroller, together with a DRV-8432 H-bridge, is used to control the DC drives.
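As a sketch of how body-velocity commands might be mapped to per-side H-bridge duty cycles in such a drive system (the skid-steer mixing and scaling below are hypothetical, not taken from the thesis):

```python
def body_cmd_to_duty(v, omega, track=0.30, v_max=1.0):
    """Map a body command (v [m/s], omega [rad/s]) to left/right PWM duty
    cycles in [-1, 1] for the H-bridges of a skid-steered 4-wheel base.
    The linear speed-to-duty scaling stands in for a real calibration."""
    v_left = v - 0.5 * track * omega    # left wheels slow down in a left turn
    v_right = v + 0.5 * track * omega   # right wheels speed up
    clamp = lambda s: max(-1.0, min(1.0, s / v_max))
    return clamp(v_left), clamp(v_right)

print(body_cmd_to_duty(0.5, 1.0))  # gentle left turn -> (0.35, 0.65)
```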
49

An Exploratory Study Of The Strengths Of Islamic School Principals In California, Texas, New York, Florida, And Illinois

Qadri, Syed Kamran 01 January 2014 (has links)
As the focal point of the school, the principal’s leadership is integral to its effective functioning. This study used a self-assessment to analyze the self-identified strengths of principals in Islamic schools within the five most populated states in the United States (which also have the largest number of mosques) and the commonalities in those strengths based on (a) the enrollment of the school; (b) the year the school was established; (c) the gender of the principal; (d) the principal’s professional preparation, e.g., degree in education vs. other fields, and years of experience; and (e) geographic location. While statistical significance was evident in only a few of the comparisons between groups (p < .05), several conclusions were drawn. In analyzing the strengths of the principals, the least selected strength was Significance and the most selected was Analytical, which had the highest proportion of affirmatively responding principals compared to any of the other strengths. Additionally, the relationship between principal strength and school enrollment showed significance for the strengths of Command and Developer, though at a level less stringent than the p = .002 dictated by the study; principals at schools with a student enrollment of 151-200 ranked Command higher than principals in schools of other sizes, whereas those with an enrollment of 150 or fewer students ranked Developer as a more preferred strength. Regarding principal strengths and gender, males ranked Self-assuredness as their preferred trait more frequently than their female counterparts, who preferred Futuristic. Furthermore, in the relationship between principal strengths and area of education, the strengths of Activator, Maximizer, and Positivity were ranked higher by principals who had a degree in education at the p = .05 level, while the strengths of Empathy, Harmony, and Responsibility (p < .05) and Deliberative (p < .01) were ranked higher by principals who did not have a degree in education. Also, based on the average rankings of principal strengths, Achiever showed the strongest association for principals with a degree in education and Deliberative for principals without one. Mean rankings among principals with differing years of experience placed Focus and Includer higher for principals with 3-6 years of experience (p < .01). Furthermore, the average rankings showed Achiever as the most strongly rated strength for principals with less than 3 years of experience, Focus for principals with 3-6 years, and Analytical for principals with more than 6 years. Examination of principal strengths based on geographic location was conducted descriptively due to small group sizes. Among the five states of focus, average rankings indicated that Deliberative was the most preferred strength among California principals, Includer among Florida principals, Activator among Illinois respondents, Command among New York principals, and Analytical in Texas.
50

Sensordatenfusion zur robusten Bewegungsschätzung eines autonomen Flugroboters / Sensor data fusion for robust motion estimation of an autonomous flying robot

Wunschel, Daniel 15 March 2012 (has links) (PDF)
A prerequisite for realizing a flight controller for a flying robot is the perception of the robot's motion. This thesis describes an approach for estimating the motion of an autonomous flying robot using relatively simple, lightweight, and inexpensive sensors. Using an Extended Kalman Filter, accelerometers, gyroscopes, an ultrasonic sensor, and an optical-flow sensor are combined into a robust motion estimate. The individual sensors were examined experimentally with respect to the properties relevant to the subsequent construction of the filter. Finally, the filter's results are compared with the results of a simulation and of an external tracking system.
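A minimal sketch of the kind of filter described above, reduced to one vertical axis: the accelerometer drives the prediction and the ultrasonic sensor corrects the height estimate. With this linear model the EKF reduces to a standard Kalman filter; all noise values are illustrative assumptions.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
B = np.array([0.5 * dt**2, dt])        # accelerometer input model
Q = np.diag([1e-5, 1e-3])              # process noise (assumed)
H = np.array([[1.0, 0.0]])             # ultrasonic measures height only
R = np.array([[4e-4]])                 # ultrasonic noise (~2 cm std, assumed)

x = np.zeros(2)                        # state: [height, vertical velocity]
P = np.eye(2)

def predict(x, P, accel):
    """Propagate the state with the measured vertical acceleration."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the state with an ultrasonic height measurement."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One cycle: predict with the accelerometer, correct with the ultrasonic reading
x, P = predict(x, P, accel=0.2)
x, P = update(x, P, z=np.array([0.05]))
print("height estimate:", x[0])
```

The full system in the thesis adds attitude and horizontal states, with the gyroscopes and the optical-flow sensor entering through nonlinear measurement models that make the extended form of the filter necessary.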
