261

Wavelet-based Data Reduction and Mining for Multiple Functional Data

Jung, Uk 12 July 2004 (has links)
Advanced technology such as automatic data acquisition, management, and networking systems has given managers tremendous capability to access valuable production information and to improve the quality and efficiency of their operations. Signal processing and data mining techniques are more popular than ever in many fields, including intelligent manufacturing. As data sets grow in size, their exploration, manipulation, and analysis become more complicated and resource-consuming. Timely synthesized information, such as functional data, is needed for product design, process troubleshooting, quality and efficiency improvement, and resource-allocation decisions. A major obstacle in these intelligent manufacturing systems is the lack of tools for processing the large volume of information coming from the numerous stages of a manufacturing operation. The underlying theme of this thesis is therefore to reduce the size of the data in a mathematically rigorous framework and to apply existing or new procedures to the reduced-size data for various decision-making purposes. The thesis first proposes the Wavelet-based Random-effect Model, which can generate multiple functional data signals with wide fluctuations (between-signal variations) in the time domain. The random-effect wavelet atom position in the model has a locally focused impact, which distinguishes it from traditional random-effect models in the biological field. For data-size reduction, in order to deal with heterogeneously selected wavelet coefficients across different single curves, the thesis introduces the newly defined Wavelet Vertical Energy metric of multiple curves and uses it in an efficient data-reduction method.
The proposed method selects important positions for the whole set of curves by comparing each vertical energy metric with a threshold (the Vertical Energy Threshold, VET), which is chosen optimally based on an objective function that balances the reconstruction error against the data reduction ratio. Based on the class membership information of each signal, the thesis then proposes the Vertical Group-Wise Threshold method to increase the discriminative capability of the reduced-size data, so that the reduced data set retains salient differences between classes as much as possible. A real-life example (tonnage data) shows that the proposed method is promising.
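The vertical-energy selection step can be sketched in a few lines. The following is a minimal illustration, not the author's implementation: it uses a single-level Haar transform and an arbitrary threshold value (`vet=0.5`) purely for demonstration, whereas the thesis chooses the VET by optimizing an objective that trades reconstruction error against the reduction ratio.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar transform: approximation then detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([approx, detail])

def vertical_energy_selection(curves, vet):
    """Keep coefficient positions whose energy, summed across all curves
    (the 'vertical' direction), exceeds the threshold `vet`."""
    coeffs = np.array([haar_dwt(c) for c in curves])  # shape (n_curves, n_coeffs)
    vertical_energy = (coeffs ** 2).sum(axis=0)       # pooled energy per position
    keep = vertical_energy > vet
    return keep, coeffs[:, keep]

# Three noisy replicates of the same step signal.
rng = np.random.default_rng(0)
base = np.r_[np.zeros(8), np.ones(8)]
curves = [base + 0.05 * rng.standard_normal(16) for _ in range(3)]
keep, reduced = vertical_energy_selection(curves, vet=0.5)
print(f"kept {reduced.shape[1]} of 16 positions for all 3 curves")
```

Because the same positions are kept for every curve, the reduced data set stays aligned across the whole collection of signals, which is exactly what heterogeneous per-curve thresholding fails to guarantee.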
262

Security Schemes for Wireless Sensor Networks with Mobile Sink

Rasheed, Amar Adnan 2010 May 1900 (has links)
Mobile sinks are vital in many wireless sensor applications for efficient data collection, data querying, and localized sensor reprogramming, and they prolong the lifetime of a sensor network. However, when sensor networks with mobile sinks are deployed in a hostile environment, security becomes a critical issue: the networks are exposed to a variety of malicious attacks. Anti-threat schemes and security services, such as mobile sink authentication and pairwise key establishment, are therefore essential components for the secure operation of such networks. Because of the sensors' limited resources, designing efficient security schemes with low communication overhead to secure the links between sensors and the MS (mobile sink) is not a trivial task. In addition, sink mobility requires a frequent exchange of cryptographic information between the sensors and the MS each time the MS updates its location, which imposes extra communication overhead on the sensors. In this dissertation, we consider a number of security schemes for WSNs (wireless sensor networks) with an MS. The schemes offer high network resiliency and low communication overhead against node capture, MS replication, and wormhole attacks. We propose two schemes based on the polynomial pool scheme for tolerating node capture: the probabilistic generation key pre-distribution scheme combined with the polynomial pool scheme, and the Q-composite generation key scheme combined with the polynomial pool scheme. Both ensure low communication overhead and high resiliency. Against the MS replication attack, we propose the multiple polynomial pools scheme, which provides much higher resiliency than the single polynomial pool approach. Furthermore, to improve network resiliency against the wormhole attack, two defensive mechanisms were developed according to the MS mobility type. In the first technique, the MS uses controlled mobility.
We investigate the problem of using a single authentication code by the sensor network to verify the source of MS beacons, and then develop a defensive approach that divides the sensor network into grids with different authentication codes. In the second technique, the MS uses random mobility. We explore the use of the different communication channels available in the sensor hardware, combined with the polynomial pool scheme.
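Polynomial-pool key pre-distribution builds on symmetric bivariate polynomial shares in the style of Blundo et al. The sketch below shows only the basic single-polynomial key agreement; the pool management, probabilistic/Q-composite generation keys, and multiple-pool variants from the dissertation are omitted, and the prime modulus and degree are illustrative.

```python
import random

P = 2**31 - 1  # public prime modulus (illustrative)

def make_poly(t, seed=42):
    """Random symmetric bivariate polynomial f(x, y) = sum a[i][j] x^i y^j
    over GF(P), with a[i][j] == a[j][i] so that f(x, y) == f(y, x)."""
    rng = random.Random(seed)
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(P)
    return a

def share(a, node_id):
    """Share stored on a node: the univariate polynomial g(y) = f(node_id, y)."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate g(peer_id); both endpoints derive the same key by symmetry."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P

poly = make_poly(t=3)
key_ab = pairwise_key(share(poly, 17), 99)  # node 17 keying with node 99
key_ba = pairwise_key(share(poly, 99), 17)  # node 99 keying with node 17
assert key_ab == key_ba
print("shared key established")
```

A degree-t polynomial tolerates the capture of up to t shares; the pool-based schemes in the dissertation distribute shares from many such polynomials to raise resiliency further.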
263

Employee selection : Mechanisms behind practitioners’ preference for hiring practices

Langhammer, Kristina January 2013 (has links)
Despite the great advances science has made in developing selection decision aids, practitioners generally remain reluctant to adopt them. This phenomenon is considered one of the greatest gaps in industrial, work, and organizational psychology today. This thesis takes a psychological approach to practitioners' resistance toward hiring procedures with high predictive validity for work performance. Three specific research questions were examined: two highlighted aspects of self-regulation, and one focused on the agency relation, in order to study outcomes in terms of actual use of hiring procedures and the intention to change hiring procedures. The thesis comprises three studies. Questionnaire data are used in two of them (Studies I and II) to examine 1) how prototype beliefs and the ability to evaluate the quality of one's own performance relate to the use of selection decision methods, and 2) how the behavioral intention to change hiring practice relates to self-efficacy beliefs, causal attribution, and past behavior. Data collected through semi-structured interviews are used in Study III to examine practitioners' experiences of collaborative contexts in employee selection. Study I found that prototype beliefs and perceptions of task quality ambiguity varied across hiring practices. Study II showed that self-efficacy beliefs, external attributions of success, and internal attributions of failure were related to the intention to change hiring practices. Study III highlighted the prevalence of separate self-interests over more general organizational interests in the agentic relation between practitioners. In conclusion, the thesis has implications for theory as well as practice in concluding that consciously steered cognitive mechanisms are important for understanding practitioners' resistance toward highly standardized hiring practices.
/ At the time of the doctoral defense, the following papers were unpublished and had the following status. Paper 1: manuscript. Paper 3: manuscript.
264

Experimental investigation of the electron density of RuAl2 - optimization of data collection for single-crystal diffraction experiments / Experimentelle Untersuchung der Elektronendichte von RuAl2 - Optimierung der Datensammlung für Beugungsexperimente an Einkristallen

Wedel, Michael 02 December 2014 (has links) (PDF)
The aim of this dissertation was to reconstruct the electron density of RuAl2 from X-ray diffraction data using the multipole model [1], in order to gain insight into the chemical bonding in this substance. In contrast to the situation in organic molecules, in an intermetallic compound such as RuAl2 the bonding electrons make up only a small fraction of the total electron count, which pushes the method to its limits. RuAl2 crystallizes in the TiSi2 structure type [2]; the crystal structure can be understood as a sequence of pseudo-hexagonal layers shifted against one another. The first step toward a successful density reconstruction is the synthesis of a suitable crystal. The high synthesis temperatures of over 1500 °C required when working with melts in the ruthenium-aluminium system were avoided by using tin as a flux, which allowed crystals to be grown at temperatures below 1000 °C. Electron-microscopy analysis showed that crystals free of impurities can be obtained in this way, and initial X-ray diffraction experiments showed that the crystals are well ordered and yield data up to high resolution. To minimize unwanted effects such as extinction, thermal motion of the atoms, and thermally diffuse scattering, the actual diffraction experiments were performed on a very small crystal (15 μm diameter) with short-wavelength synchrotron radiation (λ = 0.41328 Å) at 25 K. A data set of very high quality was collected and used to refine the structure model. In the process, a very small degree of stacking disorder came to light, which can be traced back to the close relationship between the MoSi2 and TiSi2 structure types. Despite the very small disorder fraction (0.3 %), the structure refinement with the multipole model was carried out. 
The electron density reconstructed from the model was analyzed with respect to its topology: within the pseudo-hexagonal layers both Ru-Al and Al-Al interactions were found, while between the layers only Ru-Al bonds are present. To further improve the data collection, a computer program was developed in parallel with the experiments that is intended to optimize the data-collection strategy of the diffraction experiment. The strategy search is a variant of the travelling salesman problem and thus poses an enormous combinatorial task even for a moderate number of reflections [3, 4]. To find good approximate solutions, the program uses the simulated annealing algorithm [5], which generates candidate solutions by randomly varying the parameters of the measurement and simulating the resulting data set. The algorithm judges the quality of a solution by a cost function; in the case of strategy optimization, the value of this function is computed from selected indicators of data quality. Since at this stage of the experiment a structure model usually does not yet exist, the simulation cannot rely on intensity information, so quality indicators that do not depend on intensities must be used. Completeness and redundancy are particularly important here and can be computed from the available information. However, both the simulation and the evaluation of the cost function can differ drastically between experiments. This fact was incorporated into the design of the program from the very beginning as one of the main requirements: all computations and sub-algorithms are integrated into the software as plugins, making it arbitrarily extensible. 
The different computations can be combined by the user in many ways. To verify the program's suitability for strategy optimization, the first goal was to reproduce already measured data sets with a simulation based on the Ewald construction. Once this succeeded, a measurement strategy was worked out for a concrete structural-chemistry problem: the clarification of a very small disorder in CeIrIn5 [6], whose detection could be corroborated with the improved diffraction data. [1] N. K. Hansen and P. Coppens, "Testing aspherical atom refinements on small-molecule data sets", Acta Crystallogr. A 34 (1978), pp. 909-921. [2] L.-E. Edshammar, "An X-Ray Investigation of Ruthenium-Aluminium Alloys", Acta Chem. Scand. 20 (1966), pp. 427-431. [3] R. G. Bland and D. F. Shallcross, Large Traveling Salesman Problems Arising from Experiments in X-Ray Crystallography: A Preliminary Report on Computation. Ithaca, New York: Cornell University, 1987. [4] Z. Dauter, "Data-collection strategies", Acta Crystallogr. D 55 (1999), pp. 1703-1717. [5] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by Simulated Annealing", Science 220 (1983), pp. 671-680. [6] S. Wirth et al., "Structural investigations of CeIrIn5 and CeCoIn5 on macroscopic and atomic length scales", J. Phys. Soc. Jpn. 83(6):061009, 2014.
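The strategy search described above, a travelling-salesman-like combinatorial problem attacked with simulated annealing, can be illustrated with a generic sketch. This is not the program developed in the dissertation: it minimizes plain tour length over points on a circle instead of a crystallographic cost function, and all parameters (temperature, cooling rate, step count) are arbitrary.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting `points` in `order`."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def simulated_annealing(points, steps=20_000, t0=1.0, cooling=0.9995, seed=1):
    """Minimise tour length with random 2-opt moves and a geometric cooling
    schedule; worse moves are accepted with Boltzmann probability."""
    rng = random.Random(seed)
    order = list(range(len(points)))
    rng.shuffle(order)
    best = cur = tour_length(points, order)
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(points)), 2))
        order[i:j + 1] = reversed(order[i:j + 1])        # propose a 2-opt move
        new = tour_length(points, order)
        if new < cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            best = min(best, cur)
        else:
            order[i:j + 1] = reversed(order[i:j + 1])    # undo rejected move
        t *= cooling
    return best

# Ten points on a circle: the optimal tour is the regular decagon,
# with perimeter 20*sin(pi/10), roughly 6.18.
pts = [(math.cos(2 * math.pi * k / 10), math.sin(2 * math.pi * k / 10))
       for k in range(10)]
print(round(simulated_annealing(pts), 2))
```

In the dissertation's setting, the "tour" is the sequence of goniometer settings and the cost function is built from intensity-free quality indicators such as completeness and redundancy, supplied as plugins.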
265

A computational framework for unsupervised analysis of everyday human activities

Hamid, Muhammad Raffay. January 2008 (has links)
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Aaron Bobick; Committee Member: Charles Isbell; Committee Member: David Hogg; Committee Member: Irfan Essa; Committee Member: James Rehg
266

Cross-layer design applied to small satellites for data collection / Conception cross-layer d’une architecture de collecte de données pour petits satellites à défilement

Almonacid Zamora, Vicente 28 November 2017 (has links)
With the introduction of the CubeSat standard, the number of small-satellite missions has increased dramatically over the last two decades. Initially developed by universities and research centres for technology validation and academic experiments, these low-cost platforms now make it possible to carry out a variety of advanced, novel applications. In this thesis we are interested in the use of small satellites for global data collection and, more generally, for Internet of Things (IoT) and machine-to-machine (M2M) applications. Since both the space and ground segments are subject to stringent constraints in terms of size and mass, the overall capacity of the communications channel is highly limited, especially that of the uplink, which is a multi-access channel. These systems are also characterised by bursty, short messages, meaning that any protocol overhead may have a significant impact on bandwidth efficiency. Hence, a random access approach is usually adopted for the uplink. Facing these challenges requires optimizing the communication system holistically; in particular, a joint design of the physical (PHY) and Medium Access Control (MAC) layers is needed. The main contributions of this thesis concern the study of Time- and Frequency-Asynchronous ALOHA (TFAA), a random access approach adopted in terrestrial ultra-narrowband (UNB) networks. By trading data rate for communication range or transmission power, TFAA is particularly attractive in power-constrained applications such as low-power wide-area networks and M2M over satellite. 
First, we evaluate its MAC performance (i.e., its throughput and packet error rate) under three different reception models: the collision channel, the capture channel, and a more detailed model that takes the PHY layer design into account. Then, we study the impact of PHY layer parameters, such as forward error correction (FEC), the pulse-shaping filter, and the modulation order, on MAC performance. We show that, owing to the characteristics of the multiple-access interference, significant improvements can be obtained by applying low-rate FEC. To further improve TFAA's performance, we propose Contention Resolution Time- and Frequency-Asynchronous ALOHA (CR-TFAA), a more advanced design in line with recent developments such as Asynchronous Contention Resolution Diversity ALOHA (ACRDA). Under the same set of hypotheses, CR-TFAA provides similar or even better performance than ACRDA, decreasing the packet error rate by at least one order of magnitude. Finally, we study the benefits that can be obtained by trading delay for MAC performance and energy efficiency, using simple techniques such as transmission control and packet-layer erasure coding.
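The MAC-level behaviour of an asynchronous ALOHA channel under the collision model can be reproduced with a short Monte Carlo experiment. This is a generic unslotted-ALOHA sketch, not the thesis's simulator: it ignores the frequency dimension and capture, and simply checks the classic vulnerability window of plus or minus one packet duration, whose expected throughput is S = G * exp(-2G).

```python
import math
import random

def pure_aloha_throughput(g, n_packets=200_000, seed=7):
    """Monte Carlo throughput of unslotted ALOHA at offered load g
    (packets per unit packet duration): a packet succeeds only if no
    other transmission starts within one packet duration of its start."""
    rng = random.Random(seed)
    horizon = n_packets / g
    starts = sorted(rng.uniform(0, horizon) for _ in range(n_packets))
    ok = 0
    for i, s in enumerate(starts):
        hit_before = i > 0 and starts[i - 1] > s - 1.0
        hit_after = i < n_packets - 1 and starts[i + 1] < s + 1.0
        if not hit_before and not hit_after:
            ok += 1
    return ok / horizon   # successful packets per packet duration

g = 0.5
print(round(pure_aloha_throughput(g), 3), "vs theory",
      round(g * math.exp(-2 * g), 3))
```

Extending this skeleton with per-packet carrier frequencies and a capture or SIC rule is essentially how the more detailed reception models in the thesis refine the plain collision channel.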
267

Discovering and Mitigating Social Data Bias

January 2017 (has links)
abstract: Exabytes of data are created online every day, and this deluge is nowhere more apparent than on social media. Naturally, finding ways to leverage this unprecedented source of human information is an active area of research. Social media platforms have become laboratories for conducting experiments about people at scales thought unimaginable only a few years ago. Researchers and practitioners use social media to extract actionable patterns, such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media are seldom representative of the activity of the site itself, and less so of human activity, which means the results of many studies are limited by the quality of the data they collect. This finding inspires the main challenge addressed by the thesis. I introduce three sets of methodologies to correct for bias. First, I design methods to deal with data-collection bias: a methodology that can find bias within a social media dataset by comparing the collected data with other sources, and a crawling strategy that minimizes the amount of bias appearing in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset, directly addressing the concern that the users of a social media site are not representative. Applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
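A minimal version of the "compare the stream against another source" idea can be written as a goodness-of-fit score. The sketch below is illustrative only (the category names and reference proportions are invented); it computes a chi-square-style statistic between the collected sample's category frequencies and a reference distribution.

```python
from collections import Counter

def bias_score(sample, reference):
    """Chi-square-style divergence between a collected sample's category
    counts and a reference distribution (proportions summing to 1).
    Larger values mean the sample is less representative."""
    n = len(sample)
    counts = Counter(sample)
    return sum((counts.get(cat, 0) - n * p) ** 2 / (n * p)
               for cat, p in reference.items())

# Hypothetical reference mix of post languages (invented for illustration).
reference = {"en": 0.6, "es": 0.25, "pt": 0.15}
balanced = ["en"] * 60 + ["es"] * 25 + ["pt"] * 15   # matches the reference
skewed = ["en"] * 95 + ["es"] * 4 + ["pt"] * 1       # over-samples "en"
print(round(bias_score(balanced, reference), 3),
      round(bias_score(skewed, reference), 1))
```

In practice the reference distribution would come from a second, independent source (for example, a random sample of the platform's firehose), which is the comparison the dissertation's methodology formalizes.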
268

Desenvolvimento de um sistema de aquisição de dados usando plataforma aberta / Development of data acquisition system using open platform

Predolin, Rodrigo Eduardo [UNESP] 14 August 2017 (has links)
Parameter-monitoring systems are widely used in the industrial sector for process control and also in research and development. These systems allow a detailed analysis of a device's behavior and provide information that helps improve its performance. In studies of renewable energy use, sensors are applied to map the environment where the equipment is installed and to analyze its behavior and performance. In the case of solar collectors, the ambient temperature, the collector water temperature at different points, the wind speed, and the solar radiation are monitored, allowing control of the collector and improving its performance. Proper instrumentation can bring improvements to solar collectors, but it depends on detailed studies of their behavior based on data acquired from the equipment and the environment where it is installed. Making this optimization feasible requires specific data-collection equipment, which normally has a high acquisition cost. This is one of the main impediments to further evolution of such equipment, especially equipment intended for residential use. 
This work therefore aims to develop a low-cost data-collection module to aid the study of equipment in several areas, including renewable energy sources. A device of this kind makes it possible to collect various physical quantities, such as temperature, wind speed, and water flow, and to store the data on an SD card, making transfer to a computer easy. The work concludes by presenting the design of the module and its libraries in a way that allows easy use. Some parameter values were also captured, verifying the proper behavior of the module and library under the proposed conditions.
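A data-collection module of this kind is, at its core, a sampling loop that reads sensors and appends rows to a log file. The code below is a hypothetical stand-in, not the actual firmware: real hardware would read ADC channels and write to the SD card, whereas here the sensor reads are mocked and the samples go to an ordinary CSV file.

```python
import csv
import random
import time

def read_sensors():
    """Stand-in for the module's ADC reads; real hardware would sample a
    thermocouple, an anemometer, and a flow meter here (values are mocked)."""
    return {"temp_c": 25 + random.gauss(0, 0.5),
            "wind_ms": abs(random.gauss(3, 1)),
            "flow_lpm": abs(random.gauss(8, 0.3))}

def log_samples(path, n_samples, period_s=0.0):
    """Write one CSV row per sampling period, mimicking SD-card logging."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["sample", "temp_c", "wind_ms", "flow_lpm"])
        writer.writeheader()
        for i in range(n_samples):
            writer.writerow({"sample": i, **read_sensors()})
            time.sleep(period_s)

log_samples("log.csv", n_samples=5)
print("wrote 5 samples to log.csv")
```

Keeping the log in plain CSV is what makes the later transfer to a computer trivial: any spreadsheet or analysis tool can open the file directly.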
269

SystemC-AMS modeling of a platform compatible with the Brazilian data collection system / Modelagem em SystemC-AMS de uma plataforma compatível com o sistema de coleta de dados brasileiro

Costa, Haulisson Jody Batista da 03 September 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This work presents simulation results of an identification platform compatible with the INPE Brazilian Data Collection System, modeled with SystemC-AMS. SystemC-AMS is a library of C++ classes dedicated to the simulation of heterogeneous systems, offering a powerful resource for describing models in the digital, analog, and RF domains, as well as mechanical and optical systems. The designed model was divided into four parts. The first block takes into account the satellite's orbit, which is necessary to model the propagation channel correctly, including the Doppler effect, attenuation, and thermal noise. The identification block detects the presence of the satellite; it is composed of a low-noise amplifier, a band-pass filter, a power detector, and a logic comparator. The controller block is responsible for enabling the RF transmitter when the presence of the satellite is detected; it was modeled as a Petri net, owing to the asynchronous nature of the system. The fourth block is the RF transmitter unit, which modulates the information in BPSK ±60°; it is composed of an oscillator, a mixer, an adder, and an amplifier. The whole system was simulated simultaneously. The results are being used to specify system components and to elaborate testbenches for design verification.
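The ±60° BPSK modulation performed by the transmitter block can be illustrated with a plain-Python sketch. The actual model is written in SystemC-AMS; the sample rate and carrier frequency below are arbitrary.

```python
import math

def bpsk60(bits, samples_per_bit=8, carrier_cycles_per_bit=2):
    """Carrier shifted by +60 or -60 degrees per data bit, mimicking the
    +/-60-degree BPSK of the Brazilian Data Collection System uplink."""
    out = []
    for k, bit in enumerate(bits):
        phi = math.radians(60.0 if bit else -60.0)
        for n in range(samples_per_bit):
            t = (k * samples_per_bit + n) / samples_per_bit  # time in bit units
            out.append(math.cos(2 * math.pi * carrier_cycles_per_bit * t + phi))
    return out

sig = bpsk60([1, 0, 1, 1])
print(len(sig), "samples")
```

With ±60° (rather than the usual ±90°), part of the carrier power remains unmodulated, which helps the receiver recover the carrier for coherent demodulation.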
270

The Matrix of Capabilities and Functionings (MCF) and the Human Development Algorithm (HDA) / La Matriz de Capacidades y Desempeños (MCD) y el Algoritmo del Desarrollo Humano (ADH)

Gonzales de Olarte, Efraín 10 April 2018 (has links)
Given that human development is a complex process involving multiple components and determining factors, multidimensional indicators are needed. On the basis of the extensive literature on the subject, we advance two new indicators: the Matrix of Capabilities and Functionings (MCF) and the Human Development Algorithm (HDA). The MCF is composed of vectors of capabilities and functionings, based on Sen's idea of refined functionings, and rests on a matricial framework, both static and dynamic. The main purpose of constructing this index is to study how different sets of capabilities relate to alternative functionings to produce diverse outcomes. The HDA is a multidimensional index concerning the set of goods and services needed to complete a life cycle; it is a socio-economic context indicator. It is composed of the main "satisfactors", or basic goods and services needed (food, health, education, housing, social security, decent employment, and retirement programs), that should be available to all throughout the life cycle. This indicator shows the material progress reached by each country or region, as well as the institutional organization, private and public, and the degree of social cohesion and solidarity.
