51

Le test unifié de cartes appliqué à la conception de systèmes fiables

Lubaszewski, Marcelo Soares January 1994 (has links)
Si on veut assurer de façon efficace les tests de conception, de fabrication, de maintenance et le test accompli au cours de l'application pour les systèmes électroniques, on est amené à intégrer le test hors-ligne et le test en-ligne dans des circuits. Ensuite, pour que les systèmes complexes tirent profit des deux types de tests, une telle unification doit être étendue du niveau circuit aux niveaux carte et module. D'autre part, bien que l'intégration des techniques de test hors-ligne et en-ligne fait qu'il est possible de concevoir des systèmes pour toute application sécuritaire, le matériel ajouté pour assurer une haute sûreté de fonctionnement fait que la fiabilité de ces systèmes est réduite, car la probabilité d'occurrence de fautes augmente. Confrontée à ces deux aspects antagoniques, cette thèse se fixe l'objectif de trouver un compromis entre la sécurité et la fiabilité de systèmes électroniques complexes. Ainsi, dans un premier temps, on propose une solution aux problèmes de test hors-ligne et de diagnostic qui se posent dans les étapes intermédiaires de l'évolution vers les cartes 100% compatibles avec le standard IEEE 1149.1 pour le test "boundary scan". Une approche pour le BIST ("Built-In Self-Test") des circuits et connexions "boundary scan" illustre ensuite l'étape ultime du test hors-ligne de cartes. Puis, le schéma UBIST ("Unified BIST") - intégrant les techniques BIST et "self-checking" pour le test en-ligne de circuits, est combiné au standard IEEE 1149.1, afin d'obtenir une stratégie de conception en vue du test unifié de connexions et circuits montés sur des cartes et modules. Enfin, on propose un schéma tolérant les fautes et basé sur la duplication de ces modules sécuritaires qui assure la compétitivité du système résultant du point de vue de la fiabilité, tout en gardant sa sûreté inhérente.
/ On one hand, if design validation, manufacturing and maintenance testing, and concurrent error detection are to be performed efficiently in electronic systems, off-line and on-line testing must be integrated into circuits. Then, for complex systems to benefit from both types of tests, this unification must be extended from the circuit level to the board and module levels. On the other hand, although unifying off-line and on-line testing techniques makes it possible to design systems suited to any safety application, the hardware added to increase safety also decreases system reliability, since the probability of fault occurrence increases. Faced with these two antagonistic requirements, this thesis aims at finding a compromise between the safety and the reliability of complex electronic systems. Thus, we first propose a solution to the off-line test and diagnosis problems found in the intermediate steps of the evolution towards boards that are 100% compliant with the IEEE 1149.1 standard for boundary scan testing. An approach for the BIST (Built-In Self-Test) of boundary scan circuits and interconnects then illustrates the ultimate step in board off-line testing. Next, the UBIST (Unified BIST) scheme - merging BIST and self-checking capabilities for circuit on-line testing - is combined with the IEEE 1149.1 standard to obtain a design strategy that unifies the testing of interconnects and circuits populating boards and modules. Finally, we propose a fault-tolerant scheme based on the duplication of such safe modules, which makes the resulting system competitive in terms of reliability while preserving its inherent safety.
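The closing step of this abstract rests on a standard redundancy argument: duplicating a self-checking module makes it likely that at least one copy still works, even though the extra hardware raises the raw fault rate. A minimal sketch of that reliability trade-off, assuming constant failure rates and ideal error detection and switchover (assumptions of this illustration, not claims from the thesis):

```python
import math

def reliability_simplex(lam, t):
    """Reliability of a single module with constant failure rate lam (per hour)."""
    return math.exp(-lam * t)

def reliability_duplex(lam, t):
    """Reliability of a duplicated module pair: the system survives if at
    least one copy survives, assuming perfect error detection/switchover."""
    r = reliability_simplex(lam, t)
    return 1.0 - (1.0 - r) ** 2
```

For lam = 1e-5 per hour over 10,000 hours, a single module survives with probability exp(-0.1) ≈ 0.905, while the idealized duplex pair exceeds 0.99; real schemes fall between the two, since the comparison and switchover hardware can itself fail.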
53

雷曼兄弟倒閉對美國金融機構報酬率的影響 / The Impact of the Bankruptcy of Lehman Brothers on the Stock Returns of US Financial Institutions

郭惠萍, Kuo, Huei Ping Unknown Date (has links)
本研究探討雷曼兄弟倒閉事件對美國金融機構股價報酬率所帶來的傳染效果。我們的研究結果顯示,持有雷曼兄弟股份的金融機構受到較顯著的影響,而且在股東當中以商業銀行及投資銀行所受到之衝擊最為顯著,而投資銀行受影響之程度又高於商業銀行。我們也發現,金融機構對雷曼兄弟的持股比例愈高,其股價受到雷曼倒閉危機影響之程度亦愈高。此外,一些規模較小的金融機構在一些事件中亦呈現了顯著的反應,顯示在某種程度上,雷曼兄弟危機事件在金融產業當中引發了傳染效果(Contagion effect)。我們的實證結果也顯示,美國政府此次未介入救援雷曼兄弟的做法,被市場解讀為金融機構不再是「太大而不能倒(too-big-to-fail)」。 / We examine the contagion effect of Lehman Brothers' bankruptcy on the stock returns of US financial institutions. Our results show that financial institutions that held Lehman shares were affected more significantly. Among Lehman's shareholders, commercial banks and investment banks were hit hardest, with investment banks affected more strongly than commercial banks. We also find that the higher a financial institution's ownership stake in Lehman, the more its stock price was affected. In addition, some smaller financial institutions also reacted significantly in some events, implying, to some extent, a contagion effect within the financial sector. Our empirical results also indicate that the US government's decision not to rescue Lehman Brothers was perceived by the market as a signal that no financial institution is too big to fail.
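Studies of this kind are usually event studies: a market model is fitted over a pre-event estimation window, and abnormal returns are the prediction errors cumulated over the event window. The abstract does not state its estimator, so the sketch below assumes the standard market-model formulation; the function name and the inputs are illustrative only.

```python
import numpy as np

def cumulative_abnormal_return(firm_est, mkt_est, firm_evt, mkt_evt):
    """Fit r_firm = a + b * r_mkt over the estimation window by OLS,
    then sum the prediction errors over the event window (the CAR)."""
    b, a = np.polyfit(mkt_est, firm_est, 1)  # returns slope, intercept
    abnormal = firm_evt - (a + b * mkt_evt)
    return abnormal.sum()
```

A significantly negative CAR around the bankruptcy filing, concentrated in institutions holding Lehman shares, is the kind of evidence the abstract summarizes.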
54

Management of project failures in the gaming industry : The normalization approach

Mahamud, Abdirahman, Khayre, Abdimajid, Bergholm, Paula January 2019 (has links)
In creative industries such as gaming, the failure rate is typically higher than in many other industries. This is usually due to the constant need for innovation and the intense competition in the gaming industry. Firms in this industry take on multiple innovation projects, which inherently have a high rate of failure. The literature has previously stressed the importance of failure and how it can enable learning that can be a crucial asset for any organization. However, failure brings negative emotions that can slow down or block the learning process of an individual or of the organization at large. In an industry where failure is common, it is important for management to address this issue. Therefore, the purpose of this thesis is to explore the approach the management of small gaming firms takes in order to normalize failure. In this study, the data was collected qualitatively, using a thematic analysis to recognize consistent themes and patterns arising from the primary data. By conducting four semi-structured interviews with two different companies (two interviews each), we found that both companies have a similar attitude towards project failure: both either expect failure to happen or even encourage it. One of our key findings was that both companies emphasize failing fast, which allows them to save time, money and resources, and helps some members of the organization react less emotionally to the termination of a project. The empirical results were then discussed and analyzed by judging whether the actions these companies took can be classified as a way of normalizing failure. We concluded that management employed various methods of action that would eventually lead to the normalization of failure, including the fail-fast attitude, failure-supportive slogans and planning for failure beforehand.
55

EVALUATING THE IMPORTANCE OF A STRUCTURED METHODOLOGY BY MANAGEMENT OF CRITICAL RISK/FAILURE FACTORS IN ERP IMPLEMENTATION

Bayir, Arzu, Shetty, Bhavya January 2011 (has links)
Studies in recent years have revealed the challenges involved in deploying ERP solutions due to their complexity. Before attempting to implement an ERP system, it is essential to study aspects such as project management, training, and change management in detail to manage the associated risks. When an ERP project is undertaken with insufficient planning, it may result in a failure to integrate business processes and in substantial financial loss. Research has been pursued to identify the critical risk/failure factors that may arise during implementation and the measures that should be taken to manage them. However, there is a lack of research on identifying and managing critical risk/failure factors using a structured methodology. This raises the question: can a structured methodology identify and manage critical risk/failure factors and support deploying ERP solutions with better quality? A study of the Microsoft Sure Step methodology is performed to identify critical risk/failure factors that frequently occur during ERP implementation. These factors are derived from eight articles. Having determined the critical risk/failure factors, we investigated whether the Sure Step methodology contains procedures that address them.
56

Development of an energy efficient, robust and modular multicore wireless sensor network

Shi, Hong-Ling 23 January 2014 (has links) (PDF)
The wireless sensor network is a key technology of the 21st century: it has a multitude of applications and is becoming a new way for computer systems to interact with the physical environment. At the same time, a wireless sensor network is a highly resource-constrained system, so the techniques used to develop traditional embedded systems cannot be applied directly. To date, wireless sensor nodes have been implemented using a single-processor architecture. This approach does not yield a robust, energy-efficient wireless sensor network for applications such as precision agriculture (outdoor) and telemedicine. The aim of this thesis is to develop a new approach to building a wireless sensor node using a multicore architecture, in order to increase both its robustness and its lifetime (by reducing energy consumption).
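The lifetime claim can be made concrete with the usual duty-cycle energy model: average current draw is the duty-weighted mix of active and sleep currents, and lifetime is battery capacity divided by that average. A hedged sketch; the numbers and function names are illustrative, not measurements from the thesis:

```python
def avg_current_ma(active_ma, sleep_ma, duty):
    """Average current draw for a node that is active a fraction `duty` of the time."""
    return duty * active_ma + (1.0 - duty) * sleep_ma

def lifetime_hours(battery_mah, active_ma, sleep_ma, duty):
    """Idealized node lifetime: battery capacity over average current draw."""
    return battery_mah / avg_current_ma(active_ma, sleep_ma, duty)
```

At 20 mA active, 10 µA asleep and a 1% duty cycle, a 2000 mAh battery lasts roughly 9500 hours; a multicore node that can power down cores independently aims to push the effective sleep share even higher.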
57

Localización de faltas en sistemas de distribución de energía eléctrica usando métodos basados en el modelo y métodos basados en el conocimiento

Mora Flórez, Juan José 15 December 2006 (has links)
La calidad de energía eléctrica incluye la calidad del suministro y la calidad de la atención al cliente. La calidad del suministro a su vez se considera que la conforman dos partes, la forma de onda y la continuidad. En esta tesis se aborda la continuidad del suministro a través de la localización de faltas. Este problema se encuentra relativamente resuelto en los sistemas de transmisión, donde por las características homogéneas de la línea, la medición en ambos terminales y la disponibilidad de diversos equipos, se puede localizar el sitio de falta con una precisión relativamente alta. En sistemas de distribución, sin embargo, la localización de faltas es un problema complejo y aún no resuelto. La complejidad es debida principalmente a la presencia de conductores no homogéneos, cargas intermedias, derivaciones laterales y desbalances en el sistema y la carga. Además, normalmente, en estos sistemas sólo se cuenta con medidas en la subestación, y un modelo simplificado del circuito. Los principales esfuerzos en la localización han estado orientados al desarrollo de métodos que utilicen el fundamental de la tensión y de la corriente en la subestación, para estimar la reactancia hasta la falta. Como la obtención de la reactancia permite cuantificar la distancia al sitio de falta a partir del uso del modelo, el método se considera Basado en el Modelo (MBM). Sin embargo, algunas de sus desventajas están asociadas a la necesidad de un buen modelo del sistema y a la posibilidad de localizar varios sitios donde puede haber ocurrido la falta, esto es, se puede presentar múltiple estimación del sitio de falta. Como aporte, en esta tesis se presenta un análisis y prueba comparativa entre varios de los MBM frecuentemente referenciados. 
Adicionalmente se complementa la solución con métodos que utilizan otro tipo de información, como la obtenida de las bases históricas de faltas con registros de tensión y corriente medidos en la subestación (no se limita solamente al fundamental). Como herramienta de extracción de información de estos registros, se utilizan y prueban dos técnicas de clasificación (LAMDA y SVM). Éstas relacionan las características obtenidas de la señal, con la zona bajo falta y se denominan en este documento como Métodos de Clasificación Basados en el Conocimiento (MCBC). La información que usan los MCBC se obtiene de los registros de tensión y de corriente medidos en la subestación de distribución, antes, durante y después de la falta. Los registros se procesan para obtener los siguientes descriptores: a) la magnitud de la variación de tensión ( dV ), b) la variación de la magnitud de corriente ( dI ), c) la variación de la potencia ( dS ), d) la reactancia de falta ( Xf ), e) la frecuencia del transitorio ( f ), y f) el valor propio máximo de la matriz de correlación de corrientes ( Sv ), cada uno de los cuales ha sido seleccionado por facilitar la localización de la falta. A partir de estos descriptores, se proponen diferentes conjuntos de entrenamiento y validación de los MCBC, y mediante una metodología que muestra la posibilidad de hallar relaciones entre estos conjuntos y las zonas en las cuales se presenta la falta, se seleccionan los de mejor comportamiento. Los resultados de aplicación demuestran que con la combinación de los MCBC con los MBM, se puede reducir el problema de la múltiple estimación del sitio de falta. El MCBC determina la zona de falta, mientras que el MBM encuentra la distancia desde el punto de medida hasta la falta; la integración en un esquema híbrido toma las mejores características de cada método. En este documento, lo que se conoce como híbrido es la combinación de los MBM y los MCBC, de una forma complementaria. 
Finalmente y para comprobar los aportes de esta tesis, se propone y prueba un esquema de integración híbrida para localización de faltas en dos sistemas de distribución diferentes. Tanto los métodos que usan los parámetros del sistema y se fundamentan en la estimación de la impedancia (MBM), como aquellos que usan como información los descriptores y se fundamentan en técnicas de clasificación (MCBC), muestran su validez para resolver el problema de localización de faltas. Ambas metodologías propuestas tienen ventajas y desventajas, pero según la teoría de integración de métodos presentada, se alcanza una alta complementariedad, que permite la formulación de híbridos que mejoran los resultados, reduciendo o evitando el problema de la múltiple estimación de la falta. / Power quality includes supply quality and customer service quality. Supply quality in turn comprises two aspects: wave shape and continuity. This thesis considers the fault location problem, a topic related to supply continuity. Fault location is a relatively solved problem in power transmission systems, thanks to the homogeneous characteristics of the power line, measurements at both terminals and the availability of equipment such as the fault locators normally included in distance relays. In power distribution systems, however, fault location is a complex problem that remains unsolved. The complexity is mainly due to the presence of laterals, load taps, non-homogeneous conductors, and unbalances in the system and the load. In addition, these power systems have measurements only at the substation and a simplified model of the power network. The main efforts to solve this problem have been oriented towards the development of impedance-based methods. Because the reactance estimate, together with the circuit model, makes it possible to estimate the distance to the faulted node, these methods are considered Model Based Methods (MBM). 
However, their main drawbacks are the requirement of a good system model and the possibility of multiple estimates of the fault location due to the tree shape of such networks. As a result, this thesis presents an analysis and a comparative test of several frequently cited MBM. In addition, the fault location solution is complemented with methods that use more than the rms values of current and voltage obtained from fault databases. As tools to relate this information to the fault location, two classification techniques are used and tested (LAMDA and SVM). These relate the voltage and current characteristics to the faulted zone and are denoted in this document as Classification Methods Based on the Knowledge (CMBK). The information used by the CMBK is obtained from current and voltage fault registers measured at the distribution substation before, during and after the fault. These registers are pre-processed to obtain the following characteristics or descriptors: a) the magnitude of the voltage variation between the pre-fault and fault steady states ( dV ), b) the magnitude of the current variation between the pre-fault and fault steady states ( dI ), c) the magnitude of the apparent-power variation between the pre-fault and fault steady states ( dS ), d) the magnitude of the reactance as seen from the substation ( Xf ), e) the frequency of the transient caused by the fault ( f ), and f) the maximum eigenvalue of the correlation matrix of the currents ( Sv ). Using these descriptors, several training and validation sets were tried with the CMBK, and a proposed methodology shows how to relate these sets to the faulted zone and select those offering the best performance. The application results demonstrate that, by combining the MBM with the CMBK, the multiple estimation of the fault location can be reduced. 
The CMBK determines the faulted zone while the MBM finds the distance from the measurement point to the faulted node; their integration in a hybrid approach thus takes the best characteristics of each method. In this document, the term hybrid describes the complementary combination of MBM and CMBK. Finally, to validate the contributions of the thesis, a hybrid integration scheme for fault location is proposed and tested in two different power distribution systems. Both the methods that use the system parameters and are based on impedance estimation (MBM), and those that use the signal descriptors and are based on classification techniques (CMBK), have shown their capability to solve the fault location problem. The two proposed methodologies have advantages and drawbacks, but according to the integration theory presented, a high degree of complementarity is reached, which makes it possible to develop hybrids that improve the results, reducing or avoiding the multiple estimation of the fault location.
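The descriptors and the reactance-based distance estimate can be illustrated with a short sketch. The exact definitions used in the thesis may differ; the formulas below (voltage sag, current rise, apparent-power change, reactance seen from the substation) are common choices, and `distance_estimate` assumes a uniform per-kilometre line reactance. The transient-frequency and eigenvalue descriptors require waveform records and are omitted here.

```python
def fault_descriptors(v_pre, i_pre, v_fault, i_fault):
    """Four of the descriptors, from pre-fault and during-fault phasors
    (complex volts and amps) measured at the substation."""
    dV = abs(v_pre) - abs(v_fault)                # voltage sag magnitude
    dI = abs(i_fault) - abs(i_pre)                # current rise magnitude
    dS = abs(v_fault * i_fault.conjugate()
             - v_pre * i_pre.conjugate())         # apparent-power change
    Xf = (v_fault / i_fault).imag                 # reactance seen from the substation
    return {"dV": dV, "dI": dI, "dS": dS, "Xf": Xf}

def distance_estimate(Xf, x_ohm_per_km):
    """MBM-style distance estimate: fault reactance over per-km line reactance."""
    return Xf / x_ohm_per_km
```

With 2.6 Ω of reactance seen from the substation and a 0.4 Ω/km line, the estimate is 6.5 km; on a branched feeder several nodes may lie at that electrical distance, which is exactly the multiple-estimation problem the CMBK zone classifier helps resolve.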
58

Mecanismos de segunda geração e o novo standard internacional de regimes especiais bancários

Arruda, Daniel Sivieri 15 February 2017 (has links)
The financial events of 2007-2009 - the subprime crisis - exposed some of the fragilities of financial institutions. The bank resolution mechanisms then in place - tools by which the resolution authority restructures a financial institution to guarantee the continuity of its functions in a crisis, preserve financial stability and restore the viability of all or part of the institution - proved unable to solve the problem of 'too big to fail' institutions. The American government, like those of other countries, was forced to mount a large rescue program using public resources, the bailout. In an attempt to avoid the use of public funds, the Financial Stability Board implemented new bank resolution mechanisms designed to encourage market-based solutions, the bail-in, as opposed to the bail-out. This work addresses the debates surrounding the regulation of the financial system, the inability of first-generation mechanisms to deal with the subprime crisis, and the instruments created after the crisis. 
It therefore discusses the reasons why banks need to be regulated, the problems posed by 'too big to fail' institutions, and the need to create new resolution mechanisms for distressed financial institutions. In this context, it examines the structure and application of the rules created after the subprime crisis, in particular the Key Attributes of Effective Resolution Regimes of Financial Institutions, drawn up by the FSB, which establish the international standards for the resolution of financial institutions. The work analyzes the international reform agenda, especially as it unfolds in the US and Europe. To that end, it also discusses the instruments created by the BRRD in Europe and by the Dodd-Frank Act in the US, showing their characteristics and differences. Finally, it concludes that the instruments created are part of a broad international consensus on bank resolution and recovery plans, as well as on the role of the banking regulator in reaction to recent financial events. Whether the plans will contribute significantly to ensuring the resolvability of large, systemically important financial institutions remains an open question. The complexity of financial innovation and of the institutions themselves may hinder a more precise assessment of the effectiveness of resolution plans.
59

Using Regression-Based Effect Size Meta-Analysis to Investigate Coral Responses to Climate Change

Kornder, Niklas Alexander 15 July 2016 (has links)
Attempts to quantify the effects of ocean acidification and warming (OAW) on scleractinian corals provide a growing body of response measurements. However, placing empirical results into an ecological context is challenging, owing to variations that reflect both natural heterogeneity and scientific bias. This study addresses the heterogeneity of climate-change-induced changes in coral recruitment and calcification. To discern scientific bias and identify drivers of the remaining heterogeneity, 100 publications were analyzed using a combination of weighted mixed-effects meta-regression and factorial effect-size meta-analysis. A linear model was applied to quantify the variation caused by differing stress levels across studies. The least-squares predictions were then used to standardize individual study outcomes, and effect-size meta-analysis was performed on original and standardized outcomes separately. On average, increased temperature significantly reduces larval survival, while ocean acidification impedes settlement and calcification. Coral resistance to OAW is likely governed by biological traits (genus and life-cycle stage), environmental factors (abiotic variability) and experimental design (feeding regime, stressor magnitude, and exposure duration). Linear models suggest that calcification rates are driven by carbonate and bicarbonate concentrations, which act additively with warming. Standardizing outcomes to linear-model predictions proved useful in discerning strong sources of scientific bias. The approach used in this study can improve modelling projections and inform policy and management on changes in coral community structure associated with the expected future intensification of OAW.
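At the core of effect-size meta-analysis is inverse-variance pooling. The sketch below shows the plain fixed-effect version of that weighting; the study itself uses a weighted mixed-effects meta-regression, which adds moderator terms and a between-study variance component on top of this scheme.

```python
import math

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error
    (fixed-effect model; no between-study variance term)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    se = math.sqrt(1.0 / total)
    return mean, se
```

Pooling two effects of 0.2 and 0.4 with equal variances of 0.01 yields a mean of 0.3 with a standard error of sqrt(0.005) ≈ 0.071; precisely measured studies dominate the weighted mean, which is why standardizing outcomes for differing stress levels matters before pooling.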
60

Development of an energy efficient, robust and modular multicore wireless sensor network / Développement d’un capteur multicoeur sans fil à énergie efficient, robuste et modulaire

Shi, Hong-Ling 23 January 2014 (has links)
Le réseau de capteurs sans fil est une technologie clé du 21ème siècle car ses applications sont nombreuses et diverses. Cependant le réseau de capteurs sans fil est un système à très forte contrainte de ressources. En conséquence, les techniques utilisées pour le développement des systèmes embarqués classiques ne peuvent être appliquées. Aujourd'hui les capteurs sans fil ont été réalisés en utilisant une architecture monoprocesseur. Cette approche ne permet pas de réaliser un capteur sans fil robuste et à énergie efficiente pour les applications telles que agriculture de précision (en extérieur) et télémédecine. Les travaux menés dans le cadre de cette thèse ont pour but de développer une nouvelle approche pour la réalisation d'un capteur sans fil en utilisant une architecture multicoeur pour permettre à la fois d'augmenter sa robustesse et sa durée de vie (minimiser sa consommation énergétique). / The wireless sensor network is a key technology of the 21st century: it has a multitude of applications and is becoming a new way for computer systems to interact with the physical environment. At the same time, a wireless sensor network is a highly resource-constrained system, so the techniques used to develop traditional embedded systems cannot be applied directly. To date, wireless sensor nodes have been implemented using a single-processor architecture. This approach does not yield a robust, energy-efficient wireless sensor network for applications such as precision agriculture (outdoor) and telemedicine. The aim of this thesis is to develop a new approach to building a wireless sensor node using a multicore architecture, in order to increase both its robustness and its lifetime (by reducing energy consumption).
