251 |
Optimización de arquitecturas distribuidas para el procesado de datos masivosHerrera Hernández, José 02 September 2020 (has links)
Tesis por compendio / [EN] The use of systems for the efficient treatment of large data volumes has grown in popularity during the last few years. This has led to the development of new technologies, methods, and algorithms that make efficient use of the infrastructures. Processing Big Data is not exempt from numerous problems and challenges, some of which this work attempts to improve. Within the existing possibilities, we must take into account the evolution that these systems have undergone in recent years and the room for improvement that exists in each one.
The first system of study, the Grid, constitutes an initial approach to massive distributed processing and represents one of the first distributed systems for handling large data sets. By contributing to the modernization of the data access mechanisms, the processing carried out in current genomics is improved. The studies presented are centred on the Burrows-Wheeler Transform, which is already known in genomic analysis for its ability to improve alignment times of short polynucleotide chains. This timing improvement is further refined by reducing remote accesses through an intermediate cache system that optimizes execution on an already consolidated Grid system. This cache is implemented as a complement to the GFAL standard file-access library used in the IberGrid infrastructure.
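The abstract does not detail the cache implementation; purely as an illustration of the kind of read-through cache that can sit in front of a remote Grid file-access layer, a minimal sketch is given below (the class, its methods and the `fetch_remote` callback are hypothetical stand-ins, not the actual GFAL plugin):

```python
import os
import hashlib
import tempfile

class ReadThroughCache:
    """Minimal local cache in front of a remote file-access layer.

    `fetch_remote` is any callable that downloads a remote path to a
    local destination; here it stands in for a Grid access library
    such as GFAL, whose real plugin interface is not described in the abstract.
    """

    def __init__(self, fetch_remote, cache_dir=None):
        self.fetch_remote = fetch_remote
        self.cache_dir = cache_dir or tempfile.mkdtemp(prefix="grid-cache-")

    def _local_path(self, remote_path):
        key = hashlib.sha256(remote_path.encode()).hexdigest()
        return os.path.join(self.cache_dir, key)

    def open(self, remote_path, mode="rb"):
        local = self._local_path(remote_path)
        if not os.path.exists(local):          # cache miss: one remote access
            self.fetch_remote(remote_path, local)
        return open(local, mode)               # cache hit: purely local I/O
```

With such a layer, repeated alignment queries against the same reference index would touch the remote storage element only once, which is the kind of remote-access reduction the abstract describes.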
In a second step, data processing in Big Data architectures is considered. Improvements are made in both the Lambda and Kappa architectures by searching for methods to process large volumes of multimedia information. For the Lambda architecture, Apache Hadoop is used as the main processing technology, while for the Kappa architecture, Apache Storm is used as a real-time distributed computing system. In both architectures the scope of use is extended and the execution is optimized by applying algorithms that address the problems specific to each technology.
The last step focuses on the data volume problem, which allows the microservices architecture to be improved. The total number of nodes running in a processing system provides a measure of the capacity for processing large data volumes. In this way, the ability to increase and decrease capacity allows optimal governance. By proposing a bio-inspired system, a dynamic and distributed self-scaling method is provided that improves on commonly used methods when facing unpredictable workloads.
The three key magnitudes of Big Data, also known as the V's, are represented and improved: velocity, enriching data access systems by reducing search processing times in bioinformatic Grid systems; variety, handling multimedia data, which is less commonly addressed than tabular data; and finally, volume, increasing self-scaling capabilities through software containers and bio-inspired algorithms. / Herrera Hernández, J. (2020). Optimización de arquitecturas distribuidas para el procesado de datos masivos [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/149374 / Compendio
|
252 |
Trust Management Systems: Reference Architecture and PersonalizationRashad, Hisham S. M. 20 September 2017 (has links)
Trust is the cornerstone of success in any relationship between two or more parties. Generally, we do not socialize, seek advice, consult, cooperate, or buy or sell goods and services from/to others unless we establish some level of mutual trust between the interacting parties. When e-commerce was emerging from infancy, the concept of trusting an entity in a virtual world was a huge obstacle. Gradually, increasingly sophisticated, largely generic reputation scoring and management systems were embedded into the evolving marketplaces. Current technologies, including cloud computing, social networking, and mobile applications, coupled with the explosion in storage capacity and processing power, are giving rise to large-scale global marketplaces for a wide variety of resources and services, such as Amazon.com, BitTorrent, WebEx and Skype. In such marketplaces, user entities, or users for short (namely consumers, providers and brokers), are largely autonomous with vastly diverse requirements, capabilities and trust profiles. Users' requirements may include service quality levels, cost, ease of use, etc. Users' capabilities may include assets owned or outsourced. Trustors' profiles may include what is promised and the commitment to keep these promises. In such a large-scale heterogeneous marketplace, ensuring trustworthy interactions and transactions in services and resources constitutes a challenging endeavor.
Currently, solving such issues generally involves adopting "one-size-fits-all" trust models and systems. Such an approach is limiting due to variations in technology, conflicts between users' requirements, and/or conflicts between user requirements and service outcomes. Additionally, this approach may result in service providers being overwhelmed by adding new resources to satisfy all possible requirements, while having no information or guarantees about the level of trust they gain in the network.
Accordingly, we hypothesize the need for personalizable customizable Trust Management Systems (TMSs) for the robustness and wide-scale adoption of large-scale marketplaces for resources and services. Most contemporary TMSs suffer from the following drawbacks:
• Oblivious to diversities in trustors' requirements,
• Primarily utilize feedback and direct or indirect experience as the only form of credentials for trust computation,
• Trust computation methodologies are generally hardcoded and not reconfigurable,
• Trust management operations, which we identify as monitoring, data management, analysis, expectation management, and decision making, are tightly coupled. Such coupling impedes customizability and personalization, and
• Do not consider context in trust computations, where trust perspectives may vary from one context to another.
Given these drawbacks and the large scale of the global marketplace of resources and services, a reference architecture for trust management systems is needed, one which can incorporate current systems and may be used to guide the development of a wide spectrum of trust management systems ranging from un-personalized to fully personalized systems. To the best of our knowledge, no TMS reference architecture exists in the literature.
In this dissertation, we propose a new Reference Architecture for Trust Management (RATM). The proposed reference architecture applies separation of concerns among trust management operations, namely decision, expectation, analytics, data management and monitoring. RATM defines trust management operations through five reconfigurable components which collectively can be used to implement a wide spectrum of trust management systems ranging from generic to highly personalized systems. We used RATM for trust personalization, proposing a Personalized Trust Management System (PTMS) based on RATM. We evaluated PTMS's scalability and demonstrated its effectiveness, efficiency and resilience by contrasting it against a Generic Trust Management System (GTMS). We used two case studies for our evaluations; namely, BitTorrent and a video conferencing application.
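RATM itself is defined in the dissertation; as a loose sketch of what five decoupled, swappable trust-management components might look like in code (all class and function names below are made up for illustration, not taken from the work), consider:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Evidence:
    source: str          # who reported it
    context: str         # e.g. "file-sharing", "video-conferencing"
    rating: float        # normalized feedback in [0, 1]

class TrustSystem:
    """Each trust management operation is a separate, swappable component,
    so an instance can be personalized by recombining them."""

    def __init__(self,
                 monitor: Callable[[], List[Evidence]],
                 store: Callable[[List[Evidence]], List[Evidence]],
                 analyze: Callable[[List[Evidence], str], float],
                 expect: Callable[[float], float],
                 decide: Callable[[float, float], bool]):
        self.monitor, self.store = monitor, store
        self.analyze, self.expect, self.decide = analyze, expect, decide

    def trustworthy(self, context: str, threshold: float) -> bool:
        evidence = self.store(self.monitor())       # monitoring + data management
        score = self.analyze(evidence, context)     # analysis (context-aware)
        expected = self.expect(score)               # expectation management
        return self.decide(expected, threshold)     # decision making

# A "personalized" instance simply swaps in a different analyze/decide pair,
# e.g. one that weights recent, same-context evidence more heavily.
```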
Intellectual Merit
In this work, we propose RATM, a reference architecture for trust management systems that can be used to implement a wide variety of trust management systems ranging from generic (un-personalized) systems to highly personalized systems. We assume a service-based environment where consumers, brokers and service providers interact and transact in services and resources to satisfy their own trust requirements. We used RATM to implement a personalized trust management system (TMS). The main contributions of this work are as follows:
• Proposing RATM for the guidance and development of a wide spectrum of TMSs ranging from un-personalized to fully personalized systems, and
• Utilizing our RATM to propose and implement a personalized, scalable TMS with varying trust computation models.
Broader Impact
RATM provides a reference architecture for trust management which can be used to develop and evaluate a wide spectrum of TMSs. Personalization, in general, paves the road for reaching high levels of satisfaction, where interacting parties' requirements are individually considered and thus consumers are served the best-suited service(s). Hence, we claim that PTMS would largely enhance large-scale heterogeneous systems offering services and resources. This could lead to more cooperation, more transactions, more satisfaction, less malicious behavior and lower costs. / PHD / Trust is the cornerstone of success in any relationship between two or more persons. Generally, we do not socialize, seek advice, consult, cooperate, or buy or sell goods and services from/to others unless we establish some level of mutual trust between the interacting parties. When e-commerce was first used, the concept of trusting a service delivered by someone who is not physically in the same place was a huge obstacle. Gradually, more sophisticated, largely generic reputation scoring and management systems were introduced into the new internet marketplaces. A reputation scoring and management system collects feedback from different users about service providers in a certain online marketplace and uses it to anticipate the future behavior of these providers. Current computer technologies, including cloud computing, social networking, and mobile applications, coupled with the explosion in computer and mobile devices' storage capacity and processing power, are giving rise to large-scale global marketplaces offering a wide variety of resources and services to consumers across the globe. Examples include Amazon.com, BitTorrent, WebEx and Skype. In such marketplaces, consumers, providers and brokers are largely autonomous, with vastly diverse requirements, capabilities and trust profiles. By autonomous we mean acting in accordance with one's moral duty rather than one's desires. Users' requirements may include service quality levels, cost, ease of use, etc. Users' capabilities may include assets owned or leased from others. Trustors' profiles may include what is promised and the commitment to keep these promises. In such a large-scale marketplace, ensuring trustworthy interactions and transactions in services and resources constitutes a challenging endeavor. By trustworthy interaction we mean transactions which deliver results that are accepted by all parties.
Currently, solving such issues of trust generally involves adopting "one-size-fits-all" trust models and systems. By trust models and systems we mean the computer programs that perform the reputation scoring and management, i.e., that select a single service which can serve all requirements. Such an approach is limiting due to variations in technology, conflicts between users' requirements and/or conflicts between user requirements and service outcomes. Additionally, this approach may result in service providers being overwhelmed by adding new resources to satisfy all possible requirements, while having no information or guarantees about the level of trust they gain in the eyes of their consumers.
Accordingly, we hypothesize the need for personalizable customizable Trust Management Systems (TMSs) for the robustness and wide-scale adoption of large-scale marketplaces for resources and services. In other words, we assume the need for a trust management system which can select services satisfying transaction parties’ requirements. Most contemporary TMSs suffer from the following drawbacks:
• Select a one-size-fits-all service,
• Utilize one and only one type of data for calculating the score used to anticipate the future behavior of a party,
• Utilize one and only one method to calculate the score value used to anticipate the future behavior of a party,
• The trust scoring calculation method cannot be reprogrammed,
• The trust scoring calculation method does not consider the context in which the data was collected.
Given these drawbacks and the large scale of the global marketplace of resources and services, a reference architecture for trust management systems is needed, one which can incorporate current systems and may be used to guide the development of a wide spectrum of trust management systems ranging from un-personalized to fully personalized systems. To the best of our knowledge, no TMS reference architecture exists in the literature.
In this dissertation, we propose a new Reference Architecture for Trust Management (RATM), which overcomes the drawbacks of current systems. It proposes evaluating trust through a number of flexible operations, namely decision, expectation, analytics, data management and monitoring. These operations can collectively be used to implement a wide spectrum of trust management systems ranging from generic to highly personalized systems. We used RATM for trust personalization, where we propose a Personalized Trust Management System (PTMS) based on RATM. We evaluated PTMS's ability to sustain an increasing number of users and demonstrated its effectiveness, efficiency and its ability to resist attacks. We achieved this by contrasting experimental results against those of a Generic Trust Management System (GTMS). We used two case studies for our evaluations; namely, BitTorrent and a video conferencing application.
|
253 |
Bio-Inspired Trailing Edge Noise Control: Acoustic and Flow MeasurementsMillican, Anthony J. 09 May 2017 (has links)
Trailing edge noise control is an important problem associated mainly with wind turbines. As turbulence in the air flows over a wind turbine blade, it impacts the trailing edge and scatters, producing noise. Traditional methods of noise control involve modifying the physical trailing edge, or the scattering efficiency. Recently, inspired by the downy covering of owl feathers, researchers developed treatments that can be applied to the trailing edge to significantly reduce trailing edge noise. It was hypothesized that the noise reduction was due to manipulating the incoming turbulence, rather than the physical trailing edge itself, representing a new method of noise control. However, only acoustic measurements were reported, meaning the associated flow physics were still unknown. This thesis describes a comprehensive wall jet experiment to measure the flow effects near the bio-inspired treatments, termed “finlets” and “rails,” and relate those flow effects to the noise reduction. This was done using far-field microphones, a single hot-wire probe, and surface pressure fluctuation microphones. The far-field noise results showed that each treatment successfully reduced the noise, by up to 7 dB in some cases. The surface pressure measurements showed that the spanwise coherence was slightly reduced when the treatments were applied to the trailing edge. The velocity measurements clearly established the presence of a shear layer near the top of the treatments. As a whole, the dataset led to the shear-sheltering hypothesis: the bio-inspired treatments are effective based on reducing the spanwise pressure correlation and by sheltering the trailing edge from turbulent structures with the shear layer they create. / Master of Science / This thesis describes a project aimed at developing a technology inspired by the silent flight of owls, with the end goal of using this technology to reduce the noise generated by wind turbines. Specifically, the phenomenon known as "trailing edge noise" is the primary source of wind turbine noise, and is the noise source of interest here. It occurs when air turbulence (which can be thought of as unsteady air fluctuations) crashes into the rear (trailing) edge of wind turbine blades, scattering and producing noise. Typically, methods of reducing this noise source involve changing the shape of the trailing edge; this may not always be practical for existing wind turbines. Recently, inspired by the downy covering of owl feathers, researchers developed treatments that can be applied directly to the trailing edge, significantly reducing trailing edge noise. This bio-inspired concept was verified with numerous acoustic measurements. Based on those measurements, researchers hypothesized that the noise reduction was achieved by manipulating the incoming turbulence before it scattered off the trailing edge, rather than by changing the existing wind turbine blade, representing a new method of trailing edge noise control. However, as only acoustic measurements (not flow measurements) were reported, the changes in turbulence could not be examined.
With the above motivation in mind, this thesis describes a comprehensive wind tunnel experiment to measure the changes in the aerodynamics and turbulence near the bio-inspired treatments, and relate those changes to the reduction in trailing edge noise. This was done using a hot-wire probe to measure the aerodynamics, as well as microphones to measure the radiated noise and surface pressure fluctuations. As a whole, the experimental results led to the shear-sheltering hypothesis: the bio-inspired treatments are effective based on the creation of a shear layer (a thin region between areas with different air speeds) which shelters the trailing edge from some turbulence, as well as by de-correlating surface pressure fluctuations along the trailing edge.
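Spanwise coherence of the kind discussed above is typically quantified as the magnitude-squared coherence between pairs of surface-pressure microphones; a generic sketch of that calculation with placeholder signals (not the thesis's actual processing chain or sampling parameters) is:

```python
import numpy as np
from scipy.signal import coherence

fs = 51200                      # sampling rate in Hz (illustrative value)
t = np.arange(0, 5.0, 1 / fs)

# Stand-ins for two spanwise-separated surface-pressure signals; in the
# experiment these would come from the trailing-edge microphones.
p1 = np.random.randn(t.size)
p2 = 0.6 * p1 + 0.8 * np.random.randn(t.size)   # partially correlated

# Magnitude-squared coherence gamma^2(f) between the two sensors.
f, gamma2 = coherence(p1, p2, fs=fs, nperseg=4096)

# A treatment that de-correlates the spanwise pressure field would show a
# lower gamma^2 over the frequency band where the noise is reduced.
print(f[:5], gamma2[:5])
```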
|
254 |
Conflict, Paradox, and the Role of Structure in True IntelligenceBettendorf, Isaac T. 04 April 2024 (has links)
Novel forms of brain-inspired programming models, tied to novel computer architectures, are required both to understand the mysteries of intelligence and to break barriers in computational complexity and computer parallelism. Artificial Intelligence is focused on developing complex programs based on abstract, statistical prediction engines that require large datasets, vast amounts of computational power, and unbounded computation time. By contrast, the brain utilizes relatively few experiences to make decisions in unpredictable, time-constrained situations while utilizing relatively small amounts of physical computational space and power with high degrees of complexity and parallelism. We observe that intelligence requires the accommodation of ambiguity, conflict, and paradox. From a structural perspective, this means the same set of inputs leads to conflicting results that are likely produced in isolated regions of the brain that function independently until an answer must be chosen. We further observe that, unlike computer programs, brains constantly interact with the physical world, where external constraints force the selection of the best available response in time-quality trade-offs ranging from fight-or-flight to deep thinking. For example, when intelligent beings are presented with a set of inputs, those inputs can be processed with different levels of thinking, utilizing heterogeneous algorithms to produce answers dependent upon the time available to process them. We introduce the Troop meta-approach, a novel meta-level computer architecture and programming model.
Experiments demonstrate our approach to modeling conflict when the same set of inputs is heterogeneously and independently processed, using maze solving and ordered search in real-world environments with unpredictable, random time constraints. Across one hundred trials, on average, the Troop solution solves mazes almost six times faster than the only other solution, which does not accommodate conflict but can always produce a result when required. Two other experiments based on ordered search show that, on average, the Troop solution returns a position that is over twice as accurate as the other solutions, which do not accommodate conflict but always produce a result when required. This work lays the foundation for more research in algorithms that utilize time-accuracy trade-offs consistent with our approach. / This material is based upon work supported by the National Science Foundation under Grant No. 2204780. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. / Master of Science / New types of brain-inspired computer architectures and programming models are needed to break barriers that hinder traditional methods in computer parallelism, as well as to better understand the phenomenon by which intelligence emerges from the structure of the human brain. Traditional research in the field of Artificial Intelligence is focused on developing complex programs based on simulating low-level models of the brain, such as artificial neural networks. The most advanced of these methods are processed on large supercomputers that use vast amounts of power and have unlimited amounts of time to process a task, producing a single result. By contrast, the human brain is relatively small and uses very little power. Furthermore, it can use relatively few experiences to make very quick, less accurate, but necessary decisions in order to survive in unpredictable environments. But the brain can produce many different and conflicting decisions for the same problem. Given more time, the human brain can use higher levels of thinking, located in different parts of the brain, to produce better decisions. Thus, we observe that intelligence requires the ability to handle conflicting answers to the same problem. From a high-level perspective, this means different and independent structures of the brain can simultaneously produce conflicting answers that solve the same problem. We further observe that, unlike traditional computer programs, the brain constantly interacts with the physical world, where different circumstances within the environment force the best available decision to be carried out. Based on these observations, this research introduces novel approaches, which we collectively reference as the Troop meta-approach, to develop computer architectures that solve real-world problems, such as maze solving.
This research demonstrates the approaches by first introducing scenarios inspired by humans solving problems in environments where unforeseeable events occur that force decisions to be made that are not the most accurate but necessary not to fail the overall objective. For example, military and law enforcement trainees use square mazes to prepare for unpredictable environments. When a threat presents itself, if a soldier or officer does not react to a circumstance in time, their failure may be fatal. To demonstrate that our approaches are feasible, this research then presents three experiments based on the problems of the scenarios and uses the Troop meta-approach to solve each one. Across three experiments, on average, the computer architectures and related algorithms developed using the Troop meta-approach solve mazes or search databases while responding to unpredictable real-world events faster or more accurately than traditional architectures and algorithm pairs that do not handle simultaneous decisions that conflict. This work lays the foundation for more research in methods and computer architectures that utilize multiple conflicting decisions.
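The Troop architecture itself is defined in the thesis; purely to illustrate the time-quality trade-off described above, in which conflicting solvers run independently and the best available answer is taken when an unpredictable deadline arrives, a simplified sketch (with hypothetical solver functions and timings) might look like this:

```python
import random
import threading
import time

def fast_heuristic(problem):
    time.sleep(0.01)                       # quick, rough answer
    return {"quality": 0.5, "answer": "heuristic path"}

def deliberate_search(problem):
    time.sleep(0.5)                        # slower, better answer
    return {"quality": 0.95, "answer": "near-optimal path"}

def solve_with_deadline(problem, solvers, deadline_s):
    """Run conflicting solvers in parallel; when the (unpredictable)
    deadline expires, return the best result produced so far."""
    results = []
    lock = threading.Lock()

    def run(solver):
        r = solver(problem)
        with lock:
            results.append(r)

    threads = [threading.Thread(target=run, args=(s,), daemon=True) for s in solvers]
    for th in threads:
        th.start()
    time.sleep(deadline_s)                 # the external world forces a choice
    with lock:
        return max(results, key=lambda r: r["quality"]) if results else None

deadline = random.uniform(0.05, 1.0)       # unpredictable time constraint
print(solve_with_deadline("maze", [fast_heuristic, deliberate_search], deadline))
```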
|
255 |
Malleable Contextual Partitioning and Computational DreamingBrar, Gurkanwal Singh 20 January 2015 (has links)
Computer Architecture is entering an era where hundreds of Processing Elements (PE) can be integrated onto single chips, even as decades-long, steady advances in instruction- and thread-level parallelism are coming to an end. And yet, conventional methods of parallelism fail to scale beyond 4-5 PEs, well short of the levels of parallelism found in the human brain. The human brain is able to maintain constant real-time performance as cognitive complexity grows virtually unbounded through our lifetime. Our underlying thesis is that contextual categorization, leading to simplified algorithmic processing, is crucial to the brain's performance efficiency. But, since the overheads of such reorganization are unaffordable in real time, we also observe the critical role of sleep and dreaming in the lives of all intelligent beings. Based on the importance of dream sleep in memory consolidation, we propose that it is also responsible for contextual reorganization. We target mobile device applications that can be personalized to the user, including speech, image and gesture recognition, as well as other kinds of personalized classification, which are arguably the foundation of intelligence. These algorithms rely on a knowledge database of symbols, where the database size determines the level of intelligence. Essential to achieving intelligence and a seamless user interface, however, is that real-time performance be maintained. Observing this, we define our chief performance goal as: maintaining constant real-time performance against ever-increasing algorithmic and architectural complexities. Our solution is a method for Malleable Contextual Partitioning (MCP) that enables closer personalization to user behavior. We conceptualize a novel architectural framework, the Dream Architecture for Lateral Intelligence (DALI), that demonstrates the MCP approach. The DALI implements a dream phase to execute MCP in ideal MISD parallelism and reorganize its architecture to enable contextually simplified real-time operation. With speech recognition as an example application, we show that the DALI is successful in achieving the performance goal, as it maintains constant real-time recognition, scaling almost ideally, with PE numbers up to 16 and vocabulary size up to 220 words. / Master of Science
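The partitioning algorithm is not spelled out in the abstract; as a loose illustration of an offline "dream phase" that reorganizes a flat symbol database into contexts so that real-time recognition only searches the active partition, consider the following sketch (the context keys, scoring function and toy vocabulary are invented for the example):

```python
from collections import defaultdict

# Offline "dream phase": reorganize the flat symbol database into
# contextual partitions so real-time search touches far fewer symbols.
def dream_phase(vocabulary, context_of):
    partitions = defaultdict(list)
    for word in vocabulary:
        partitions[context_of(word)].append(word)
    return dict(partitions)

# Real-time phase: search is restricted to the currently active context,
# keeping response time roughly constant as the overall vocabulary grows.
def recognize(observation, partitions, active_context, score):
    candidates = partitions.get(active_context, [])
    return max(candidates, key=lambda w: score(observation, w), default=None)

# Toy usage: contexts keyed by first letter, score by shared letters.
vocab = ["left", "right", "stop", "go", "lamp", "green"]
parts = dream_phase(vocab, context_of=lambda w: w[0])
print(recognize("lft", parts, "l", score=lambda o, w: len(set(o) & set(w))))
```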
|
256 |
Design, fabrication, and testing of a hybrid vacuum-electric actuated robotic armPeng, Zeyuan January 2024 (has links)
This thesis presents the design, fabrication, and testing of a robotic arm that is inherently safe, lightweight and affordable. The arm's three joints are driven by novel hybrid vacuum-electric actuators that each combine origami-inspired soft pneumatic actuators (OSPAs) with a DC motor. The arm is a type of collaborative robot, or cobot, that is suitable for low-payload, low-speed applications.
The OSPA was redesigned in the first stage of the research. In particular, the new endcaps are 59% shorter than the previous design. This made the actuators more compact and increased their stroke-to-length ratio. Next, the OSPA fabrication process was significantly changed. The heating of the heat shrink tubing was changed from immersion in boiling water to heating with a heat gun, and a motorized stand with several assisting parts was developed. These changes improved the consistency of the fabrication, reduced the skills required, and improved the safety.
The joints of the arm and its structural components were designed next. The rotation of each joint is achieved by connecting multiple OSPAs to custom-made pulleys using cables and connecting a DC motor in parallel using a timing belt. Joint 2, the shoulder joint, had to produce the largest torque. This was accomplished by applying optimization methods to design a variable-radius pulley. The prototype arm utilized laser-cut acrylic and 3D printed components to keep its cost and weight low. Finally, after a simple pressure control system was developed, the prototype arm's performance was extensively tested. The joints' ranges of motion, velocities, accelerations, and blocked torques were tested at multiple pressures and motor currents, and the results are discussed. The thesis concludes with a summary of the research's achievements and limitations, and recommendations for future improvements to the robotic arm's design. / Thesis / Master of Applied Science (MASc) / This thesis presents the design, fabrication, and testing of a robotic arm that is inherently safe, lightweight and affordable. The arm's three joints are driven by novel actuators that each combine soft pneumatic actuators (powered by vacuum pressure) with a DC motor. The arm is suitable for low-payload, low-speed applications.
First, the pneumatic actuators were redesigned to make them more compact. Next, their fabrication process was changed to improve the consistency of the results, reduce the skills required, and improve the safety. The joints of the arm and its structural components were then designed. To produce the torque required for the shoulder joint, optimization methods were used to create a variable-radius pulley. The prototype arm utilized laser-cut acrylic and 3D-printed components to keep its cost and weight low. Finally, after a simple pressure control system was developed, the prototype arm’s performance was extensively tested.
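The exact optimization formulation for the variable-radius pulley is not given in the abstracts; a generic version of that kind of problem, choosing the pulley radius at each joint angle so that the cable tension from the vacuum actuators meets a required torque profile, might be posed as follows (the torque demand, tension value and bounds are illustrative placeholders, not the thesis's numbers):

```python
import numpy as np
from scipy.optimize import minimize

theta = np.linspace(0.0, np.pi / 2, 30)          # joint angles (rad)
required_torque = 6.0 + 4.0 * np.cos(theta)      # hypothetical demand (N*m)

# Cable tension available from the vacuum actuators, assumed roughly
# constant here (vacuum pressure times effective actuator area).
cable_tension = 80.0                             # N (illustrative)

def objective(radii):
    torque = cable_tension * radii               # torque = tension * radius
    tracking = np.sum((torque - required_torque) ** 2)
    smoothness = 100.0 * np.sum(np.diff(radii) ** 2)   # manufacturable profile
    return tracking + smoothness

r0 = np.full(theta.size, 0.08)                   # initial 80 mm radius guess
res = minimize(objective, r0, bounds=[(0.03, 0.15)] * theta.size)
print(res.x[:5])                                 # optimized radius profile (m)
```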
|
257 |
Modelo y desarrollo de un sistema de gestión óptima para una microrred empleando algoritmos bio-inspiradosÁguila León, Jesús 14 September 2023 (has links)
Tesis por compendio / [EN] Renewable energy sources (RES) allow for high disaggregation, making it possible to generate energy at the site where it is used. This favors a change in the structure of electrical grids, allowing a transition from a centralized generation scheme to a distributed one. However, RES are highly dependent on environmental conditions such as solar radiation, cloudiness and wind, among others, making a generation system based on renewable energy a challenge. Generation systems that integrate renewable energies must be able to establish control and energy management strategies to make efficient use of the energy generated and to meet the energy demand in the best possible way by combining more than one type of energy source and storage system. This scheme is known as a microgrid, which is capable of operating in isolation from the main electrical grid or interconnected with it.
Currently, the study, development, and implementation of energy management systems (EMS) for hybrid microgrids are of interest in order to increase their efficiency, reliability, and reduce installation, operation, and maintenance costs. Multiple strategies have been tested, including rule-based EMS, comparative algorithms, classical controllers, and in recent years, the integration of bio-inspired optimization algorithms and artificial intelligence. These algorithms have shown to be an interesting alternative to classical techniques for solving optimization and control problems in various engineering problems, although their application in the field of microgrids is still under study. Bio-inspired algorithms are based on mathematically imitating the mechanisms and strategies that Nature has implemented over millions of years to achieve balance in its operation, for example, by imitating how birds fly in flocks in search of food, or how ants and wolves follow patterns to search for food, or how species carry out crossing mechanisms in order to improve their breed and make them more suitable for survival; in other words, they are based on how Nature optimizes its resources to prosper. Therefore, an analogy can be made with artificial systems for improving controllers and designing systems in microgrids.
In this research work, the model and development of an optimal management system for a microgrid using bio-inspired algorithms is presented with the aim of improving its performance, from the primary control, with the improvement of the power converters, to the tertiary control, with the energy transactions of the microgrid. Various algorithms are explored and their performance is evaluated. The results of the different stages of this research are reflected in four scientific publications that are detailed in Chapter 2 of this document, where the methodology and the main results and findings of each of them are explained. / The authors wish to acknowledge the National Council of Science and Technology of Mexico (CONACYT) for funding this work through Ph.D. scholarship number 486670. The authors would also like to thank the Institute of Energy Engineering of the Polytechnic University of Valencia, Spain, and the Department of Water and Energy Studies of the University of Guadalajara, Mexico, for all their support and collaboration. This study has also been supported by the Food and Agriculture Organization of the United Nations through the project "Design of a Hybrid Renewable Microgrid System". / Águila León, J. (2023). Modelo y desarrollo de un sistema de gestión óptima para una microrred empleando algoritmos bio-inspirados [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/196747 / Compendio
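The specific algorithms and controllers are detailed in the compendium's publications; as a generic illustration of the kind of bio-inspired optimization involved, a minimal particle swarm optimization loop tuning two controller gains against a placeholder cost function might look like this (all numbers and the cost model are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gains):
    """Placeholder for a microgrid performance metric, e.g. the tracking
    error of a power converter or the operating cost of a dispatch plan."""
    kp, ki = gains
    return (kp - 1.2) ** 2 + (ki - 0.4) ** 2 + 0.1 * abs(kp * ki)

# Standard PSO update: particles are attracted to their personal best
# and to the best solution found by the whole swarm.
n, dims, iters = 20, 2, 50
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 3.0, (n, dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best gains:", gbest, "cost:", cost(gbest))
```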
|
258 |
Optimal navigation, control and simulation of electrified and unmanned ground vehicles with bio-inspired and optimization approachesTaoudi, Amine 13 August 2024 (has links) (PDF)
In recent years, significant progress has been made in autonomous robotics and the electrification of transportation, highlighting the growing importance of automation in daily life. Ensuring the safety and sustainability of automated systems necessitates the integration of intelligent algorithms capable of making astute decisions in uncertain circumstances. Autonomous robots possess considerable potential for efficiently performing intricate tasks, but this potential can only be unlocked through intelligent algorithms. Moreover, enhancing the energy efficiency of transportation systems yields extensive benefits for the environment, economy, and society at large. Addressing the urgent challenges of climate change and resource depletion necessitates prioritizing energy efficiency in transportation to construct a more resilient and equitable future. This research delves into the development of bio-inspired neural dynamics, nature-inspired swarm intelligence, fuzzy logic, heuristic algorithms, and optimization techniques for optimal control and navigation of electrified and unmanned ground vehicles. Drawing inspiration from biological systems, this research aims to enhance the performance of robots in dynamic and unstructured environments. The approach encompasses a hybrid bio-inspired method, leveraging the mathematical model of a biological neural system's membrane to facilitate smooth trajectory tracking and bounded velocities for a differential drive robot. Additionally, integration of a Leader-Slime Mold Algorithm (L-SMA) for global path optimization and a modified velocity obstacle (MVO) for local motion planning is pursued. A heuristic algorithm is also devised to enhance decision-making in uncertain and dynamic environments by coordinating actions among the L-SMA path planner, the MVO local motion planner, and the enhanced bio-inspired tracking controller. Furthermore, a real-time optimal predictive controller is proposed to address the energy management challenges of electrified vehicles while improving driveability and comfort. This predictive controller employs a linear parameter-varying model of an electrified vehicle, a custom-designed adaptive cost function, and fuzzy logic to adapt a subset of cost function weights. The integration of fuzzy logic and the adaptive predictive controller yields a convex optimization problem solved in real-time using an active-set solver. To further enhance the energy efficiency of the electrified vehicle, a particle swarm optimization enhanced model predictive controller is suggested as an adaptive cruise controller with superior energy efficiency and safety in vehicle-following scenarios. Through these integrated approaches, the aim is to advance the capabilities of autonomous robotics and electrified transportation systems, thereby contributing to safer, more efficient, and sustainable mobility solutions.
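One building block mentioned above, bio-inspired neural dynamics for smooth tracking, is commonly based on a shunting membrane model whose state remains bounded; a rough sketch of that idea applied to a tracking error signal is shown below (the parameter values and error signal are illustrative, not the dissertation's controller):

```python
import numpy as np

# Shunting (membrane-like) dynamics: the state x stays bounded in [-D, B]
# and responds smoothly to the excitatory/inhibitory parts of the error.
A, B, D = 2.0, 1.0, 1.0          # decay rate and upper/lower bounds
dt = 0.01
t = np.arange(0.0, 5.0, dt)

# Hypothetical tracking error, e.g. a sudden step in the desired heading.
error = np.where(t < 2.5, 0.8, -0.6)

x = 0.0
smoothed = []
for e in error:
    excit = max(e, 0.0)          # excitatory input
    inhib = max(-e, 0.0)         # inhibitory input
    dx = -A * x + (B - x) * excit - (D + x) * inhib
    x += dt * dx
    smoothed.append(x)

# 'smoothed' can serve as a bounded, jump-free velocity command in place of
# the raw error, which is the role such dynamics play in tracking control.
print(min(smoothed), max(smoothed))
```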
|
259 |
Bio-inspired structured composites for load-bearing bone graft substitutionGalea, Laetitia 21 May 2015 (has links) (PDF)
Natural composites, in particular nacre, often combine high strength and toughness thanks to highly ordered architectures and controlled geometries of the reinforcement components. However, combining strength, toughness and resorbability in synthetic materials remains a challenge, in particular in the field of bone graft substitutes. In the present study, calcium phosphate (CaP) based materials with designed architectures inspired by natural composite materials were achieved. CaP platelets obtained by precipitation in an organic medium were first aligned in chitosan matrices by solvent casting under ambient conditions. Efficient strengthening was obtained with 15 vol-% ceramic, reaching cortical bone strength (150 MPa) while preserving good ductility (5 % deformation). In a weak magnetic field, a high degree of spatial ordering without percolation was maintained up to 20 vol-%. With directional freezing, good alignment of the platelets could be pushed up to 50 vol-%. In parallel, in situ recrystallization of CaP blocks under hydrothermal conditions led to hierarchical structures. The strength and the work-of-fracture were enhanced (by 300%) thanks to a change of failure mode.
|
260 |
Pattern recognition with spiking neural networks and the ROLLS low-power online learning neuromorphic processorTernstedt, Andreas January 2017 (has links)
Online monitoring applications requiring advanced pattern recognition capabilities implemented in resource-constrained wireless sensor systems are challenging to construct using standard digital computers. An interesting alternative solution is to use a low-power neuromorphic processor like the ROLLS, with subthreshold mixed analog/digital circuits and online learning capabilities that approximate the behavior of real neurons and synapses. This requires that the monitoring algorithm be implemented with spiking neural networks, which in principle are efficient computational models for tasks such as pattern recognition. In this work, I investigate how spiking neural networks can be used as a pre-processing and feature learning system in a condition monitoring application where the vibration of a machine with healthy and faulty rolling-element bearings is considered. Pattern recognition with spiking neural networks is investigated using simulations with Brian -- a Python-based open source toolbox -- and an implementation is developed for the ROLLS neuromorphic processor. I analyze the learned feature-response properties of individual neurons. When pre-processing the input signals with a neuromorphic cochlea known as the AER-EAR system, the ROLLS chip learns to classify the resulting spike patterns with a training error of less than 1 %, at a combined power consumption of approximately 30 mW. Thus, the neuromorphic hardware system can potentially be realized in a resource-constrained wireless sensor for online monitoring applications. However, further work is needed for testing and cross-validation of the feature learning and pattern recognition networks.
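Brian is the simulator mentioned above; a minimal sketch of the kind of spiking network one might prototype with the brian2 package before mapping to the ROLLS hardware is shown below (the network sizes and the toy plasticity rule are illustrative, not the thesis's actual configuration):

```python
from brian2 import (NeuronGroup, PoissonGroup, Synapses, SpikeMonitor,
                    ms, Hz, run)

# Input layer: Poisson spike trains standing in for the spike patterns a
# silicon cochlea such as the AER-EAR would emit for a vibration signal.
inputs = PoissonGroup(32, rates=20 * Hz)

# Leaky integrate-and-fire output neurons, a crude stand-in for the
# ROLLS chip's analog neurons.
eqs = 'dv/dt = -v / (20*ms) : 1'
outputs = NeuronGroup(4, eqs, threshold='v > 1', reset='v = 0', method='exact')

# Plastic synapses: a simple potentiation-on-pre rule, far simpler than the
# ROLLS on-chip learning rule, just to show where feature learning happens.
syn = Synapses(inputs, outputs, model='w : 1', on_pre='v_post += w; w += 0.001')
syn.connect()               # all-to-all connectivity
syn.w = '0.05 + 0.1*rand()'

spikes = SpikeMonitor(outputs)
run(500 * ms)
print(spikes.count)         # spikes per output neuron after the run
```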
|