About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Simulation and Optimization of Mechanical Alloying Using the Event-Driven Method

Barahona, Javier. 30 November 2011.
Mechanical Alloying is a manufacturing process that produces alloys by cold welding of powders. Usually, a vial containing both the powder and steel balls is agitated. Due to impacts between the balls and between the balls and the vial, the powder is mechanically deformed, crushed, and mixed at the nano-scale. In this thesis, a numerical model is developed to simulate the dynamics of the vial and the grinding balls of the SPEX 8000 ball milling device, standard equipment in both industrial and academic investigations of ball milling. The numerical model is based on the Event-Driven Method, typically used to model granular flows. The method implemented is more efficient than the discrete element method previously used to study ball milling dynamics. The resulting numerical tool is useful for scale-up and optimization of mechanical alloying of various materials. An optimization study is presented for the SPEX 8000.
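To make the contrast with time-stepped methods concrete, here is a minimal sketch of an event-driven hard-sphere simulation, assuming frictionless, equal-mass balls moving in one dimension between two fixed walls. It illustrates the general method only and is not the model developed in the thesis: rather than integrating positions at fixed time steps, the loop computes the time of the next collision analytically and jumps straight to it.

```python
# Minimal event-driven hard-sphere sketch: 1D, frictionless, equal-mass
# balls between two fixed walls (illustrative only; not the thesis model).

def next_event(pos, vel, radius, length):
    """Return (dt, kind, i) for the earliest future collision event."""
    best = (float("inf"), "", -1)
    for i in range(len(pos)):
        if vel[i] > 0:                               # heading for right wall
            best = min(best, ((length - radius - pos[i]) / vel[i], "wall", i))
        elif vel[i] < 0:                             # heading for left wall
            best = min(best, ((radius - pos[i]) / vel[i], "wall", i))
    for i in range(len(pos) - 1):                    # neighbouring ball pairs
        closing = vel[i] - vel[i + 1]
        if closing > 0:
            gap = pos[i + 1] - pos[i] - 2 * radius
            best = min(best, (gap / closing, "pair", i))
    return best

def simulate(pos, vel, radius=0.5, length=10.0, t_end=20.0):
    t = 0.0
    while True:
        dt, kind, i = next_event(pos, vel, radius, length)
        if t + dt > t_end:
            break
        t += dt
        pos[:] = [p + v * dt for p, v in zip(pos, vel)]  # advance exactly
        if kind == "wall":
            vel[i] = -vel[i]                         # elastic bounce off wall
        else:
            vel[i], vel[i + 1] = vel[i + 1], vel[i]  # equal masses: swap
    return t

print(simulate([2.0, 6.0], [1.0, -1.0]))
```

Because positions between events evolve exactly, no computation is wasted on time steps where nothing happens, which is the efficiency advantage over the discrete element method that the abstract refers to.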
2

Scalable event-driven modelling architectures for neuromimetic hardware

Rast, Alexander Douglas. January 2011.
Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Dedicated hardware may thus be more suitable for executing them. Given that there is no clear consensus on the model of computation in the brain, model flexibility is at least as important a characteristic of neural hardware as is performance acceleration. The SpiNNaker chip is an example of the emerging 'neuromimetic' architecture, a universal platform that specialises the hardware for neural networks but allows flexibility in model choice. It integrates four key attributes (native parallelism, event-driven processing, incoherent memory and incremental reconfiguration) in a system combining an array of general-purpose processors with a configurable asynchronous interconnect. Making such a device usable in practice requires an environment for instantiating neural models on the chip that allows the user to focus on model characteristics rather than on hardware details. The central part of this system is a library of predesigned, 'drop-in' event-driven neural components that specify their implementation on SpiNNaker. Three exemplar models (two spiking networks and a multilayer perceptron network) illustrate techniques that provide a basis for the library and demonstrate a reference methodology that can be extended to support third-party library components, not only on SpiNNaker but on any configurable neuromimetic platform. Experiments demonstrate the capability of the library model to implement efficient on-chip neural networks, but also reveal important hardware limitations, particularly with respect to communications, that require careful design. The ultimate goal is the creation of a library-based development system that allows neural modellers to work in the high-level environment of their choice, using an automated tool chain to create the appropriate SpiNNaker instantiation. Such a system would enable the use of the hardware to explore abstractions of biological neurodynamics that underpin a functional model of neural computation.
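The flavour of such 'drop-in' event-driven components can be sketched with a leaky integrate-and-fire neuron that is updated only when a spike event arrives, decaying its state lazily over the silent interval. This is a generic illustration of event-driven neural processing, not an actual component from the SpiNNaker library; all names and constants are invented.

```python
# Sketch of event-driven neural processing: the neuron's state is touched
# only when an input event arrives (illustrative; not SpiNNaker library code).
import math

class LIFNeuron:
    """Leaky integrate-and-fire neuron driven purely by spike events."""
    def __init__(self, tau=20.0, v_thresh=1.0, v_reset=0.0):
        self.tau, self.v_thresh, self.v_reset = tau, v_thresh, v_reset
        self.v = 0.0            # membrane potential
        self.last_t = 0.0       # time of the last event handled

    def on_spike(self, t, weight):
        """Handle an input spike at time t (ms); return True if we fire."""
        # Decay the state lazily over the silent interval instead of
        # integrating at every clock tick (the event-driven idea).
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.v_thresh:     # threshold crossing emits an event
            self.v = self.v_reset
            return True                 # caller routes the output spike
        return False

neuron = LIFNeuron()
for t, w in [(1.0, 0.6), (2.0, 0.6), (40.0, 0.6)]:
    print(f"t={t}: fired={neuron.on_spike(t, w)}")
```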
3

Cooperative control for multi-agent persistent monitoring problems

Zhou, Nan. 4 June 2019.
In persistent monitoring tasks, cooperating mobile agents are used to monitor a dynamically changing environment that cannot be fully covered by a stationary team of agents. The exploration process leads to the discovery of various "points of interest" (targets) to be perpetually monitored. Through an optimal control approach, the first part of this dissertation shows that in a one-dimensional mission space the solution can be reduced to a simpler parametric problem. The behavior of agents under optimal control is described by a hybrid system which can be analyzed using Infinitesimal Perturbation Analysis (IPA) to obtain an on-line solution. IPA allows the modeling of virtually arbitrary stochastic effects in target uncertainty, and its event-driven nature renders the solution scalable in the number of events rather than the state space. The second part of this work extends the results of the one-dimensional persistent monitoring problem to a two-dimensional space with constrained agent mobility. Under a general graph setting, the properties of the one-dimensional optimal control solution are largely inherited. The solution involves the design of agent trajectories defined by both the sequence of nodes to be visited and the amount of time spent at each node. A class of distributed threshold-based parametric controllers is proposed to reduce the computational complexity. These parameters are optimized through an event-driven IPA gradient-based algorithm and yield optimal controllers within this family of threshold-based policies. The performance of the threshold-based parametric controller is close to that of the optimal controller derived through dynamic programming, while its computational complexity is smaller by orders of magnitude. Although effective, the aforementioned optimal controls rest on the assumption that agents are all connected via a centralized controller, which is energy-consuming and unreliable in adversarial environments. The third part of this work extends the previous controls by developing decentralized controllers which distribute functionality to the agents so that each one acts upon local information and sparse communication with neighbors. The complexity of decentralization for persistent monitoring problems is significant given agent mobility and the overall time-varying graph topology. Conditions are identified and a decentralized framework is proposed under which the centralized solution can be exactly recovered in a decentralized event-driven manner based on local information, except for one event that requires communication from a non-neighbor agent.
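The class of threshold-based policies can be illustrated with a toy single-agent version, assuming a linear uncertainty model and zero travel time between nodes: the agent dwells at a target until its uncertainty drops below a threshold, then hops to the next. The thresholds theta are the parameters an IPA gradient algorithm would tune; this sketch is illustrative, not the dissertation's controller.

```python
# Toy threshold-based patrol policy (illustrative assumptions: linear
# uncertainty dynamics, zero travel time; not the dissertation's model).

def patrol(uncertainty, theta, grow=1.0, shrink=5.0, dt=0.01, t_end=10.0):
    """Cycle over targets, dwelling at each until its uncertainty
    falls below the threshold theta[i], then hopping to the next."""
    current, t, visits = 0, 0.0, []
    while t < t_end:
        for i in range(len(uncertainty)):
            # Uncertainty shrinks at the monitored target, grows elsewhere.
            rate = -shrink if i == current else grow
            uncertainty[i] = max(0.0, uncertainty[i] + rate * dt)
        if uncertainty[current] <= theta[current]:   # event: threshold hit
            visits.append((round(t, 2), current))
            current = (current + 1) % len(uncertainty)
        t += dt
    return visits

print(patrol([4.0, 4.0, 4.0], theta=[0.5, 0.5, 0.5])[:6])
```

Because the controller changes state only at threshold-crossing events, gradient information can be accumulated per event rather than per state, which is what makes the IPA approach scale with the number of events.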
4

Simulations of Organic Solar Cells with an Event-Driven Monte Carlo Algorithm

Robbiano, Vincent P. 15 August 2011.
No description available.
5

Comparing Monolithic and Event-Driven Architecture when Designing Large-scale Systems

Eder, Felix. January 2021.
The way the structure of systems and programs is designed is very important. When working with smaller groups of systems, the chosen architecture does not greatly affect performance and efficiency, but as these systems increase in size and complexity, the choice of architecture becomes a very important one. Problems that can arise as software complexity scales up include waiting for data accesses, long sequential executions and potential loss of data. There is no single optimal software architecture, as there are countless ways to design programs, but it is interesting to look at which architectures perform best in terms of execution time when handling multiple bigger systems and large amounts of data. In this thesis, a case called "The Income Deduction" is implemented in a monolithic and an event-driven architectural style and then put through three different scenarios. The monolithic architecture was chosen for its simplicity and popularity when constructing simpler programs and systems, while the event-driven architecture was chosen for its theoretical benefit of removing sequential communication between systems and thus reducing the time systems spend waiting for each other to respond. The main research question is what the main benefits and drawbacks are when building larger systems in an event-driven architectural style. Additional research questions include how the architecture affects the organisation's efficiency and cooperation between different teams, as well as how the security of data is handled. The two implementations were put through three different scenarios within the case, measuring execution time, number of HTTP requests sent, database accesses and events emitted. The results show that the event-driven architecture performed 9.4% slower in the first scenario and 0.5% slower in the second. In the third scenario the event-driven architecture performed 49.0% faster than the monolithic implementation, finishing the scenario in less than half the time. The monolithic implementation generally performed well in the simpler scenarios 1 and 2, where the systems had fewer integrations with each other; in these cases it is the preferred solution, since it is easier to design and implement. The event-driven solution performed much better in the more complex scenario 3, where many systems and integrations were involved, since it could remove certain connections between systems. Lastly, this thesis also discusses the sustainability and ethics of the study, as well as the limitations of the research and potential future work.
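To make the architectural difference concrete, here is a toy in-process event bus in the publish/subscribe style that event-driven architectures build on. It is a sketch only: a real deployment would use a message broker, and all names below are invented for illustration, not taken from the thesis implementation.

```python
# Toy event bus: instead of service A calling B and C sequentially over
# HTTP and waiting for each response, A publishes one event and any number
# of subscribers react independently (illustrative names only).
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The publisher neither knows nor waits for its consumers; in
        # production this dispatch would go through a broker queue.
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("income_reported", lambda e: print("tax service saw", e))
bus.subscribe("income_reported", lambda e: print("audit service saw", e))
bus.publish("income_reported", {"person": "1234", "amount": 52000})
```

Decoupling publisher from consumers is what removes the sequential waiting chains in the complex scenario, at the cost of the extra indirection that made the simpler scenarios slightly slower.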
6

Unraveling Microservices : A study on microservices and its complexity

Romin, Philip. January 2020.
Microservices is one of the most commonly used buzzwords in the systems architecture industry and is being adopted by several of the world's largest technology companies, such as Netflix, Uber and Amazon. The architecture, which embraces splitting up your system into smaller independent units, is an extension of the service-oriented architecture and an opponent of the monolithic architecture. Its buzzword status and promises of extreme scalability have spiked interest in microservices, but unlike the relatively simple monolithic architecture, the complexity of microservices creates a new set of obstacles. This work sheds light on these issues and implements solutions for some of the most frequent problems using a case study. The study shows that while microservices can help reduce the inner complexity of a system, they greatly increase the outer complexity and create the need for a variety of tools aimed at distributed systems. It also concludes that communication and data storage are two of the most frequently occurring issues when developing microservices, the most difficult being how you reason about and structure your data, especially for efficient queries across microservices.
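The cross-service query problem the study highlights can be made concrete with a toy API-composition example: with one database per service there is no SQL join, so a query spanning services must be assembled in application code. The services, endpoints and fields below are hypothetical stand-ins, not from the thesis.

```python
# Toy API-composition sketch of a query spanning two microservices, each
# owning its own data store (service calls stubbed; all names hypothetical).

def order_service_list_orders(user_id):
    """Stand-in for GET /orders?user_id=... on an order service."""
    orders = [{"id": 1, "user_id": 7, "total": 250},
              {"id": 2, "user_id": 7, "total": 120}]
    return [o for o in orders if o["user_id"] == user_id]

def user_service_get_user(user_id):
    """Stand-in for GET /users/{id} on a user service."""
    return {7: {"id": 7, "name": "Ada"}}.get(user_id)

def orders_with_user_name(user_id):
    # The 'join' happens in application code, one network hop per service:
    # exactly the complexity a monolith's single-database JOIN avoids.
    user = user_service_get_user(user_id)
    return [{"order": o["id"], "total": o["total"], "name": user["name"]}
            for o in order_service_list_orders(user_id)]

print(orders_with_user_name(7))
```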
7

DESCRIPTION AND ANALYSIS OF A FLEXIBLE HARDWARE ARCHITECTURE FOR EVENT-DRIVEN DISTRIBUTED SENSOR NETWORK NODES

Davis, Jesse; Kyker, Ron; Berry, Nina. October 2003.
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
A particular engineering aspect of distributed sensor networks that has not received adequate attention is the system-level hardware architecture of the individual nodes of the network. A novel hardware architecture based on the idea of task-specific modular computing is proposed to provide both the high flexibility and the low power consumption required for distributed sensing solutions. The power consumption of the architecture is mathematically analyzed against a traditional approach, and guidelines are developed for application scenarios that would benefit from this new design.
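The kind of duty-cycle power analysis the abstract alludes to can be sketched generically: if each task-specific module can be powered down when idle, the node's average draw is the duty-cycle-weighted sum of module powers. The numbers below are invented for illustration and are not taken from the paper.

```python
# Generic duty-cycle power model (illustrative numbers, not the paper's):
# average draw = sum of (active power x fraction of time active).

def avg_power_mw(loads):
    """loads: list of (active_power_mW, duty_cycle) pairs."""
    return sum(p * d for p, d in loads)

# Hypothetical modular node: each module sleeps when its task is idle.
modular = avg_power_mw([(8.0, 0.02),    # sensor module, 2% duty cycle
                        (15.0, 0.01),   # radio module, 1% duty cycle
                        (0.05, 1.0)])   # always-on wakeup logic
# Hypothetical traditional node: one processor stays awake for everything.
monolithic = avg_power_mw([(30.0, 1.0)])
print(f"modular ~{modular:.2f} mW vs monolithic ~{monolithic:.2f} mW")
```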
