11

A Robust Architecture For Human Language Technology Systems

Stanley, Theban 05 August 2006
Early human language technology systems were designed in a monolithic fashion. As these systems became more complex, this design became untenable. In its place, the concept of distributed processing evolved, wherein the monolithic structure was decomposed into a number of functional components that interact through a common protocol. This distributed framework was readily accepted by the research community and has been the cornerstone for advances in cutting-edge human language technology prototype systems. The Defense Advanced Research Projects Agency (DARPA) Communicator program has been highly successful in implementing this approach. The program has fueled the design and development of impressive human language technology applications. Its distributed framework has offered numerous benefits to the research community, including reduced prototype development time, sharing of components across sites, and provision of a standard evaluation platform. It has also enabled the development of client-server applications with complex inter-process communication between modules. However, this latter feature, though beneficial, introduces complexities that reduce overall system robustness to failure. In addition, the ability to handle multiple users and multiple applications from a common interface is not innately supported. This thesis describes enhancements to the original Communicator architecture that address the robustness issues and provide a multi-user, multi-application environment by enabling automated server startup, error detection and correction. Extensive experimentation and analysis were performed to measure the improvements in robustness due to these enhancements to the DARPA architecture. A 7.2% improvement in robustness was achieved on the address querying task, the most complex task in the human language technology system.
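To make the automated server startup and failure recovery idea concrete, here is a minimal watchdog sketch in Python. The server names, entry-point scripts, and polling strategy are illustrative assumptions; the actual Communicator hub/server protocol and the thesis's error-correction logic are not reproduced here.

```python
# A minimal watchdog sketch, assuming hypothetical server names and entry-point
# scripts: each server is started automatically, polled for liveness, and
# restarted when it exits unexpectedly.
import subprocess
import time

SERVERS = {
    "recognizer": ["python", "recognizer_server.py"],      # assumed entry points
    "parser": ["python", "parser_server.py"],
    "dialog_manager": ["python", "dialog_manager_server.py"],
}

def start(name, cmd):
    print(f"[watchdog] starting {name}")
    return subprocess.Popen(cmd)

def watchdog(poll_interval=2.0):
    procs = {name: start(name, cmd) for name, cmd in SERVERS.items()}
    while True:
        for name, proc in procs.items():
            if proc.poll() is not None:                     # exit code set: failure detected
                print(f"[watchdog] {name} exited with {proc.returncode}; restarting")
                procs[name] = start(name, SERVERS[name])    # recovery: automated restart
        time.sleep(poll_interval)

if __name__ == "__main__":
    watchdog()
```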
12

LOW POWER FPGA DESIGN TECHNIQUES FOR EMBEDDED SYSTEMS

TIWARI, ANURAG 31 May 2005
No description available.
13

A Low-latency Consensus Algorithm for Geographically Distributed Systems

Arun, Balaji 15 May 2017
This thesis presents Caesar, a novel multi-leader Generalized Consensus protocol for geographically replicated systems. Caesar is able to achieve near-perfect availability, provide high performance (low latency and high throughput) compared to the existing state of the art, and tolerate replica failures. Recently, a number of state-of-the-art consensus protocols that implement the Generalized Consensus definition have been proposed. However, the major limitation of these existing approaches is the significant performance degradation when the application workload produces conflicting requests. Caesar's main goal is to overcome this limitation by changing the way a fast decision is taken: its ordering protocol does not reject a fast decision for a client request if a quorum of nodes reply with different dependency sets for that request. It only switches to a slow decision if there is no chance to agree on the proposed order for that request. Caesar achieves this using a combination of wait conditions and logical timestamping. The effectiveness of Caesar is demonstrated through an evaluation study performed on Amazon's EC2 infrastructure using 5 geo-replicated sites. Caesar outperforms other multi-leader competitors (e.g., EPaxos) by as much as 1.7x in the presence of 30% conflicting requests, and single-leader ones (e.g., Multi-Paxos) by as much as 3.5x. Unlike existing protocols, it is also resilient under heavy client loads.
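The fast/slow decision rule can be pictured with a toy Python sketch. This is not the Caesar protocol itself: the reply structure, rejection flag, and quorum size below are assumptions chosen only to show that differing dependency sets alone do not force a slow decision.

```python
# A toy decision rule, not the Caesar protocol itself: a command is committed on
# the fast path whenever no replica in the quorum rejected the proposed order,
# even if the replicas reported different dependency sets.
from dataclasses import dataclass

@dataclass(frozen=True)
class Reply:
    deps: frozenset      # commands this replica observed as conflicting
    rejected: bool       # True if the replica could not accept the proposed order

def decide(replies, fast_quorum_size):
    """Return ('fast', deps) or ('slow', deps) for a single command."""
    assert len(replies) >= fast_quorum_size, "not enough replies for a quorum"
    union_deps = frozenset().union(*(r.deps for r in replies))
    if any(r.rejected for r in replies):
        return "slow", union_deps        # no chance to agree on the proposed order
    return "fast", union_deps            # differing deps alone do not force a slow path

# Two replicas report different dependencies, but neither rejects: fast decision.
print(decide([Reply(frozenset({"a"}), False), Reply(frozenset({"b"}), False)], 2))
```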
14

An Embedded Software Design to Help Asthma Patients Inhale Medication Correctly / En inbäddad programvarudesign för att hjälpa astmapatienter andas in medicin korrekt

Lei, Yuchen January 2022
Managing respiratory diseases can be hard for many patients. Patients usually use an inhaler to administer medication on a regular basis, and even though the inhaler guidelines are well accepted, most patients make mistakes. In recent years, smart inhalers with sensors have shown great potential for guiding daily inhaler use and for better understanding the diseases. The KTH MedTech startup Andning Med AB specializes in developing a smart add-on hardware device for the inhaler. This thesis work continues the prototyping of the embedded software for the add-on device. The main goal is to develop robust software for the hardware device that guides inhaler use in real time and collects and manages the inhaler data. To approach the problem, I use finite-state machine modelling and an object-oriented programming mindset. After development and testing, all the designed functionality was achieved: the user can be visually guided by the device, and the inhaler data can be correctly collected and uploaded to the mobile device. The thesis can serve as a basis for further embedded software development for a device that will eventually reach the smart inhaler market, and as a reference for similar IoT device development.
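As one way to picture the finite-state machine approach mentioned above, here is a minimal Python sketch. The states, sensor events, and guidance messages are invented for illustration; the actual firmware states and sensors of the add-on device are not specified in the abstract.

```python
# A minimal finite-state-machine sketch under assumed states, sensor events and
# guidance messages.
IDLE, READY, INHALING, HOLDING, DONE = "IDLE", "READY", "INHALING", "HOLDING", "DONE"

TRANSITIONS = {
    (IDLE, "shake_detected"): (READY, "Exhale fully, then start inhaling slowly"),
    (READY, "flow_detected"): (INHALING, "Keep inhaling steadily"),
    (INHALING, "flow_stopped"): (HOLDING, "Hold your breath for about 10 seconds"),
    (HOLDING, "timer_elapsed"): (DONE, "Dose complete; data will be uploaded"),
}

class InhalerFSM:
    def __init__(self):
        self.state = IDLE
        self.log = []                                  # collected inhalation data

    def on_event(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            return f"Unexpected '{event}' in state {self.state}; please retry"
        self.state, guidance = TRANSITIONS[key]
        self.log.append((event, self.state))
        return guidance

fsm = InhalerFSM()
for ev in ["shake_detected", "flow_detected", "flow_stopped", "timer_elapsed"]:
    print(fsm.on_event(ev))
```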
15

Mimicking human player strategies in fighting games using game artificial intelligence techniques

Saini, Simardeep S. January 2014
Fighting videogames (also known as fighting games) are ever growing in popularity and accessibility. The isolated console experiences of 20th-century gaming have been replaced by online gaming services that allow gamers to play with one another from almost anywhere in the world. This gives rise to competitive gaming on a global scale, enabling players to experience fresh play styles and challenges by playing someone new. Fighting games can typically be played either as a single-player experience or against another human player, whether via a network or a traditional multiplayer experience. However, there are two issues with these approaches. First, the single-player offering in many fighting games is regarded as being simplistic in design, making the moves by the computer predictable. Secondly, while playing against other human players can be more varied and challenging, this may not always be achievable due to the logistics involved in setting up such a bout. Game Artificial Intelligence could provide a solution to both of these issues, allowing a human player's strategy to be learned and then mimicked by the AI fighter. In this thesis, game AI techniques have been researched to provide a means of mimicking human player strategies in strategic fighting games with multiple parameters. Various techniques and their current usages are surveyed, informing the design of two separate solutions to this problem. The first solution relies solely on k-nearest neighbour classification to identify which move should be executed based on the in-game parameters, resulting in decisions being made at the operational level and fed from the bottom up to the strategic level. The second solution utilises a number of existing Artificial Intelligence techniques, including data-driven finite state machines, hierarchical clustering and k-nearest neighbour classification, in an architecture that makes decisions at the strategic level and feeds them from the top down to the operational level, resulting in the execution of moves. This design is underpinned by a novel algorithm to aid the mimicking process, which is used to identify patterns and strategies within data collated during bouts between two human players. Both solutions are evaluated quantitatively and qualitatively. A conclusion summarising the findings, as well as future work, is provided. The conclusions highlight the fact that both solutions are proficient in mimicking human strategies, but each has its own strengths depending on the type of strategy played out by the human. More structured, methodical strategies are better mimicked by the data-driven finite state machine hybrid architecture, whereas the k-nearest neighbour approach is better suited to tactical approaches, or even random button bashing that does not always conform to a pre-defined strategy.
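A small Python sketch of the first, k-nearest-neighbour style of move selection follows. The feature set (distance to opponent, both health values) and the recorded examples are assumptions for illustration, not the thesis's actual game parameters or data.

```python
# An illustrative k-nearest-neighbour move selector: recorded (game-state, move)
# pairs from a human player are compared with the current game state and the
# majority move among the k closest examples is chosen.
import math
from collections import Counter

# Each example: (distance_to_opponent, own_health, opponent_health) -> move
RECORDED = [
    ((0.9, 80, 70), "fireball"),
    ((0.2, 60, 90), "throw"),
    ((0.3, 50, 40), "uppercut"),
    ((0.8, 30, 90), "jump_back"),
    ((0.25, 70, 65), "throw"),
]

def knn_move(state, k=3):
    nearest = sorted((math.dist(state, features), move) for features, move in RECORDED)[:k]
    votes = Counter(move for _, move in nearest)
    return votes.most_common(1)[0][0]

print(knn_move((0.28, 65, 70)))   # 'throw': close-range examples dominate the vote
```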
16

Online Deception Detection Using BDI Agents

Merritts, Richard Alan 01 January 2013
This research has two facets within separate research areas. The research area of Belief, Desire and Intention (BDI) agent capability development was extended, and deception detection research was advanced with the development of automation using BDI agents. BDI agents perform tasks automatically and autonomously, and this study used these characteristics to automate deception detection with limited intervention by human users. This is a useful research area, resulting in a capability general enough to have practical application for private individuals, investigators, organizations and others. The need for this research is grounded in the fact that humans are not very effective at detecting deception, whether in written or spoken form. This research extends existing deception detection work, in which typical tools are labor intensive and require the text in question to be extracted and then ingested into a deception detection tool. A neural network capability module was incorporated to lend the resulting prototype machine learning attributes. The prototype developed as a result of this research was able to classify online data as either "deceptive" or "not deceptive" with 85% accuracy. The false discovery rate for "deceptive" online data entries was 20%, while the false discovery rate for "not deceptive" was 10%. The system showed stability during test runs: no computer crashes or other anomalous system behavior was observed during the testing phase. The prototype successfully interacted with an online data communications server database and processed data using neural network input vector generation algorithms within seconds.
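To illustrate the general shape of such a pipeline (not the thesis's BDI agent plans or trained network), here is a hypothetical Python sketch: text from an online source is converted to an input vector and scored by a tiny neural-style classifier. The cue words, weights, and threshold are invented.

```python
# A hypothetical pipeline sketch: text is mapped to an input vector and scored by
# a tiny neural-style classifier. All features and weights are invented.
import math

CUE_WORDS = {"never", "honestly", "swear", "definitely"}     # assumed deception cues

def input_vector(text):
    words = text.lower().split()
    return [
        len(words),                                          # message length
        sum(w in CUE_WORDS for w in words),                  # cue-word count
        sum(w in {"i", "me", "my"} for w in words),          # first-person pronouns
    ]

def classify(vec, weights=(-0.01, 1.2, -0.4), bias=-0.3):
    score = 1.0 / (1.0 + math.exp(-(bias + sum(w * x for w, x in zip(weights, vec)))))
    return ("deceptive" if score > 0.5 else "not deceptive", round(score, 2))

print(classify(input_vector("Honestly I swear I never touched the account")))
```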
17

Generování kódu ze stavového modelu UML / Code Generation from UML State Machine Description

Píš, Ľuboš January 2012
This paper discusses the implementation of a suitable algorithm for code generation from UML state machine diagrams. The work includes an analysis of state machines as described in UML, followed by a description of the input file format, the proposed design of the generator, and the generator itself. The generator was fully implemented as part of this work, along with the other functional requirements. The thesis closes with a description of the resulting implementation.
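The kind of transformation such a generator performs can be illustrated with a short Python sketch: a state-machine description (a plain dict standing in for the parsed input file) is turned into source code for a transition function. The input format and the generated code style here are assumptions, not the thesis's file format or target language.

```python
# A short illustration of generating a transition function from a state-machine
# description; the dict below stands in for the parsed input file.
MACHINE = {
    "initial": "Locked",
    "transitions": [
        ("Locked", "coin", "Unlocked"),
        ("Unlocked", "push", "Locked"),
    ],
}

def generate(machine):
    lines = ["def step(state, event):"]
    for src, event, dst in machine["transitions"]:
        lines.append(f"    if state == {src!r} and event == {event!r}:")
        lines.append(f"        return {dst!r}")
    lines.append("    return state  # ignore events with no matching transition")
    return "\n".join(lines)

code = generate(MACHINE)
print(code)                       # show the generated source
exec(code)                        # define step() from the generated source
print(step("Locked", "coin"))     # -> 'Unlocked'
```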
18

Uma estratégia para a minimização de máquinas de estados finitos parciais / An approach to incompletely specified finite state machine minimization

Alberto, Alex Donizeti Betez 22 April 2009
Finite State Machines, besides their many other applications, are widely used in Software Engineering to model system specifications. In these models, designers may inadvertently include redundant states, i.e., states which exhibit the same input/output behavior. Eliminating such states brings benefits to the activities that use the model, reducing complexity and requiring fewer physical resources in implementations. The process of eliminating redundant states is known as minimization and can be accomplished in polynomial time for completely specified machines. On the other hand, the minimization of partially specified machines, i.e., machines which have undefined behavior for some inputs, can only be done in polynomial time when non-deterministic approaches are applied; it is a known NP-Complete problem. This work presents a deterministic approach to minimize incompletely specified Finite State Machines, using heuristics and optimizations to accomplish the task more efficiently. In order to measure the performance improvements, experiments were performed in which the running time of an implementation of the proposed method was observed, along with the running times of implementations of two other known methods. The results revealed a significant performance advantage for the proposed approach.
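A simplified Python sketch of the compatibility notion at the heart of this kind of minimization follows: two states of an incompletely specified machine are compatible if their outputs agree and their successors remain compatible wherever both are defined. The tiny machine and the fixpoint-style check are illustrative only and do not reproduce the thesis's heuristics or optimizations.

```python
# A simplified compatibility check for an incompletely specified machine.
from itertools import combinations

# state -> input -> (next_state, output); missing inputs are unspecified
FSM = {
    "A": {"0": ("B", 0), "1": ("C", 1)},
    "B": {"0": ("A", 1)},                 # only input "0" is specified
    "C": {"0": ("B", 0), "1": ("C", 1)},
}

def compatible_pairs(fsm):
    pairs = set(combinations(sorted(fsm), 2))
    changed = True
    while changed:                        # remove incompatible pairs until a fixpoint
        changed = False
        for s, t in list(pairs):
            for inp in set(fsm[s]) & set(fsm[t]):
                (ns, out_s), (nt, out_t) = fsm[s][inp], fsm[t][inp]
                successors = tuple(sorted((ns, nt)))
                if out_s != out_t or (ns != nt and successors not in pairs):
                    pairs.discard((s, t))
                    changed = True
                    break
    return pairs

print(compatible_pairs(FSM))   # {('A', 'C')}: A and C can be merged; B conflicts on "0"
```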
19

Aspects of learning within networks of spiking neurons

Carnell, Andrew Robert January 2008
Spiking neural networks have, in recent years, become a popular tool for investigating the properties and computational performance of large, massively connected networks of neurons. Equally interesting is the investigation of the potential computational power of individual spiking neurons. An overview is provided of current and relevant research into the Liquid State Machine, biologically inspired artificial STDP learning mechanisms, and the investigation of aspects of the computational power of artificial, recurrent networks of spiking neurons. First, it is shown that, using simple structures of spiking Leaky Integrate and Fire (LIF) neurons, a network n(P) can be built to perform any program P that can be performed by a general parallel programming language. Next, a form of STDP learning with normalisation is developed, referred to as STDP + N learning. The effects of applying this STDP + N learning within recurrently connected networks of neurons are then investigated. It is shown experimentally that, in very specific circumstances, anti-Hebbian and Hebbian STDP learning may be considered to be approximately equivalent processes. A metric is then developed that can be used to measure the distance between any two spike trains. The metric is then used, along with STDP + N learning, in an experiment to examine the capacity of a single spiking neuron that receives multiple input spike trains to simultaneously learn many temporally precise input/output spike train associations. The STDP + N learning is further modified for use in recurrent networks of spiking neurons, to give the STDP + N Type 2 learning methodology. An experiment is devised which demonstrates that the Type 2 method of applying learning to the synapses of a recurrent network (effectively a randomly shifting locality of learning) can enable the network to learn firing patterns that the typical application of learning is unable to learn. The resulting networks could, in theory, be used to create the simple structures discussed in the first chapter of original work.
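A hedged Python sketch of an STDP update followed by weight normalisation is given below, in the spirit of the STDP + N rule described above. The time constant, amplitudes, and normalisation scheme are assumptions for illustration and do not reproduce the exact rule developed in the thesis.

```python
# A sketch of an STDP update followed by weight normalisation, with assumed
# amplitudes, time constant and normalisation scheme.
import math

A_PLUS, A_MINUS, TAU = 0.05, 0.055, 20.0     # assumed amplitudes and time constant (ms)

def stdp_dw(dt):
    """Weight change for dt = t_post - t_pre (ms): potentiate when pre precedes post."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

def stdp_n(weights, spike_timing_diffs):
    """Apply STDP per synapse, then rescale so the total weight is preserved."""
    total = sum(weights)
    updated = [max(0.0, w + stdp_dw(dt)) for w, dt in zip(weights, spike_timing_diffs)]
    scale = total / sum(updated)
    return [w * scale for w in updated]

# Three synapses; positive dt means the presynaptic spike preceded the postsynaptic one.
print(stdp_n([0.5, 0.5, 0.5], [5.0, -5.0, 40.0]))
```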
20

Scalable state machine replication / Replicação escalável de máquina de estados

Bezerra, Carlos Eduardo Benevides January 2016
Redundancy provides fault tolerance. A service can run on multiple servers that replicate each other, in order to provide service availability even in the case of crashes. A way to implement such a replicated service is by using techniques like state machine replication (SMR). SMR provides fault tolerance while being linearizable, that is, clients cannot distinguish the behaviour of the replicated system from that of a single-site, unreplicated one. However, having a fully replicated, linearizable system comes at a cost, namely scalability. By scalability we mean that adding servers will always increase the maximum system throughput, at least for some workloads. Even with a careful setup and using optimizations that keep servers from taking unnecessary redundant actions, at some point the throughput of a system replicated with SMR cannot be increased by adding servers; in fact, adding replicas may even degrade performance. A way to achieve scalability is by partitioning the service state and then allowing partitions to work independently. Having a partitioned, yet linearizable and reasonably performant service is not trivial, however, and this is the topic of research addressed here. To allow systems to scale, while at the same time ensuring linearizability, we propose and implement the following ideas: (i) Scalable State Machine Replication (S-SMR), (ii) Optimistic Atomic Multicast (Opt-amcast), and (iii) Fast S-SMR (Fast-SSMR). S-SMR is an execution model that allows the throughput of the system to scale linearly with the number of servers without sacrificing consistency. To provide faster responses for commands, we developed Opt-amcast, which allows messages to be delivered twice: one delivery guarantees atomic order (conservative delivery), while the other is fast but does not always guarantee atomic order (optimistic delivery). The implementation of Opt-amcast that we propose is called Ridge, a protocol that combines low latency with high throughput. Fast-SSMR is an extension of S-SMR that uses the optimistic delivery of Opt-amcast: while a command is being atomically ordered, some precomputation can be done based on its fast, optimistically ordered delivery, improving response time.
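The way Fast-SSMR exploits optimistic delivery can be pictured with a toy Python sketch: a command is speculatively executed on its optimistic delivery, and the result is either reused or recomputed when the conservative, atomically ordered delivery arrives. The key-value service and the delivery calls below are assumptions; partitioning, multicast, and the Ridge protocol itself are not modeled.

```python
# A toy sketch of reusing optimistic deliveries in a replica of a key-value service.
class SpeculativeReplica:
    def __init__(self):
        self.state = {}                       # committed key -> value
        self.speculative = {}                 # cmd_id -> (base_state, new_state, reply)

    def _apply(self, state, cmd):
        op, key, *args = cmd
        new_state = dict(state)
        if op == "put":
            new_state[key] = args[0]
            return new_state, "ok"
        return new_state, new_state.get(key)  # "get"

    def optimistic_deliver(self, cmd_id, cmd):
        base = dict(self.state)               # precompute against a snapshot
        new_state, reply = self._apply(base, cmd)
        self.speculative[cmd_id] = (base, new_state, reply)

    def conservative_deliver(self, cmd_id, cmd):
        base, new_state, reply = self.speculative.pop(cmd_id, (None, None, None))
        if base == self.state:                # state unchanged since speculation: reuse it
            self.state = new_state
            return reply
        self.state, reply = self._apply(self.state, cmd)   # otherwise redo in atomic order
        return reply

r = SpeculativeReplica()
r.optimistic_deliver(1, ("put", "x", 42))
print(r.conservative_deliver(1, ("put", "x", 42)))   # 'ok' (speculation reused)
print(r.state)                                       # {'x': 42}
```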
