11

Improving Processes Using Static Analysis Techniques

Chen, Bin 01 February 2011 (has links)
Real-world processes often undergo improvements to meet certain goals, such as coping with changed requirements, eliminating defects, improving product quality, and reducing costs. Identifying and evaluating defects or errors in a process, identifying their causes, and validating proposed improvements all require careful analysis of the process. Human-intensive processes, where human contributions require considerable domain expertise and have a significant impact on the success or failure of the overall mission, are of particular concern because they can be extremely complex and may be used in critical, including life-critical, situations. To date, analysis support for such processes has been very limited. If done at all, analysis is usually performed manually and can be extremely time-consuming, costly, and error-prone. There has been considerable success lately in using static analysis techniques to analyze hardware systems, software systems, and manufacturing processes. This thesis explores how such analysis techniques can be automated and employed to effectively analyze life-critical, human-intensive processes. We investigated two static analysis techniques, Finite-State Verification (FSV) and Fault Tree Analysis (FTA), and proposed a process analysis framework capable of performing both on rigorously defined processes. Although evaluated for processes specified in the Little-JIL process definition language, the framework is general and independent of the process definition language. For FSV, we developed a translation-based approach that takes advantage of existing FSV tools: the process definition and the property to be evaluated are translated into the input model and property representation accepted by the selected FSV tool, which is then executed to verify the model against the property. For FTA, we developed a template-based approach to automatically derive fault trees from the process definition. In addition to showing the feasibility of applying these two techniques to processes, much effort went into improving the scalability and usability of the framework so that it can be easily used to analyze complex real-world processes. To scale the analysis, we investigated several optimizations that dramatically reduce the size of the models translated for FSV tools and speed up verification. We also developed several optimizations for fault tree derivation that make the generated fault trees much more compact and easier to understand and analyze. To improve usability, we provided several approaches that make analysis results easier to understand. We employed the framework to analyze two real-world, human-intensive processes: an in-patient blood transfusion process and a chemotherapy process. The results show that the framework can be used effectively to detect defects in such real-world, human-intensive processes.
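A concrete, minimal illustration of the verify-against-a-property idea may help. The Python sketch below invents a tiny transfusion-like process and a safety property; the step names, the property, and the direct state-space search are all illustrative assumptions — the thesis translates Little-JIL definitions into the input languages of existing FSV tools rather than exploring the state space itself.

```python
# A minimal sketch of finite-state verification over a toy process model.
# The steps and the property are invented for illustration; the toy graph is
# acyclic, whereas a real FSV tool also handles cycles via visited-state sets.
from collections import deque

# Toy process: states are process steps, edges are allowed next steps.
TRANSITIONS = {
    "start":             ["pick_up_unit"],
    "pick_up_unit":      ["verify_patient_id", "administer"],  # seeded defect
    "verify_patient_id": ["administer"],
    "administer":        ["done"],
    "done":              [],
}

def violates_property(path):
    """Property: 'administer' must never occur before 'verify_patient_id'."""
    seen_verify = False
    for step in path:
        if step == "verify_patient_id":
            seen_verify = True
        if step == "administer" and not seen_verify:
            return True
    return False

def verify(start="start"):
    """Breadth-first search over executions; returns a counterexample trace."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if violates_property(path):
            return path
        for nxt in TRANSITIONS[path[-1]]:
            queue.append(path + [nxt])
    return None

trace = verify()
print("counterexample:", trace if trace else "property holds")
```

Run on this toy model, the search reports the trace that skips patient verification, which is exactly the kind of defect trace an FSV tool would return for a process definition.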
12

A finite-state morphological analyzer for Q'eqchi' using Helsinki Finite-State Technology (HFST) and the Giellatekno infrastructure

Christopherson, Cody Scott 08 December 2023 (has links) (PDF)
Finite-state morphological modeling has been used in natural language processing for many years, particularly for lower-resource languages. The present study details the development of an open-source finite-state morphological model for the Q'eqchi' Maya language using Helsinki Finite-State Technology (HFST) and the Giellatekno infrastructure. This project represents the first comprehensive morphological analyzer for Q'eqchi' and sets a foundation for future work in data annotation for this language. The resulting transducer covers 4,439 lexemes with 2,610 states and 9,558 transitions, and analyzes between 75% and 85% of tokens in a Q'eqchi' corpus. The success of this project lays the groundwork for improved automatic corpus annotation in Q'eqchi', and suggests that similar utilities can be developed for other Mayan languages.
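As a rough illustration of how such an analyzer maps surface forms to lemma-plus-tag analyses, here is a toy transducer in plain Python. The English-like lexicon and tags are invented and have nothing to do with the actual Q'eqchi' grammar or the HFST toolchain; coverage in the study's sense would be the fraction of corpus tokens for which lookup returns at least one analysis.

```python
# A minimal sketch of finite-state morphological lookup, with an invented
# toy lexicon. Transitions: (state, input_symbol, output, next_state);
# "" is an epsilon input that consumes no surface character.
TRANS = [
    (0, "w", "w", 1), (1, "a", "a", 2), (2, "l", "l", 3), (3, "k", "k", 4),
    (4, "",  "+V",     "F"),   # walk   -> walk+V
    (4, "s", "+V+3Sg", "F"),   # walks  -> walk+V+3Sg
    (4, "e", "",       5),
    (5, "d", "+V+Past","F"),   # walked -> walk+V+Past
]
ACCEPT = {"F"}

def lookup(surface):
    """Return all analyses of a surface form (depth-first over the FST)."""
    results = []
    def step(state, i, out):
        if state in ACCEPT and i == len(surface):
            results.append(out)
        for s, insym, outsym, nxt in TRANS:
            if s != state:
                continue
            if insym == "":                              # epsilon transition
                step(nxt, i, out + outsym)
            elif i < len(surface) and surface[i] == insym:
                step(nxt, i + 1, out + outsym)
    step(0, 0, "")
    return results

print(lookup("walked"))  # ['walk+V+Past']
```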
13

Low Power FPGA Design Techniques for Embedded Systems

Tiwari, Anurag 31 May 2005 (has links)
No description available.
14

Mining Multinode Constraints and Complex Boolean Expressions for Sequential Equivalence Checking

Goel, Neha 13 August 2010 (has links)
Integrated circuit design has progressed significantly over the last few decades, and the increasing complexity of hardware systems poses several challenges to digital hardware verification. Functional verification has become the most expensive and time-consuming task in the overall product development cycle: almost 70% of total development time is consumed by design verification, and this is projected to worsen further. One reason for this difficulty is that the synthesis and optimization techniques (automated as well as manual) used to improve performance, area, delay, and other measures have made the final implementation of the design very different from the golden (reference) model. Determining functional correctness between the reference and the implementation using exhaustive simulation is almost always infeasible. An alternative approach is to prove that the optimized design is functionally equivalent to the reference model, which is known to be functionally correct. The most widely used formal method for this is equivalence checking. The success of combinational equivalence checking (CEC) has enabled aggressive combinational logic synthesis and optimization for circuits with millions of logic gates. However, without powerful sequential equivalence checking (SEC) techniques, the potential and extent of sequential optimization is quite limited; the success of SEC could unleash a plethora of aggressive sequential optimizations that take circuit design to the next level. Currently, SEC remains extremely difficult compared to CEC due to the huge search space of the problem. In this thesis, we address the problem using efficient learning techniques. The first approach mines missing multi-node patterns from the mining database, verifies them, and adds those proved true during the unbounded SEC framework. The second approach mines powerful and generalized Boolean relationships among flip-flops and internal signals in a sequential circuit using a data mining algorithm. In contrast to traditional learning methods, our mining algorithms can extract illegal state cubes and inductive invariants. These invariants can be arbitrary Boolean expressions and help prune a large don't-care space for equivalence checking. The two approaches are complementary: one computes a subset of illegal states that cannot occur in the normal functional mode, while the other mines legal constraints that characterize the miter circuit and can never be violated. These relations, when added as new constraint clauses to the original formula, significantly increase the deductive power of the SAT engine, pruning a larger portion of the search space and reducing the memory and time required to solve the SEC problem. / Master of Science
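The role of the mined invariants can be sketched in a few lines. In the thesis the invariants become constraint clauses for a SAT solver; the toy below instead enumerates the joint state space of a miter-style output comparison explicitly. The two output functions and the one-hot invariant are invented for illustration.

```python
# A minimal sketch of invariant-based pruning in equivalence checking.
# Without the mined invariant, unreachable states produce spurious
# mismatches; with it, the don't-care space is pruned away.
from itertools import product

def out_ref(s2, s1, s0):
    # Reference controller: output asserted in the one-hot state 100.
    return bool(s2)

def out_impl(s2, s1, s0):
    # Optimized implementation, rewritten assuming the state stays one-hot.
    return not (s1 or s0)

def mined_invariant(s2, s1, s0):
    # Mined constraint: the 3-bit state register is one-hot in normal mode.
    return s2 + s1 + s0 == 1

def mismatches(use_invariant):
    bad = []
    for s in product([0, 1], repeat=3):
        if use_invariant and not mined_invariant(*s):
            continue  # prune states the invariant rules out
        if out_ref(*s) != bool(out_impl(*s)):
            bad.append(s)
    return bad

print(mismatches(use_invariant=False))  # spurious mismatches on illegal states
print(mismatches(use_invariant=True))   # [] -- equivalent over all legal states
```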
15

Outomatiese Setswana lemma-identifisering [Automatic Setswana lemma identification] / Jeanetta Hendrina Brits

Brits, Jeanetta Hendrina January 2006 (has links)
Within the context of natural language processing, a lemmatiser is one of the most important core technology modules to be developed for a particular language. A lemmatiser reduces words in a corpus to the corresponding lemmas of the words in the lexicon. A lemma is defined as the meaningful base form from which other, more complex forms (i.e. variants) are derived. Before a lemmatiser can be developed for a specific language, the concept "lemma" as it applies to that language must first be defined clearly. This study concludes that, in Setswana, only stems (and not roots) can act independently as words; therefore, only stems should be accepted as lemmas in the context of automatic lemmatisation for Setswana. Five of the seven parts of speech in Setswana can be viewed as closed classes, which means that these classes are not extended by means of regular morphological processes. The two other parts of speech (nouns and verbs) require the implementation of alternation rules to determine the lemma. Such alternation rules were formalised in this study for the purpose of developing a Setswana lemmatiser, with the existing Setswana grammars as their basis; this also made it possible to determine how precisely these grammars, once formalised, lemmatise Setswana words. The software developed by Van Noord (2002), FSA 6, is one of the best-known applications available for the development of finite-state automata and transducers. Regular expressions based on the formalised morphological rules were used in FSA 6 to create finite-state transducers, and the code generated by FSA 6 was implemented in the lemmatiser. The metric used to evaluate the lemmatiser is precision. On a test corpus of 1 000 words, the lemmatiser obtained 70.92%. In a separate evaluation on 500 complex nouns and 500 complex verbs, it obtained 70.96% and 70.52% respectively; on 500 nouns and 500 verbs including both complex and simplex forms, precision was 78.45% and 79.59% respectively. These quantitative results give only an indication of the relative precision of the grammars, but they provided analysed data with which the grammars could be evaluated qualitatively. The study concludes with an overview of how these results might be improved in the future. / Thesis (M.A. (African Languages))--North-West University, Potchefstroom Campus, 2006.
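A minimal sketch of the rule-based mechanism and the precision metric follows. The two rewrite rules and the gold pairs below are simplified illustrations, not the formalised alternation rules from the study.

```python
# A minimal sketch of ordered-rule lemmatisation with precision scoring.
# The rules are illustrative placeholders, not the study's formalised
# Setswana grammar; first matching rule wins.
import re

RULES = [
    (re.compile(r"^di(.+)$"), r"\1"),     # hypothetical: strip a plural prefix
    (re.compile(r"^(.+)ile$"), r"\1a"),   # hypothetical: perfect -> verb stem
]

def lemmatise(word):
    for pattern, repl in RULES:
        if pattern.match(word):
            return pattern.sub(repl, word)
    return word  # no rule applies: treat the word as already a lemma

def precision(pairs):
    """Fraction of words whose predicted lemma matches the gold lemma."""
    correct = sum(1 for word, gold in pairs if lemmatise(word) == gold)
    return correct / len(pairs)

# Hypothetical gold-standard pairs (surface form, expected lemma).
gold = [("dikgomo", "kgomo"), ("rekile", "reka"), ("pele", "pele")]
print(f"precision: {precision(gold):.2%}")
```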
16

Efficient finite-state algorithms for the application of local grammars / Algorithmes performants à états finis pour l'application de grammaires locales / Algoritmos eficientes de estados finitos para la aplicación de gramáticas locales

Sastre Martínez, Javier Miguel 16 July 2011 (has links)
No description available.
18

An Embedded Software Design to Help Asthma Patients Inhale Medication Correctly / En inbäddad programvarudesign för att hjälpa astmapatienter andas in medicin korrekt

Lei, Yuchen January 2022 (has links)
Managing respiratory disease can be hard for many patients. Patients usually use an inhaler to administer medicine on a regular basis, and even though the inhaler guideline is well accepted, most patients make mistakes. In recent years, smart inhalers with sensors have shown great potential for guiding the daily use of the inhaler and for better understanding the disease. The KTH MedTech startup Andning Med AB specializes in developing a smart add-on hardware device for the inhaler. This thesis continues the prototyping of the embedded software for the add-on device. Its main goal is to develop robust software for the hardware device that guides inhaler use in real time and collects and manages the inhaler data. To approach the problem, I use finite-state machine modelling and an object-oriented programming mindset. After development and testing, all the designed functionality was achieved: the user can be visually guided by the device, and the inhaler data can be correctly collected and uploaded to the mobile device. This work can serve as a basis for further embedded software development for a device that will eventually reach the smart inhaler market, and as a reference for similar IoT device development.
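To make the finite-state machine mindset concrete, here is a minimal Python sketch with hypothetical inhalation steps and sensor events; the real device's states, events, and guidance outputs are not specified in the abstract.

```python
# A minimal sketch of FSM-driven inhaler guidance with invented states and
# sensor events. Out-of-order events do not change state; they trigger
# corrective guidance instead, and every legal transition is logged.
class InhalerFSM:
    TRANSITIONS = {  # state -> {event: next_state}
        "IDLE":     {"cap_opened": "READY"},
        "READY":    {"shake_detected": "SHAKEN"},
        "SHAKEN":   {"inhalation_started": "INHALING"},
        "INHALING": {"inhalation_ended": "DONE", "flow_too_low": "SHAKEN"},
        "DONE":     {"cap_closed": "IDLE"},
    }

    def __init__(self):
        self.state = "IDLE"
        self.log = []  # usage data to upload to the mobile device

    def on_event(self, event):
        nxt = self.TRANSITIONS[self.state].get(event)
        if nxt is None:
            return f"guide user: unexpected '{event}' in state {self.state}"
        self.log.append((self.state, event, nxt))
        self.state = nxt
        return f"guide user: now in {nxt}"

fsm = InhalerFSM()
for e in ["cap_opened", "inhalation_started", "shake_detected"]:
    print(fsm.on_event(e))  # the out-of-order event triggers guidance
```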
19

Mimicking human player strategies in fighting games using game artificial intelligence techniques

Saini, Simardeep S. January 2014 (has links)
Fighting videogames (also known as fighting games) are ever growing in popularity and accessibility. The isolated console experiences of 20th-century gaming have been replaced by online gaming services that allow gamers to play with one another from almost anywhere in the world. This gives rise to competitive gaming on a global scale, enabling players to experience fresh play styles and challenges by playing someone new. Fighting games can typically be played either as a single-player experience or against another human player, whether via a network or a traditional multiplayer setup. However, there are two issues with these approaches. First, the single-player offering in many fighting games is regarded as simplistic in design, making the computer's moves predictable. Secondly, while playing against other human players can be more varied and challenging, this may not always be achievable due to the logistics involved in setting up such a bout. Game artificial intelligence could provide a solution to both of these issues, allowing a human player's strategy to be learned and then mimicked by the AI fighter. In this thesis, game AI techniques have been researched to provide a means of mimicking human player strategies in strategic fighting games with multiple parameters. Various techniques and their current usages are surveyed, informing the design of two separate solutions to this problem. The first solution relies solely on k-nearest-neighbour classification to identify which move should be executed based on the in-game parameters, so that decisions are made at the operational level and fed from the bottom up to the strategic level. The second solution utilises a number of existing artificial intelligence techniques, including data-driven finite state machines, hierarchical clustering and k-nearest-neighbour classification, in an architecture that makes decisions at the strategic level and feeds them from the top down to the operational level, resulting in the execution of moves. This design is underpinned by a novel algorithm to aid the mimicking process, used to identify patterns and strategies within data collated during bouts between two human players. Both solutions are evaluated quantitatively and qualitatively. A conclusion summarising the findings, as well as future work, is provided. The conclusions highlight that both solutions are proficient in mimicking human strategies, but each has its own strengths depending on the type of strategy played by the human. More structured, methodical strategies are better mimicked by the data-driven finite state machine hybrid architecture, whereas the k-nearest-neighbour approach is better suited to tactical approaches, or even random button bashing that does not always conform to a pre-defined strategy.
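The first solution's bottom-up k-nearest-neighbour idea can be sketched briefly: pick the move the human previously chose in the most similar recorded game situations. The feature vector and the recorded moves below are invented; a real implementation would use the game's actual parameters.

```python
# A minimal sketch of k-NN move selection over recorded human play.
# Situations are feature vectors [distance_to_opponent, own_health,
# opponent_health]; moves are what the human executed in those situations.
import math
from collections import Counter

HISTORY = [
    ([0.9, 0.8, 0.7], "advance"),
    ([0.2, 0.9, 0.3], "heavy_punch"),
    ([0.1, 0.2, 0.9], "block"),
    ([0.3, 0.8, 0.4], "light_kick"),
    ([0.8, 0.3, 0.8], "retreat"),
]

def choose_move(situation, k=3):
    """Majority vote over the k nearest recorded situations."""
    nearest = sorted(HISTORY, key=lambda rec: math.dist(rec[0], situation))[:k]
    votes = Counter(move for _, move in nearest)
    return votes.most_common(1)[0][0]

print(choose_move([0.25, 0.85, 0.35]))  # 'heavy_punch' (nearest recorded move)
```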
20

Online Deception Detection Using BDI Agents

Merritts, Richard Alan 01 January 2013 (has links)
This research has two facets within separate research areas. It extends the capabilities of Belief, Desire and Intention (BDI) agents, and it advances deception detection research by automating detection using BDI agents. BDI agents perform tasks automatically and autonomously; this study used these characteristics to automate deception detection with limited intervention by human users, yielding a capability general enough to have practical application for private individuals, investigators, organizations and others. The need for this research is grounded in the fact that humans are not very effective at detecting deception, whether in written or spoken form. This work extends deception detection research because typical deception detection tools are labor-intensive and require extraction of the text in question before ingestion into the tool. A neural network capability module was incorporated to give the resulting prototype machine learning attributes. The prototype developed as a result of this research was able to classify online data as either "deceptive" or "not deceptive" with 85% accuracy. The false discovery rate for "deceptive" online data entries was 20%, while the false discovery rate for "not deceptive" entries was 10%. The system was stable during test runs: no computer crashes or other anomalous system behavior were observed during the testing phase. The prototype successfully interacted with an online data communications server database and processed data using neural network input vector generation algorithms within seconds.
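For clarity on the reported metrics, here is a short sketch of the evaluation arithmetic: accuracy, and per-class false discovery rate (the fraction of items the classifier labeled as a class that did not actually belong to it). The toy predictions are invented; only the formulas mirror the reported figures.

```python
# A minimal sketch of accuracy and per-class false discovery rate (FDR).
# FDR for class c = (items labeled c that are not c) / (all items labeled c).
# The gold labels and predictions below are invented test data.
def metrics(pred, gold, classes=("deceptive", "not deceptive")):
    accuracy = sum(p == g for p, g in zip(pred, gold)) / len(gold)
    fdr = {}
    for c in classes:
        labeled_c = [(p, g) for p, g in zip(pred, gold) if p == c]
        false_c = sum(1 for p, g in labeled_c if g != c)
        fdr[c] = false_c / len(labeled_c) if labeled_c else 0.0
    return accuracy, fdr

gold = ["deceptive"] * 5 + ["not deceptive"] * 5
pred = ["deceptive", "deceptive", "deceptive", "deceptive", "not deceptive",
        "deceptive", "not deceptive", "not deceptive", "not deceptive",
        "not deceptive"]
acc, fdr = metrics(pred, gold)
print(f"accuracy={acc:.0%}, FDR={fdr}")  # accuracy=80%, FDR 20% per class here
```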
