31

Quantifying Parkinson's Disease Symptoms Using Mobile Devices

Aylward, Charles R 01 December 2016 (has links)
Current assessments for evaluating the progression of Parkinson’s Disease are largely qualitative and based on small sets of data obtained from occasional doctor-patient interactions. There is a clinical need to improve the techniques used for mitigating common Parkinson’s Disease symptoms. Available data sets for researching the disease are minimal, hindering advancement toward understanding the underlying causes and the effectiveness of treatments and therapies. Mobile devices present an opportunity to continuously monitor Parkinson’s Disease patients and collect important information regarding the severity of symptoms. The evolution of digital technology has opened doors for clinical research to extend beyond the clinic by incorporating complex sensors in commonly used devices. Leveraging these sensors to quantify characteristic Parkinson’s Disease symptoms may drastically improve patient care and the reliability of symptom assessment. The goal of this project is to design and develop a system for measuring and analyzing the cardinal symptoms of Parkinson’s using mobile devices. An application for the iPhone and Apple Watch is developed, utilizing the sensors on the devices to collect data during the performance of motor tasks. Assessments for tremor, bradykinesia, and postural instability are implemented to mimic UPDRS evaluations normally performed by a neurologist. The application connects to a cloud-based server to transfer the collected data for remote access and analysis. Example MATLAB analysis demonstrates potential approaches for extracting meaningful data to be used for monitoring the progression of Parkinson’s Disease and the effectiveness of treatments and therapies. High-level verification testing is performed to show the general efficacy of the assessment tasks. The system design successfully lays the groundwork for a mobile device-based assessment tool to objectively measure Parkinson’s Disease symptoms.
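
The MATLAB analysis itself is not reproduced in the abstract; as a hedged illustration of the kind of processing it describes, the Python sketch below estimates a dominant rest-tremor frequency from wrist accelerometer samples. The sampling rate, frequency band, and synthetic signal are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def dominant_tremor_frequency(accel, fs=50.0):
    """Estimate the dominant tremor frequency (Hz) from an
    accelerometer magnitude signal sampled at fs Hz."""
    accel = accel - np.mean(accel)           # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel))    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    band = (freqs >= 3.0) & (freqs <= 7.0)   # Parkinsonian rest tremor is typically 4-6 Hz
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic example: a 5 Hz tremor with noise, 10 s at 50 Hz
t = np.arange(0, 10, 1 / 50.0)
signal = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(t.size)
print(f"Dominant tremor frequency: {dominant_tremor_frequency(signal):.1f} Hz")
```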
32

Self-Healing Cellular Automata to Correct Soft Errors in Defective Embedded Program Memories

Voddi, Varun 01 December 2009 (has links)
Static Random Access Memory (SRAM) cells in ultra-low power Integrated Circuits (ICs) based on nanoscale Complementary Metal Oxide Semiconductor (CMOS) devices are likely to be the most vulnerable to large-scale soft errors. Conventional error correction circuits may not be able to handle the distributed nature of such errors and are susceptible to soft errors themselves. In this thesis, a distributed error correction circuit called Self-Healing Cellular Automata (SHCA) that can repair itself is presented. A possible way to deploy an SHCA in a system of SRAM-based embedded program memories (ePM) for one type of chip multiprocessor is also discussed. The SHCA is compared with conventional error correction approaches, and its strengths and limitations are analyzed.
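
The SHCA design itself is not detailed in the abstract; the Python toy below illustrates only the underlying principle — correction performed by local, distributed votes rather than a centralized decoder — using a bitwise 2-of-3 majority over redundant copies. The redundancy scheme and cell rule are illustrative assumptions, not the thesis's SHCA.

```python
import numpy as np

def majority_repair(copies):
    """One distributed correction step: every bit position is repaired
    locally by majority vote across three redundant copies, the kind of
    cell-local rule a self-healing cellular automaton builds on."""
    a, b, c = copies
    majority = (a & b) | (b & c) | (a & c)   # bitwise 2-of-3 vote
    return [majority.copy(), majority.copy(), majority.copy()]

rng = np.random.default_rng(0)
word = rng.integers(0, 2, size=16, dtype=np.uint8)
copies = [word.copy() for _ in range(3)]
copies[1][5] ^= 1                            # inject a soft error in one copy
copies[2][9] ^= 1                            # and another, in a different copy
repaired = majority_repair(copies)
assert all((c == word).all() for c in repaired)
```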
33

A Quantitative Analysis of Memory Controller Page Policies

Blackmore, Matthew 28 February 2013 (has links)
Two common goals in computing system design are increasing performance and decreasing power consumption. DRAM-based memory subsystems are a major component of both system performance and power consumption. Memory controllers employ strategies to efficiently schedule DRAM operations to reduce latency and to utilize DRAM low power modes when possible. One of the most important of these is the page policy, which determines when to close pages in DRAM. An effective DRAM memory controller page policy is important to minimizing power consumption and increasing system performance. This thesis explores the impact memory controller page policy has on performance as measured by the number of page-hits minus page-misses and estimated average memory access latency. I captured real-time DDR3 command and address memory traces for the SPEC CPU2006 benchmarks under three memory controller page policies: closed page, fixed open-page, and Intel's adaptive open-page [1]. Traces were captured using a programmable memory traffic analyzer (PMTA), a device interposed between the DIMM slot and DDR3 DIMM on the motherboard. The memory traces for each benchmark were analyzed to determine the absolute number of page-hits and page-misses that occurred. In software post-processing I simulated a theoretically perfect "oracle" page policy for each captured trace to compare the efficiency of existing policies. The SPEC CPU2006 benchmarks under the oracle page policy for each trace exhibited an average increase in the number of page-hits minus page-misses of 280.3% and an average decrease in the average memory latency of 11.1%. Two new adaptive open-page policies are proposed and simulated using the captured memory traces. These proposed policies result in an average increase of 74.8% and 62.4% in the number of page-hits minus page-misses over Intel's adaptive open-page policy and an average decrease in the average memory latency of 3.8% and 3.4%.
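
As a minimal sketch of the figure of merit used above — page-hits minus page-misses — the Python below scores a trace of (bank, row) requests under a fixed open-page policy. The trace format and single-rank granularity are simplifications; timing, power modes, and the closed-page and oracle policies are not modeled.

```python
def score_open_page(trace):
    """Count page-hits minus page-misses for a fixed open-page policy:
    the row left open in each bank is a hit if re-referenced, a miss
    (precharge + activate) if a different row arrives."""
    open_row = {}
    hits = misses = 0
    for bank, row in trace:
        if open_row.get(bank) == row:
            hits += 1
        else:
            misses += 1
            open_row[bank] = row
    return hits - misses

# A toy trace of (bank, row) requests with some row locality
trace = [(0, 7), (0, 7), (0, 7), (1, 3), (0, 2), (1, 3), (0, 2)]
print(score_open_page(trace))   # 4 hits - 3 misses = 1
```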
34

Ranked Similarity Search of Scientific Datasets: An Information Retrieval Approach

Megler, Veronika Margaret 04 June 2014 (has links)
In the past decade, the amount of scientific data collected and generated by scientists has grown dramatically. This growth has intensified an existing problem: in large archives consisting of datasets stored in many files, formats and locations, how can scientists find data relevant to their research interests? We approach this problem in a new way: by adapting Information Retrieval techniques, developed for searching text documents, into the world of (primarily numeric) scientific data. We propose an approach that uses a blend of automated and curated methods to extract metadata from large repositories of scientific data. We then perform searches over this metadata, returning results ranked by similarity to the search criteria. We present a model of this approach, and describe a specific implementation thereof performed at an ocean-observatory data archive and now running in production. Our prototype implements scanners that extract metadata from datasets that contain different kinds of environmental observations, and a search engine with a candidate similarity measure for comparing a set of search terms to the extracted metadata. We evaluate the utility of the prototype by performing two user studies; these studies show that the approach resonates with users, and that our proposed similarity measure performs well when analyzed using standard Information Retrieval evaluation methods. We ran performance tests to explore how continued archive growth will affect our goal of interactive response, developed and applied techniques that mitigate the effects of that growth, and show that the techniques are effective. Lastly, we describe some of the research needed to extend this initial work into a true "Google for data".
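
The thesis's candidate similarity measure is not specified in the abstract; the sketch below shows one plausible shape for ranking numeric scientific metadata — scoring each dataset by how well its observed variable ranges cover the query's ranges. The metadata layout and scoring formula are assumptions for illustration.

```python
def range_similarity(query, metadata):
    """Score one dataset's metadata against numeric search criteria:
    for each queried variable, the fraction of the query interval
    covered by the dataset's observed (lo, hi) range, averaged."""
    scores = []
    for var, (q_lo, q_hi) in query.items():
        if var not in metadata:
            scores.append(0.0)
            continue
        m_lo, m_hi = metadata[var]
        overlap = max(0.0, min(q_hi, m_hi) - max(q_lo, m_lo))
        scores.append(overlap / (q_hi - q_lo))
    return sum(scores) / len(scores)

datasets = {
    "cruise_2009_07": {"temperature": (8.0, 14.0), "salinity": (28.0, 34.0)},
    "mooring_2010":   {"temperature": (4.0, 9.0)},
}
query = {"temperature": (10.0, 12.0), "salinity": (30.0, 32.0)}
ranked = sorted(datasets, key=lambda d: range_similarity(query, datasets[d]), reverse=True)
print(ranked)   # datasets ordered by similarity to the search criteria
```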
35

Entropy reduction of English text using variable length grouping

Ast, Vincent Norman 01 July 1972 (has links)
It is known that the entropy of English text can be reduced by arranging the text into groups of two or more letters each. The higher the order of the grouping, the greater the entropy reduction. Using this principle in a computer text compressing system brings about difficulties, however, because the number of entries required in the translation table increases exponentially with group size. This experiment examined the possibility of using a translation table containing only selected entries of all group sizes, with the expectation of obtaining a substantial entropy reduction with a relatively small table. An expression was derived showing that the groups which should be included in the table are not necessarily those that occur frequently, but rather those that occur more frequently than would be expected due to random occurrence. This was complicated by the fact that any grouping affects the frequency of occurrence of many other related groups. An algorithm was developed in which the table originally starts with the regular 26 letters of the alphabet and the space. Entries, which consist of letter groups, complete words, and word groups, are then added one by one based on the selection criterion. After each entry is added, adjustments are made to account for the interaction of the groups. This algorithm was programmed on a computer and was run using a text sample of about 7000 words. The results showed that the entropy could easily be reduced to 3 bits per letter with a table of fewer than 200 entries. With about 500 entries the entropy could be reduced to about 2.5 bits per letter. About 60% of the table was composed of letter groups, 42% of single words, and 8% of word groups, indicating that the extra complications involved in handling word groups may not be worthwhile. A visual examination of the table showed that many entries were strongly oriented to the particular sample. This may or may not be desirable depending on the intended use of the translating system.
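
As a minimal sketch of the measurement at the heart of this approach — how a mixed table of letters, letter groups, and words changes the per-letter entropy — the Python below greedily tokenizes text with the longest matching table entry and reports zeroth-order entropy per original letter. The greedy tokenizer and sample table are assumptions; the thesis's incremental, interaction-adjusted selection algorithm is not reproduced.

```python
import math
from collections import Counter

def entropy_per_letter(text, table):
    """Greedily tokenize text with the longest matching table entry,
    then report the zeroth-order entropy per original letter."""
    tokens, i = [], 0
    longest = max(map(len, table))
    while i < len(text):
        for size in range(min(longest, len(text) - i), 0, -1):
            if text[i:i + size] in table:
                tokens.append(text[i:i + size])
                i += size
                break
    counts = Counter(tokens)
    total = sum(counts.values())
    bits = -sum(c / total * math.log2(c / total) for c in counts.values())
    return bits * total / len(text)   # bits per token -> bits per letter

# 27 base symbols plus a few selected letter groups and whole words
table = set("abcdefghijklmnopqrstuvwxyz ") | {"th", "the ", "and "}
print(entropy_per_letter("the cat and the dog ran and hid", table))
```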
36

SADDAS; a self-contained analog to digital data acquisition system.

Petersen, Walter Anton 01 January 1972 (has links)
SADDAS, a Self-contained Analog to Digital Data Acquisition System, converts analog voltage inputs to formatted BCD (binary coded decimal) data on digital magnetic tape. SADDAS consists of a 16 channel multiplexer, a 17 bit (4 digits + sign) 40 microsecond analog to digital converter, a 512 byte 8 bit core memory, a 30 IPS (inches per second) digital tape recorder writing at a density of 556 cpi (characters per inch), and a controller which integrates these instruments into a flexible and easy-to-use system. Sampling rates in excess of 360 samples per second may be used when converting seven channels of data, such as IRIG (Inter Range Instrumentation Group) analog magnetic tapes.
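
A back-of-the-envelope check, under an assumed tape format of roughly six characters per formatted sample (4 digits + sign + separator — an assumption, not a figure from the thesis), suggests the tape, not the 40 microsecond converter, bounds the per-channel rate:

```python
# Throughput check for the quoted specs (chars_per_sample is assumed)
tape_chars_per_sec = 30 * 556          # 30 IPS at 556 cpi = 16,680 chars/s
adc_conversions_per_sec = 1 / 40e-6    # 25,000 conversions/s
chars_per_sample = 6                   # 4 digits + sign + separator (assumed)
channels = 7
max_rate = tape_chars_per_sec / (chars_per_sample * channels)
print(f"{max_rate:.0f} samples/s per channel")   # ~397, consistent with ">360"
```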
37

Efficient Social Network Data Query Processing on MapReduce

Liu, Liu 01 January 2013 (has links) (PDF)
Social network data analysis is becoming increasingly important today. In order to improve the integration and reuse of their data, many social networks have started to represent their data in RDF. Accordingly, one common approach for social network data analysis is to employ SPARQL to query RDF data. As the sizes of social networks expand rapidly, queries need to be executed in parallel, for example using the MapReduce framework. However, the state-of-the-art translation from SPARQL queries to MapReduce jobs, which mainly follows a two-layer scheme in which SPARQL is first translated to SQL joins, is not efficient. In this thesis, we introduce two primitives to enable automatic translation from SPARQL to MapReduce and efficient execution of the SPARQL queries. We use multiple-join-with-filter to substitute for traditional SQL multiple joins when feasible, and we merge different stages in the MapReduce query workflow. Evaluation on social network benchmarks shows that these two primitives achieve up to a 2x speedup in query running time compared with the original two-layer scheme.
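
The thesis's primitives run on an actual MapReduce stack; the toy in-memory simulation below only illustrates the idea behind a multiple-join-with-filter — several triple patterns joined on a shared variable in a single map/reduce stage, with the filter pushed into the map. The triple data, patterns, and filter are invented for illustration.

```python
from collections import defaultdict

# Triples as (subject, predicate, object); the query joins two patterns
# on a shared variable ?p: (?p knows ?q) AND (?p age ?a), filtering on age.
triples = [
    ("alice", "knows", "bob"), ("carol", "knows", "dave"),
    ("alice", "age", "34"), ("carol", "age", "19"),
]

def map_phase(triple):
    s, p, o = triple
    if p == "knows":
        yield s, ("knows", o)
    elif p == "age" and int(o) >= 21:   # filter pushed into the map
        yield s, ("age", o)

def reduce_phase(key, values):
    knows = [o for tag, o in values if tag == "knows"]
    ages = [o for tag, o in values if tag == "age"]
    for q in knows:
        for a in ages:                  # join both patterns in one reduce
            yield key, q, a

grouped = defaultdict(list)             # simulate the shuffle/group step
for t in triples:
    for k, v in map_phase(t):
        grouped[k].append(v)
print([r for k, vs in grouped.items() for r in reduce_phase(k, vs)])
```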
38

Improvement of Statistical Process Control at St. Jude Medical's Cardiac Manufacturing Facility

Edwards, Christopher Lance 01 June 2012 (has links) (PDF)
Six Sigma is a methodology in which companies strive for processes so consistent that their products have a 99.99966% chance of being free of defects. In order for companies to reach Six Sigma, statistical process control (SPC) needs to be introduced. SPC has many different tools associated with it, control charts being one of them. Control charts play a vital role in managing how a process is behaving. Control charts allow users to identify special causes, or shifts, so that the process can be adjusted to keep producing good products, free of defects. Many factories and manufacturing facilities have implemented some form of statistical process control. St. Jude Medical implemented control charts to monitor different tools on the manufacturing line. How the data is entered and stored poses difficulties for the person monitoring the processes. The program used to keep the control charts is not user friendly and is difficult to use. A new program can be produced to provide a greater level of efficiency. The goals of this project are to stress how important control charts are in the manufacturing world, to identify the problems currently facing operators and supervisors, and to show how a new and improved program can help fix the current situation. This paper goes into the reasons for the change as well as what has been improved.
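
As a minimal sketch of the control-chart mechanics described above — not of St. Jude Medical's program — the Python below builds an individuals (X) chart: limits are estimated from an in-control baseline, then new measurements outside the 3-sigma limits are flagged as special causes. The measurement values are invented for illustration.

```python
import statistics

def control_limits(baseline):
    """Phase I: estimate the center line and 3-sigma limits of an
    individuals (X) chart from in-control baseline measurements,
    using the average moving range to estimate sigma."""
    center = statistics.mean(baseline)
    mr = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma = statistics.mean(mr) / 1.128   # d2 constant for subgroups of 2
    return center - 3 * sigma, center, center + 3 * sigma

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
lcl, cl, ucl = control_limits(baseline)

# Phase II: monitor the running process and flag special causes
for x in [10.0, 10.1, 11.2, 9.9]:
    if not lcl <= x <= ucl:
        print(f"special cause: {x} outside ({lcl:.2f}, {ucl:.2f})")
```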
39

JDiet: Footprint Reduction for Memory-Constrained Systems

Huffman, Michael John 01 June 2009 (has links) (PDF)
Main memory remains a scarce computing resource. Even though main memory is becoming more abundant, software applications are inexorably engineered to consume as much memory as is available. For example, expert systems, scientific computing, data mining, and embedded systems commonly suffer from the lack of main memory availability. This thesis introduces JDiet, an innovative memory management system for Java applications. The goal of JDiet is to provide the developer with a highly configurable framework to reduce the memory footprint of a memory-constrained system, enabling it to operate on much larger working sets. Inspired by buffer management techniques common in modern database management systems, JDiet frees main memory by evicting non-essential data to a disk-based store. A buffer retains a fixed amount of managed objects in main memory. As non-resident objects are accessed, they are swapped from the store to the buffer using an extensible replacement policy. While the Java virtual machine naïvely delegates virtual memory management to the operating system, JDiet empowers the system designer to select both the managed data and replacement policy. Guided by compile-time configuration, JDiet performs aspect-oriented bytecode engineering, requiring no explicit coupling to the source or compiled code. The results of an experimental evaluation of the effectiveness of JDiet are reported. A JDiet-enabled XML DOM parser is capable of parsing and processing over 200% larger input documents by sacrificing less than an order of magnitude in performance.
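
JDiet itself works on Java bytecode via aspect-oriented weaving; the Python toy below illustrates only the buffer-management core described above — a fixed-capacity resident buffer that evicts the least recently used object to a disk store and faults it back on access. The class and file layout are invented for illustration.

```python
from collections import OrderedDict
import os
import pickle
import tempfile

class EvictingBuffer:
    """A toy fixed-capacity object buffer in the spirit of JDiet:
    resident objects live in memory; on overflow the least recently
    used one is pickled out to a disk store and faulted back on access."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()          # insertion order = recency
        self.store_dir = tempfile.mkdtemp()

    def put(self, key, obj):
        self.resident[key] = obj
        self.resident.move_to_end(key)
        if len(self.resident) > self.capacity:
            victim, value = self.resident.popitem(last=False)  # evict LRU
            with open(os.path.join(self.store_dir, str(victim)), "wb") as f:
                pickle.dump(value, f)

    def get(self, key):
        if key not in self.resident:           # fault it back in from disk
            with open(os.path.join(self.store_dir, str(key)), "rb") as f:
                self.put(key, pickle.load(f))
        self.resident.move_to_end(key)
        return self.resident[key]

buf = EvictingBuffer(capacity=2)
for i in range(4):
    buf.put(i, list(range(i * 100, i * 100 + 50)))
print(buf.get(0)[:3])   # 0 was evicted to disk and is transparently reloaded
```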
40

Amaethon – A Web Application for Farm Management and an Assessment of Its Utility

Yero, Tyler 01 December 2012 (has links) (PDF)
Amaethon is a web application designed for enterprise farm management. It takes a job typically performed with spreadsheets, paper, or custom software and puts it on the web. Farm administration personnel may use it to schedule farm operations and manage their resources and equipment. A survey was conducted to assess Amaethon’s user interface design. Participants in the survey were two groups of students and a small group of agriculture professionals. Among other results, the survey indicated that a calendar interface inside Amaethon was preferred over, and statistically no less effective than, a map interface. This is despite the fact that a map interface was viewed by some users as a potentially important and effective component of Amaethon.
