1081

Design of a data acquisition system to control and monitor a velocity probe in a fluid flow field

Herwig, Nancy Lou January 1982 (has links)
A data acquisition system to control the position of a velocity probe and to acquire digital voltages as an indication of fluid velocity is presented. This system replaces a similar manually operated traverse system; it relieves the operator of control and acquisition tasks while providing a more consistent and systematic approach to the acquisition process. The design is built around the TRS-80 microcomputer, with external interfacing accomplished using an STD bus-based design. / Master of Science
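The control-and-acquire loop such a system performs can be illustrated with a brief sketch; the function names, step sizes, and simulated readings below are hypothetical stand-ins, not the thesis's TRS-80/STD-bus implementation.

```python
import random

def move_probe_to(position_mm: float) -> None:
    """Stand-in for the traverse-positioning command (hypothetical interface)."""
    pass  # a real system would drive the traverse hardware over the I/O bus

def read_adc_voltage() -> float:
    """Stand-in for one digitized probe reading (hypothetical interface)."""
    return 2.5 + random.uniform(-0.05, 0.05)  # simulated ADC voltage

def survey_flow_field(start_mm=0.0, stop_mm=100.0, step_mm=5.0, samples_per_point=10):
    """Step the probe across the measurement range, averaging readings at each station."""
    results = []
    position = start_mm
    while position <= stop_mm:
        move_probe_to(position)
        readings = [read_adc_voltage() for _ in range(samples_per_point)]
        results.append((position, sum(readings) / len(readings)))
        position += step_mm
    return results  # (position, mean voltage) pairs; voltage is calibrated to velocity offline
```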
1082

Real time data acquisition for load management

Ghosh, Sushmita 15 November 2013 (has links)
Demand for data transfer between computers has increased ever since the introduction of the personal computer (PC). Data communication on a PC is much more productive because the PC is an intelligent terminal that can connect to various hosts over the same I/O hardware circuit and can also execute processes on its own as an isolated system. Yet the PC on its own cannot communicate data: it requires a hardware interface circuit and software for controlling the handshaking signals and setting up communication parameters. The data is often distorted by line noise, and such transmission errors are embedded in the data and require careful filtering. This thesis deals with the development of a data acquisition system that collects real-time load and weather data and stores it in a historical database for use by a load forecast algorithm in a load management system. A filtering technique has been developed that checks for transmission errors in the raw data. The microcomputers used in this development are the IBM PC/XT and the AT&T 3B2 supermicrocomputer. / Master of Science
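The abstract does not detail the filtering technique, but a minimal sketch of the kind of range-and-spike check used to catch line-noise errors in raw load readings might look like the following (thresholds and values are illustrative assumptions, not the thesis's rules):

```python
def filter_samples(samples, low=0.0, high=5000.0, max_jump=500.0):
    """Drop readings that fall outside a plausible range or jump implausibly
    far from the previous accepted value (likely line-noise corruption)."""
    clean = []
    previous = None
    for value in samples:
        if not (low <= value <= high):
            continue  # out-of-range value, treat as a transmission error
        if previous is not None and abs(value - previous) > max_jump:
            continue  # implausible spike relative to the last good reading
        clean.append(value)
        previous = value
    return clean

# Example: a noisy sequence of load readings (kW)
print(filter_samples([1200.0, 1225.0, 9999.0, 1230.0, -50.0, 1240.0]))
# -> [1200.0, 1225.0, 1230.0, 1240.0]
```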
1083

Event-related Collections Understanding and Services

Li, Liuqing 18 March 2020 (has links)
Event-related collections, including both tweets and webpages, have valuable information, and are worth exploring in interdisciplinary research and education. Unfortunately, such data is noisy, so this variety of information has not been adequately exploited. Further, for better understanding, more knowledge hidden behind events needs to be unearthed. Regarding these collections, different communities may have different requirements in particular scenarios. Some may need relatively clean datasets for data exploration and data mining. Social researchers require preprocessing of information, so they can conduct analyses. The general public is interested in the overall descriptions of events. However, few systems, tools, or methods exist to support the flexible use of event-related collections. In this research, we propose a new, integrated system to process and analyze event-related collections at different levels (i.e., data, information, and knowledge). It also provides various services and covers the most important stages in a system pipeline, including collection development, curation, analysis, integration, and visualization. Firstly, we propose a query likelihood model with pre-query design and post-query expansion to rank a webpage corpus by query generation probability, and retrieve relevant webpages from event-related tweet collections. We further preserve webpage data into WARC files and enrich original tweets with webpages in JSON format. As an application of data management, we conduct an empirical study of the embedded URLs in tweets based on collection development and data curation techniques. Secondly, we develop TwiRole, an integrated model for 3-way user classification on Twitter, which detects brand-related, female-related, and male-related tweeters through multiple features with both machine learning (i.e., random forest classifier) and deep learning (i.e., an 18-layer ResNet) techniques. As guidance to user-centered social research at the information level, we combine TwiRole with a pre-trained recurrent neural network-based emotion detection model, and carry out tweeting pattern analyses on disaster-related collections. Finally, we propose a tweet-guided multi-document summarization (TMDS) model, which generates summaries of the event-related collections by using tweets associated with those events. The TMDS model also considers three aspects of named entities (i.e., importance, relatedness, and diversity) as well as topics, to score sentences in webpages, and then rank selected relevant sentences in proper order for summarization. The entire system is realized using many technologies, such as collection development, natural language processing, machine learning, and deep learning. For each part, comprehensive evaluations are carried out that confirm the effectiveness and accuracy of our proposed approaches. Regarding broader impact, the outcomes proposed in our study can be easily adopted or extended for further event analyses and service development. / Doctor of Philosophy / Event-related collections, including both tweets and webpages, have valuable information. They are worth exploring in interdisciplinary research and education. Unfortunately, such data is noisy. Many tweets and webpages are not relevant to the events. This leads to difficulties during data analysis of the datasets, as well as explanation of the results. Further, for better understanding, more knowledge hidden behind events needs to be unearthed.
Regarding these collections, different groups of people may have different requirements. Some may need relatively clean datasets for data exploration. Some require preprocessing of information, so they can conduct analyses, e.g., based on tweeter type or content topic. The general public is interested in the overall descriptions of events. However, few systems, tools, or methods exist to support the flexible use of event-related collections. Accordingly, we describe our new framework and integrated system to process and analyze event-related collections. It provides varied services and covers the most important stages in a system pipeline. It has sub-systems to clean, manage, analyze, integrate, and visualize event-related collections. It takes an event-related tweet collection as input and generates an event-related webpage corpus by leveraging Wikipedia and the URLs embedded in tweets. It also combines and enriches original tweets with webpages. As an application of data management, we conduct an empirical study of tweets and their embedded URLs. We developed TwiRole for 3-way user classification on Twitter. It detects brand-related, female-related, and male-related tweeters through their profiles, tweets, and images. To aid user-centered social research, we combine TwiRole with an existing emotion detection tool, and carry out tweeting pattern analyses on disaster-related collections. Finally, we propose a tweet-guided multi-document summarization (TMDS) model and service, which generates summaries of the event-related collections by using tweets associated with those events. It extracts important sentences across different topics from webpages, and organizes them in proper order. The entire system is realized using many technologies, such as collection development, natural language processing, machine learning, and deep learning. For each part, comprehensive evaluations help confirm the effectiveness and accuracy of our proposed approaches. Regarding broader impact, our methods and system can be easily adopted or extended for further event analyses and service development.
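As a rough illustration of the query likelihood ranking described in the abstract, a Dirichlet-smoothed unigram scorer is sketched below. This is a generic formulation that omits the pre-query design and post-query expansion the dissertation adds; the parameter mu and the data structures are assumptions.

```python
import math
from collections import Counter

def query_likelihood_score(query_terms, doc_terms, collection_tf, collection_len, mu=2000.0):
    """Log P(query | document) under a Dirichlet-smoothed unigram language model.
    Higher scores mean the document is more likely to 'generate' the query."""
    doc_tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        p_collection = collection_tf.get(term, 0) / collection_len  # background model
        if p_collection == 0.0:
            continue  # unseen term: skip rather than take log(0) in this sketch
        p_term = (doc_tf[term] + mu * p_collection) / (doc_len + mu)
        score += math.log(p_term)
    return score

# Webpages in the corpus would be ranked by this score for a query
# derived from the event-related tweet collection.
```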
1084

Autonomous Sample Collection Using Image-Based 3D Reconstructions

Torok, Matthew M. 14 May 2012 (has links)
Sample collection is a common task for mobile robots, and there are a variety of manipulators available to perform this operation. This thesis presents a novel scoop sample collection system design that is able to both collect and contain a sample using the same hardware. To ease the operator burden during sampling, the scoop system is paired with new semi-autonomous and fully autonomous collection techniques. These are derived from data provided by colored 3D point clouds produced via image-based 3D reconstructions. A custom robotic mobility platform, the Scoopbot, is introduced to perform completely automated imaging of the sampling area and also to pick up the desired sample. The Scoopbot is wirelessly controlled by a base station computer which runs software to create and analyze the 3D point cloud models. Relevant sample parameters, such as dimensions and volume, are calculated from the reconstruction and reported to the operator. During tests of the system in full (48 images) and fast (6-8 images) modes, the Scoopbot was able to identify and retrieve a sample without any human intervention. Finally, a new building crack detection algorithm (CDA) is created to use the 3D point cloud outputs from image sets gathered by a mobile robot. The CDA was shown to successfully identify and color-code several cracks in a full-scale concrete building element. / Master of Science
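One common way to derive sample dimensions and volume from a reconstructed point cloud is via the convex hull of the segmented sample points; the sketch below illustrates that idea and is an assumption, not necessarily the method used in the thesis.

```python
import numpy as np
from scipy.spatial import ConvexHull

def sample_metrics(points: np.ndarray):
    """Approximate bounding dimensions and volume of a sample from its
    segmented 3D points (N x 3 array, in consistent length units)."""
    hull = ConvexHull(points)
    dims = points.max(axis=0) - points.min(axis=0)  # axis-aligned extents
    return {"dimensions": dims, "volume": hull.volume, "surface_area": hull.area}

# Example with synthetic points filling a 1 x 1 x 1 cube
rng = np.random.default_rng(0)
print(sample_metrics(rng.random((500, 3))))  # hull volume approaches 1.0 as N grows
```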
1085

Screening Indian plant species for antiplasmodial properties – ethnopharmacological compared to random selection.

Kantamreddi, Venkata Siva Satya Narayana, Wright, Colin W. 01 1900 (has links)
No / In the search for biologically active plant species, many studies have shown that an ethnopharmacological approach is more effective than a random collection. In order to determine whether this is true in the case of plant species used for the treatment of malaria in Orissa, India, the antiplasmodial activities of extracts prepared from 25 traditionally used species were compared with those of 25 species collected randomly. As expected, plant species used traditionally for the treatment of malaria were more likely to exhibit antiplasmodial activity (21 species (84%) active against Plasmodium falciparum strain 3D7) than plant species collected randomly (9 species (32%)). However, of the nine active randomly collected species, eight had not previously been reported to possess antiplasmodial activity, while one inactive species had been reported to be active in another study. Of the 21 active species used as traditional antimalarial treatments, only six had been reported previously. This study suggests that while the selection of traditional medicinal plants is more predictive of antiplasmodial activity, random collections may still be of value for the identification of new antiplasmodial species.
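For readers who want to gauge whether the 84% versus 32% hit rates could plausibly be due to chance, a quick significance check (not reported in the abstract; shown only as an illustration) can be run on the counts given:

```python
from scipy.stats import fisher_exact

# Active vs. inactive counts from the abstract:
# ethnopharmacological selection: 21 of 25 active; random collection: 9 of 25 active.
table = [[21, 25 - 21],
         [9, 25 - 9]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```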
1086

Asset Management Data Collection for Supporting Decision Processes

Pantelias, Aristeidis 23 August 2005 (has links)
Transportation agencies engage in extensive data collection activities in order to support their decision processes at various levels. However, not all of the data collected supplies transportation officials with useful information for efficient and effective decision-making. This thesis presents research aimed at formally identifying links between data collection and the supported decision processes. The research objective is to identify existing relationships between Asset Management data collection and the decision processes it is meant to support, particularly at the project selection level, and to propose a framework for effective and efficient data collection. The motivation of the project was to help transportation agencies optimize their data collection processes and reduce data collection and management costs. The methodology entailed two parts: a comprehensive literature review that collected information from various academic and industrial sources around the world (mostly from Europe, Australia, and Canada) and the development of a web survey that was e-mailed to specific expert individuals within the 50 U.S. Departments of Transportation (DOTs) and Puerto Rico. The electronic questionnaire was designed to capture state officials' experience and practice on: asset management endorsement and implementation; data collection, management, and integration; decision-making levels and decision processes; and identified relations between decision processes and data collection. The responses obtained from the web survey were analyzed statistically and combined with the other resources in order to develop the proposed framework and recommendations. The results of this research are expected to help transportation agencies and organizations not only reduce their data collection costs but also make more effective project selection decisions. / Master of Science
1087

Microscopic Control Delay Modeling at Signalized Arterials Using Bluetooth Technology

Rajasekhar, Lakshmi 10 January 2012 (has links)
Real-time control delay estimation is an important performance measure for any intersection: it allows signal timing plans to be improved dynamically in real time and hence improves overall system performance. Control delay estimates help determine the level-of-service (LOS) characteristics of the approaches at an intersection and account for deceleration delay, stopped delay, and acceleration delay. Traffic delay calculation, and control delay calculation in particular, has always been complicated and laborious, because no low-cost direct method has existed to measure it in real time in the field. A recently validated technology, Bluetooth Media Access Control (MAC) ID matching traffic data collection, holds promise for continuous and cost-effective traffic data collection. Bluetooth traffic data synchronized with vehicle trajectory plots generated from GPS probe vehicle runs has been used to develop control delay models that have the potential to predict control delays in real time based on Bluetooth detection error parameters in the field. Incorporating control delay estimates into real-time traffic control management would significantly improve overall system performance. / Master of Science
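Control delay, as described above, is the extra travel time a vehicle experiences through an intersection's influence area relative to free-flow travel (it bundles deceleration, stopped, and acceleration delay). A minimal sketch of estimating it from a single GPS probe trajectory follows; the segment bounds and free-flow speed are illustrative assumptions, not values from the thesis.

```python
def control_delay(timestamps, positions, seg_start, seg_end, free_flow_speed):
    """Estimate control delay (s) for one probe run through an intersection's
    influence area: observed traversal time minus free-flow traversal time.

    timestamps: seconds; positions: metres along the approach (both increasing);
    seg_start/seg_end: bounds of the influence area (m); free_flow_speed: m/s.
    """
    def time_at(target):
        # Linearly interpolate the time at which the vehicle crosses a position.
        for (t0, x0), (t1, x1) in zip(zip(timestamps, positions),
                                      zip(timestamps[1:], positions[1:])):
            if x0 <= target <= x1 and x1 > x0:
                return t0 + (t1 - t0) * (target - x0) / (x1 - x0)
        raise ValueError("target position not covered by the trajectory")

    observed = time_at(seg_end) - time_at(seg_start)
    free_flow = (seg_end - seg_start) / free_flow_speed
    return max(0.0, observed - free_flow)
```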
1088

Hidden labour: The skilful work of clinical audit data collection and its implications for secondary use of data via integrated health IT

McVey, Lynn, Alvarado, Natasha, Greenhalgh, J., Elshehaly, Mai, Gale, C.P., Lake, J., Ruddle, R.A., Dowding, D., Mamas, M., Feltbower, R., Randell, Rebecca 26 July 2021 (has links)
Yes / Secondary use of data via integrated health information technology is fundamental to many healthcare policies and processes worldwide. However, repurposing data can be problematic and little research has been undertaken into the everyday practicalities of inter-system data sharing that helps explain why this is so, especially within (as opposed to between) organisations. In response, this article reports one of the most detailed empirical examinations undertaken to date of the work involved in repurposing healthcare data for National Clinical Audits. Methods: Fifty-four semi-structured, qualitative interviews were carried out with staff in five English National Health Service hospitals about their audit work, including 20 staff involved substantively with audit data collection. In addition, ethnographic observations took place on wards, in ‘back offices’ and meetings (102 hours). Findings were analysed thematically and synthesised in narratives. Results: Although data were available within hospital applications for secondary use in some audit fields, which could, in theory, have been auto-populated, in practice staff regularly negotiated multiple, unintegrated systems to generate audit records. This work was complex and skilful, and involved cross-checking and double data entry, often using paper forms, to assure data quality and inform quality improvements. Conclusions: If technology is to facilitate the secondary use of healthcare data, the skilled but largely hidden labour of those who collect and recontextualise those data must be recognised. Their detailed understandings of what it takes to produce high quality data in specific contexts should inform the further development of integrated systems within organisations.
1089

The Economic Cost of Privacy in Global Governance : The normative study of Association of Southeast Asian Nations (ASEAN) response to the mass data collection.

Nilsson Punthapong, Sheena January 2024 (has links)
This is a normative study of a regional organisation exercising governance, with global governance as the guiding theory. The Association of Southeast Asian Nations (ASEAN) is one of the largest regional organisations, often compared to the European Union (EU) in terms of efficacy, its non-legally-binding approach, and its non-conformity with Western liberal ideology. This thesis conducts a case study of ASEAN through the lens of interpretivist ontology and epistemology, using critical discourse analysis while accounting for the region's distinct history, experience, and identity. The near-total reliance on technology that runs societal and political infrastructure today has led many states and regions to develop their own privacy law or internet governance. The thesis analyses frameworks, publications, and dialogues among ASEAN Member States as well as their dialogue partners. The texts are placed within the discursive practices through which ASEAN functions as a collective entity in international relations, where governance no longer requires an official body of government. ASEAN's long record of cooperation has always been motivated by economic prosperity. There is a notable and growing concern for privacy, which calls for data protection; ASEAN has shown awareness of this, as well as a potential and gradual shift toward a mindset in which the digital footprint can move from a nascent norm into what other communities might take for granted: a universal right of the general public and a basic obligation of government.
1090

Addressing Fragmentation in ZGC through Custom Allocators : Leveraging a Lean, Mean, Free-List Machine

Sikström, Joel January 2024 (has links)
The Java programming language manages memory automatically through the use of a garbage collector (GC). The Java Virtual Machine provides several GCs tuned for different usage scenarios. One such GC is ZGC. Both ZGC and other GCs utilize bump-pointer allocation, which allocates objects compactly but leads to the creation of unusable memory gaps over time, known as fragmentation. ZGC handles fragmentation through relocation, a process which is costly. This thesis proposes an alternative memory allocation method leveraging free-lists to reduce the need for relocation to manage fragmentation. We design and develop a new allocator tailored for ZGC, based on the TLSF allocator by Masmano et al. Previous research on the customization of allocators shows varying results and does not fully investigate usage in complex environments like a GC. Opportunities for enhancements in performance and memory efficiency are identified and implemented through the exploration of ZGC's operational boundaries. The most significant adaptation is the introduction of a 0-byte header, which leverages information within ZGC to significantly reduce internal fragmentation of the allocator. We evaluate the performance of our adapted allocator and compare it to a reference implementation of TLSF. Results show that the adapted allocator performs on par with the reference implementation for single allocations but is slightly slower for single frees and when applying allocation patterns from real-world programs. The findings of this work suggest that customizing allocators for garbage collection is worth considering and may be useful for future integration.
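For context on the free-list approach named above: TLSF segregates free blocks into buckets addressed by a two-level index computed from the block size. The sketch below shows that generic mapping only (not the thesis's ZGC-adapted variant with its 0-byte header):

```python
SLI_BITS = 4  # log2 of second-level subdivisions per first-level size class

def tlsf_mapping(size: int, sli_bits: int = SLI_BITS):
    """Return the (first-level, second-level) free-list indices for a block size.
    First level is the power-of-two size class; second level splits that class
    into 2**sli_bits linear sub-ranges. Assumes size >= 2**sli_bits; real TLSF
    special-cases small blocks."""
    fl = size.bit_length() - 1                        # floor(log2(size))
    sl = (size >> (fl - sli_bits)) - (1 << sli_bits)  # position within the class
    return fl, sl

# Example: a 1234-byte request maps into class [1024, 2048),
# sub-range index 3 of 16 (covering sizes 1216..1279 bytes).
print(tlsf_mapping(1234))  # -> (10, 3)
```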
