261 |
The use of interpreters in healthcare: Perspectives of individuals, healthcare staff and families. Hadziabdic, Emina. January 2011
This thesis focuses on the use of interpreters in Swedish healthcare. The overall aim was to explore how individuals, healthcare professionals and family members experience and perceive the use of interpreters in healthcare. The study design was explorative and descriptive. The thesis included Serbo-Croatian (Bosnian/Croatian/Serbian)-speaking individuals (n=17), healthcare professionals (n=24), official documents (n=60) and family members (n=10) of individuals using interpreters in healthcare. Data were collected through individual interviews, written descriptions, a review of official documents in the form of incident reports from a single case study, and focus group interviews, and were analyzed using phenomenography, qualitative content analysis and qualitative analysis of focus group interviews. The overall finding from all perspectives was the wish for a qualified interpreter who serves not only as a communication aid but also as a practical and informative guide in healthcare. A qualified interpreter was perceived as someone highly skilled in medical terminology, in Swedish and in the individuals' native language, able to adapt to different dialects, wearing non-provocative and neutral clothes, of the same gender, with a professional attitude, and preferably present in person for face-to-face interaction. Besides being a communication aid, the interpreter was seen as playing an important role in helping individuals find their way to and within the healthcare system, because foreign-born individuals were otherwise unable to understand healthcare information. Another aspect was the need for a well-developed organization with good cooperation between the parties involved in the interpretation situation, such as patients, the interpreter, the interpreter agency, family members and healthcare professionals. In conclusion, the use of an interpreter was determined by individual and situational healthcare factors.
Individualized holistic healthcare can be achieved by offering and using high-quality interpreters and through cooperation within a well-developed interpreter organization.
Keywords: communication, healthcare service, patient-safe quality care, qualitative data collection, qualitative data analysis, users' perceptions/experiences, utilization of interpreters.
|
262 |
Varying data quality and effects in economic analysis and planning. Eklöf, Jan A. January 1992
Economic statistics are often taken as given facts, assumed to describe actual phenomena in society exactly. Many economic series are published in several forms, from preliminary, via revisions, to definitive estimates; preliminary series are issued for a number of central economic processes in order to provide rapid, up-to-date signals. This dissertation focuses on qualitative aspects of available data and on the effects of possible inaccuracy when the data are used for economic modelling, analysis and planning. Four main questions are addressed: How can the quality of data for central economic time series be characterized? What effects may possible inaccuracies in data have when used in econometric modelling? What effects do inaccuracies and errors in data have when models are used for economic analysis and planning? Is it possible to specify a criterion for deciding the cost-effective quality of data to be produced as input for economic policy analysis? The various realizations of economic variables often show considerable systematic as well as stochastic discrepancies for the same quantity. Preliminary series are generally found to be of questionable quality, but still considerably better than simple trend forecasts; compared with a few other industrialized countries, the variability of Swedish economic statistics is nevertheless not extraordinary. Illustrations are presented of the effects of using inaccurate data, especially of combining preliminary, revised and definitive observations in the same model; such inconsistent combinations of various realizations are in fact found in many open sources. Including preliminary series tends to indicate stronger changes in the economy than when definitive observations are used throughout. The study concludes with a section on cost-benefit aspects of economic statistics, and a sketch model for appraising data of variable quality is proposed. / Diss. Stockholm : Handelshögsk.
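The distinction between systematic and stochastic discrepancies between realizations of the same quantity can be made concrete with a small sketch (the figures are invented, and the decomposition into mean revision and revision variance is a standard device, not necessarily the dissertation's own procedure):

```python
# Hypothetical preliminary and definitive estimates of quarterly growth (%).
prelim     = [0.8, 1.1, -0.2, 0.5, 0.9, 0.3]
definitive = [0.6, 1.3, -0.4, 0.7, 1.0, 0.1]

revisions = [d - p for p, d in zip(prelim, definitive)]
n = len(revisions)
mean_rev = sum(revisions) / n                                # systematic discrepancy
var_rev = sum((r - mean_rev) ** 2 for r in revisions) / n    # stochastic discrepancy
print(f"mean revision: {mean_rev:+.3f}, std: {var_rev ** 0.5:.3f}")
```

A nonzero mean revision signals a systematic bias in the preliminary figures, while a large revision variance signals noisy early estimates; both degrade any model fitted to a mixture of preliminary and definitive observations.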
|
263 |
A framework of vision-based detection-tracking surveillance systems for counting vehicles. Kamiya, Keitaro. 13 November 2012
This thesis presents a framework for motor vehicle detection-tracking surveillance systems. Given an optimized object detection template, the feasibility and effectiveness of the methodology are evaluated for vehicle counting applications. The framework implements both a filtering operation for false detections, based on the speed variability within each segment of the traffic state, and an occlusion handling technique that considers the unusual affine transformation of the tracking subspace as well as its highly fluctuating averaged acceleration data. The results report overall performance in terms of the trade-off between true detection rate and false detection rate. The filtering operation was highly successful in removing the majority of non-vehicle elements that do not move like vehicles. The occlusion handling technique also improved the system's performance, contributing counts that would otherwise be lost. For all video samples tested, the proposed framework obtained a high correct counting rate (>93%) while simultaneously minimizing the false count rate. For future research, the author recommends more sophisticated filters for specific sets of conditions as well as a discriminative classifier for detecting different occlusion cases.
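The speed-variability filtering idea can be illustrated with a short sketch (not the thesis's actual implementation; the track format, thresholds, and the coefficient-of-variation criterion are assumptions): a track is kept only if it moves like a vehicle, i.e. fast enough and with smoothly varying speed.

```python
def speeds(track, fps=30.0):
    """Per-frame speeds (pixels/s) from a list of (x, y) centroids (>= 2 points)."""
    return [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 * fps
            for (x1, y1), (x2, y2) in zip(track, track[1:])]

def is_vehicle_like(track, max_cv=0.5, min_mean_speed=5.0, fps=30.0):
    v = speeds(track, fps)
    mean_v = sum(v) / len(v)
    if mean_v < min_mean_speed:       # near-stationary clutter is not a vehicle
        return False
    var = sum((s - mean_v) ** 2 for s in v) / len(v)
    cv = var ** 0.5 / mean_v          # coefficient of variation of speed
    return cv <= max_cv               # vehicles move smoothly; noise jitters
```

A smoothly moving track passes the filter, while a track whose apparent speed swings wildly between frames (typical of spurious detections) is rejected.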
|
264 |
Wavelet-based Data Reduction and Mining for Multiple Functional Data. Jung, Uk. 12 July 2004
Advanced technology such as automatic data acquisition, management, and networking systems has created a tremendous capability for managers to access valuable production information and improve operation quality and efficiency. Signal processing and data mining techniques are more popular than ever in many fields, including intelligent manufacturing. As data sets increase in size, their exploration, manipulation, and analysis become more complicated and resource-consuming. Timely synthesized information, such as functional data, is needed for product design, process troubleshooting, quality/efficiency improvement, and resource allocation decisions. A major obstacle in such intelligent manufacturing systems is that tools for processing the large volume of information coming from numerous stages of manufacturing operations are not available. The underlying theme of this thesis is therefore to reduce the size of data within a mathematically rigorous framework, and to apply existing or new procedures to the reduced-size data for various decision-making purposes.

The thesis first proposes a Wavelet-based Random-effect Model, which can generate multiple functional data signals with wide fluctuations (between-signal variations) in the time domain. The random-effect wavelet atom position in the model has a locally focused impact, which distinguishes it from traditional random-effect models in the biological field. For data-size reduction, in order to deal with heterogeneously selected wavelet coefficients across different single curves, the thesis introduces the newly defined Wavelet Vertical Energy metric of multiple curves and uses it in an efficient data reduction method. The proposed method selects important positions for the whole set of multiple curves by comparing each vertical energy metric with a threshold (the Vertical Energy Threshold, VET), which is chosen optimally via an objective function that balances reconstruction error against the data reduction ratio. Based on the class membership information of each signal, the thesis further proposes the Vertical Group-Wise Threshold method to increase the discriminative capability of the reduced-size data, so that the reduced data set retains salient differences between classes as much as possible. A real-life example (tonnage data) shows that the proposed method is promising.
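A minimal sketch of the vertical-energy idea (the one-level Haar transform, the threshold value, and the selection rule here are illustrative assumptions, not the thesis's definitions): coefficient positions whose squared wavelet coefficients, summed "vertically" across all curves, exceed a threshold are retained for every curve.

```python
def haar_dwt(x):
    """One-level Haar transform of a sequence of even length."""
    avg = [(a + b) / 2 ** 0.5 for a, b in zip(x[::2], x[1::2])]
    dif = [(a - b) / 2 ** 0.5 for a, b in zip(x[::2], x[1::2])]
    return avg + dif

def vertical_energy_positions(curves, vet):
    """Positions kept for ALL curves: vertical energy at position j is the
    sum of squared wavelet coefficients at j across the set of curves."""
    coeffs = [haar_dwt(c) for c in curves]
    n = len(coeffs[0])
    ve = [sum(c[j] ** 2 for c in coeffs) for j in range(n)]
    return [j for j in range(n) if ve[j] >= vet]
```

Because the selection is shared across curves, every reduced curve keeps coefficients at the same positions, which is what makes the reduced set directly comparable between signals.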
|
265 |
Security Schemes for Wireless Sensor Networks with Mobile Sink. Rasheed, Amar Adnan. May 2010
Mobile sinks are vital in many wireless sensor applications for efficient data collection, data querying, and localized sensor reprogramming, and they prolong the lifetime of a sensor network. However, when sensor networks with mobile sinks are deployed in a hostile environment, security becomes a critical issue: the networks become exposed to a variety of malicious attacks. Anti-threat schemes and security services, such as mobile sink authentication and pairwise key establishment, are therefore essential components for the secure operation of such networks.

Because of the sensors' limited resources, designing efficient security schemes with low communication overhead to secure the communication links between sensors and the MS (mobile sink) is not a trivial task. In addition, sink mobility requires a frequent exchange of cryptographic information between the sensors and the MS each time the MS updates its location, which imposes extra communication overhead on the sensors.

In this dissertation, we consider a number of security schemes for WSNs (wireless sensor networks) with an MS. The schemes offer high network resiliency and low communication overhead against node capture, MS replication, and wormhole attacks.

For tolerating node capture, we propose two schemes based on the polynomial pool scheme: the probabilistic generation key pre-distribution scheme combined with the polynomial pool scheme, and the Q-composite generation key scheme combined with the polynomial pool scheme. Both ensure low communication overhead and high resiliency.

Against the MS replication attack, we propose the multiple polynomial pools scheme, which provides much higher resiliency to MS replication than the single polynomial pool approach.

Furthermore, to improve network resiliency against the wormhole attack, two defensive mechanisms were developed according to the type of MS mobility. In the first technique, the MS uses controlled mobility: we investigate the problem of using a single authentication code by the sensor network to verify the source of MS beacons, and then develop a defensive approach that divides the sensor network into grids with different authentication codes. In the second technique, the MS uses random mobility: we explore the use of the different communication channels available in the sensor hardware, combined with the polynomial pool scheme.
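Polynomial pool schemes build on pairwise key establishment from symmetric bivariate polynomials. A minimal single-polynomial sketch (the field size, node IDs, and API below are illustrative assumptions; the dissertation's pool-based schemes distribute shares of many such polynomials): each node i stores the univariate share f(i, y), and nodes i and j independently compute the shared key f(i, j) = f(j, i).

```python
import random

P = 2_147_483_647  # prime field modulus (toy size; real deployments use larger)

def make_symmetric_poly(t, seed=42):
    """Random symmetric (t+1)x(t+1) coefficient matrix: a[i][j] == a[j][i] mod P,
    so that f(x, y) = sum a[i][j] x^i y^j satisfies f(x, y) == f(y, x)."""
    rng = random.Random(seed)
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(P)
    return a

def share(a, node_id):
    """Univariate share g(y) = f(node_id, y), stored as coefficients of y^j."""
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(len(a))) % P
            for j in range(len(a))]

def pairwise_key(my_share, peer_id):
    """Evaluate the stored share at the peer's ID to obtain the pairwise key."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P
```

Symmetry guarantees both ends derive the same key with no message exchange; the scheme tolerates up to t captured nodes before the polynomial can be reconstructed.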
|
266 |
Employee selection: Mechanisms behind practitioners' preference for hiring practices. Langhammer, Kristina. January 2013
Despite the great advances science has made in developing selection decision aids, practitioners generally remain reluctant to adopt them. This phenomenon is considered one of the greatest gaps in industrial, work and organizational psychology today. This thesis takes a psychological approach to practitioners' resistance toward hiring procedures with high predictive validity for work performance. Three specific research questions were examined: two highlighted aspects of self-regulation, and one focused on the agency relation, in order to study outcomes in terms of actual use of hiring procedures and intention to change hiring procedures. The thesis comprises three studies. Questionnaire data are used in two studies (Study I and II) to examine 1) how prototype beliefs and the ability to evaluate the quality of one's own performance relate to the use of selection decision methods, and 2) how behavioral intention to change hiring practice relates to self-efficacy beliefs, causal attribution and past behavior. Data collected through semi-structured interviews are used in Study III to study practitioners' experiences of collaborative contexts in employee selection. Study I found that prototype beliefs and perceptions of task quality ambiguity varied across hiring practices. Study II showed that self-efficacy beliefs, external attributions of success and internal attributions of failure were related to the intention to change hiring practices. Study III highlighted the prevalence of separate self-interests over more general organizational interests in the agentic relation between practitioners. In conclusion, the thesis has implications for theory as well as practice in concluding that consciously steered cognitive mechanisms are important for understanding practitioners' resistance toward highly standardized hiring practices.
/ At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Manuscript. Paper 3: Manuscript.
|
267 |
Experimental investigation of the electron density of RuAl2: Optimizing data collection for single-crystal diffraction experiments. Wedel, Michael. 02 December 2014
The aim of this dissertation was to reconstruct the electron density of RuAl2 from X-ray diffraction data using the multipole model [1], in order to gain insight into the chemical bonding in this substance. In contrast to organic molecules, in an intermetallic compound such as RuAl2 the bonding electrons make up only a small fraction of the total electron count, which pushes the method to its limits. RuAl2 crystallizes in the TiSi2 structure type [2]; the crystal structure can be understood as a stacking of pseudo-hexagonal layers shifted relative to one another.
The first step toward a successful density reconstruction is the synthesis of a suitable crystal. The high synthesis temperatures of over 1500 °C required when working with melts in the ruthenium-aluminium system were avoided by using tin as a flux, which allowed crystals to be grown at temperatures below 1000 °C. Electron-microscopic analysis showed that crystals free of impurities can be obtained in this way, and initial X-ray diffraction experiments showed that the crystals are well ordered and diffract to high resolution. To minimize unwanted effects such as extinction, thermal motion of the atoms, and thermally diffuse scattering, the actual diffraction experiments were carried out on a very small crystal (diameter 15 μm) with short-wavelength synchrotron radiation (λ = 0.41328 Å) at 25 K.
A data set of very high quality was collected and used to refine the structure model. In the process, a very small degree of stacking disorder came to light, attributable to the close relationship between the MoSi2 and TiSi2 structure types. Despite the very small disorder fraction (0.3%), the structure refinement was carried out with the multipole model. The electron density reconstructed from the model was analyzed with respect to its topology: within the pseudo-hexagonal layers both Ru-Al and Al-Al interactions were found, whereas between the layers only Ru-Al bonds occur.
To further improve data collection, a computer program was developed in parallel with the experiments to optimize the data collection strategy of the diffraction experiment. The strategy search is a variant of the travelling salesman problem and thus poses an enormous combinatorial task even for a moderate number of reflections [3, 4]. To find good approximate solutions, the program uses the simulated annealing algorithm [5], which generates candidate solutions by randomly varying the measurement parameters and simulating the resulting data set. The algorithm judges the quality of a solution by a cost function; for strategy optimization, the value of this function is computed from selected indicators of data quality.
Since at this stage of the experiment there is usually no structure model yet, the simulation cannot draw on intensity information, so quality indicators that do not depend on intensities must be used instead. Completeness and redundancy are particularly important here and can be computed from the available information. However, both the simulation and the computation of the cost function can differ substantially between experiments. This was one of the main requirements in the design of the program from the outset: all computations and sub-algorithms are integrated into the software as plugins, making it freely extensible, and the different computations can be combined by the user in many ways.
To verify its suitability for strategy optimization, the first goal was to reproduce already measured data sets with a simulation based on the Ewald construction. Once this succeeded, a measurement strategy was worked out for a concrete problem in structural chemistry: the elucidation of a very small degree of disorder in CeIrIn5 [6], whose detection could be corroborated with the improved diffraction data.
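The simulated-annealing strategy search can be illustrated with a generic sketch (the cost function, move set, and cooling schedule below are assumptions; the actual program optimizes diffractometer-specific measurement parameters against completeness and redundancy indicators). Here the "strategy" is reduced to a visiting order over measurement settings, and the cost is the total travel between them, mirroring the travelling-salesman structure of the problem:

```python
import math
import random

def anneal(points, cost, t0=1.0, cooling=0.999, steps=20000, seed=1):
    """Simulated annealing over visiting orders: propose a random swap of two
    stops, accept improvements always and worsenings with probability
    exp(-delta / temperature), then cool the temperature."""
    rng = random.Random(seed)
    order = list(range(len(points)))
    best = cur = cost(order, points)
    best_order, t = order[:], t0
    for _ in range(steps):
        i, j = rng.sample(range(len(points)), 2)
        order[i], order[j] = order[j], order[i]
        new = cost(order, points)
        if new < cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best:
                best, best_order = new, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # undo the rejected swap
        t *= cooling
    return best_order, best

def travel(order, pts):
    """Toy cost: total Manhattan travel between consecutive settings."""
    return sum(abs(pts[a][0] - pts[b][0]) + abs(pts[a][1] - pts[b][1])
               for a, b in zip(order, order[1:]))
```

In the real program the cost function is a plugin, so `travel` would be replaced by a combination of data-quality indicators such as completeness and redundancy.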
[1] N. K. Hansen and P. Coppens. "Testing aspherical atom refinements on small-molecule data sets". Acta Crystallogr. A 34 (1978), pp. 909–921.
[2] L.-E. Edshammar. "An X-Ray Investigation of Ruthenium-Aluminium Alloys". Acta Chem. Scand. 20 (1966), pp. 427–431.
[3] R. G. Bland and D. F. Shallcross. Large Traveling Salesman Problems Arising from Experiments in X-Ray Crystallography: A Preliminary Report on Computation. Ithaca, New York: Cornell University, 1987.
[4] Z. Dauter. "Data-collection strategies". Acta Crystallogr. D 55 (1999), pp. 1703–1717.
[5] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi. "Optimization by Simulated Annealing". Science 220 (1983), pp. 671–680.
[6] S. Wirth et al. "Structural investigations of CeIrIn5 and CeCoIn5 on macroscopic and atomic length scales". J. Phys. Soc. Jpn. 83 (2014), 061009.
|
268 |
A computational framework for unsupervised analysis of everyday human activities. Hamid, Muhammad Raffay. January 2008
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Aaron Bobick; Committee Member: Charles Isbell; Committee Member: David Hogg; Committee Member: Irfan Essa; Committee Member: James Rehg
|
269 |
Cross-layer design applied to small satellites for data collection. Almonacid Zamora, Vicente. 28 November 2017
With the introduction of the CubeSat standard, the number of small-satellite missions has increased dramatically over the last two decades. Initially developed by universities and research centres for technology validation and academic experiments, these low-cost platforms now enable a variety of advanced, novel applications. In this thesis we are interested in the use of small satellites for global data collection and, more generally, for Internet of Things (IoT) and machine-to-machine (M2M) applications. Since both the space and ground segments are subject to stringent constraints in terms of size and mass, the overall capacity of the communications channel is highly limited, especially that of the uplink, which is a multi-access channel. These systems are also characterised by bursty, short messages, meaning that any protocol overhead may have a significant impact on bandwidth efficiency; hence, a random access approach is usually adopted for the uplink. Facing these challenges requires optimizing the communication system holistically; in particular, a joint design of the physical (PHY) and Medium Access Control (MAC) layers is needed. The main contributions of this thesis concern Time- and Frequency-Asynchronous ALOHA (TFAA), a random access approach adopted in terrestrial ultra-narrowband (UNB) networks. By trading data rate for communication range or transmission power, TFAA is particularly attractive in power-constrained applications such as low-power wide-area networks and M2M over satellite.
First, we evaluate its MAC performance (i.e., its throughput and packet error rate) under three different reception models: the collision channel, the capture channel, and a more detailed model that takes the PHY layer design into account. Then, we study the impact of PHY layer parameters, such as forward error correction (FEC), the pulse shaping filter and the modulation order, on MAC performance, and show that, due to the characteristics of the multiple access interference, significant improvements can be obtained by applying low-rate FEC. To further improve TFAA's performance, we propose Contention Resolution Time- and Frequency-Asynchronous ALOHA (CR-TFAA), a more advanced design in line with recent developments such as Asynchronous Contention Resolution Diversity ALOHA (ACRDA). Under the same set of hypotheses, CR-TFAA provides similar and even better performance than ACRDA, decreasing the packet error rate by at least one order of magnitude. Finally, we study the benefits that can be obtained by trading delay for MAC performance and energy efficiency, using simple techniques such as transmission control and packet-layer erasure coding.
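The collision-channel reception model for asynchronous random access can be illustrated with a small simulation. This is a generic unslotted (pure) ALOHA sketch, not TFAA itself: packets of unit duration succeed only if no other packet overlaps them in time, so the measured throughput should approach the classical S = G·e^(-2G):

```python
import random

def pure_aloha_throughput(g, n_packets=200_000, seed=0):
    """Collision-channel simulation of unslotted ALOHA at offered load g
    (packets per packet duration). A packet starting at time s occupies
    [s, s+1] and succeeds only if no other start falls within (s-1, s+1)."""
    rng = random.Random(seed)
    horizon = n_packets / g
    starts = sorted(rng.uniform(0, horizon) for _ in range(n_packets))
    ok = 0
    for i, s in enumerate(starts):
        hit_left = i > 0 and starts[i - 1] > s - 1
        hit_right = i + 1 < n_packets and starts[i + 1] < s + 1
        if not hit_left and not hit_right:
            ok += 1
    return ok / horizon  # successful packets per packet duration
```

Because only the nearest neighbours in the sorted start list can overlap a unit-length packet, checking those two suffices; at G = 0.5 the simulated throughput sits near the theoretical maximum 1/(2e) ≈ 0.184.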
|
270 |
Discovering and Mitigating Social Data Bias. January 2017
Exabytes of data are created online every day. Nowhere is this deluge of data more apparent than on social media. Naturally, finding ways to leverage this unprecedented source of human information is an active area of research. Social media platforms have become laboratories for conducting experiments about people at scales thought unimaginable only a few years ago.
Researchers and practitioners use social media to extract actionable patterns such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media is seldom representative of the activity of the site itself, and less so of human activity. This means that the results of many studies are limited by the quality of data they collect.
The finding that social media data is biased inspires the main challenge addressed by this thesis. I introduce three sets of methodologies to correct for bias. First, I design methods to deal with data collection bias. I offer a methodology which can find bias within a social media dataset. This methodology works by comparing the collected data with other sources to find bias in a stream. The dissertation also outlines a data collection strategy which minimizes the amount of bias that will appear in a given dataset. It introduces a crawling strategy which mitigates the amount of bias in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset. This directly addresses the concern that the users of a social media site are not representative. Applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
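One simple way to make the "compare the collected data with other sources" step concrete (an illustrative sketch, not the dissertation's methodology) is a Pearson chi-square comparison of the collected sample's category counts, e.g. posts per region or per user type, against a reference distribution from a more trusted source:

```python
def chi_square_stat(sample_counts, reference_props):
    """Pearson chi-square statistic of observed category counts against
    expected proportions; large values suggest the sample is biased
    relative to the reference source."""
    n = sum(sample_counts.values())
    stat = 0.0
    for cat, p in reference_props.items():
        expected = n * p
        observed = sample_counts.get(cat, 0)
        stat += (observed - expected) ** 2 / expected
    return stat
```

A statistic near zero means the stream matches the reference; a large value flags categories that the collection over- or under-samples and that a corrected crawling strategy should rebalance.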
|