161 |
TREATMENT OF DATA WITH MISSING ELEMENTS IN PROCESS MODELLING Rapur, Niharika 02 September 2003 (has links)
No description available.
|
162 |
SensAnalysis: A Big Data Platform for Vibration-Sensor Data Analysis Kumar, Abhinav 26 June 2019 (has links)
The Goodwin Hall building on the Virginia Tech campus is the most instrumented building for vibration monitoring. It houses 225 hard-wired accelerometers which record vibrations arising from internal as well as external activities. The recorded vibration data can be used to develop real-time applications for monitoring the health of the building or detecting human activity in the building. However, the lack of infrastructure to handle the massive scale of the data, and the steep learning curve of the tools required to store and process it, are major deterrents to researchers performing their experiments. Additionally, researchers want to explore the data to determine the type of experiments they can perform. This work addresses these problems by providing a system to store and process the data using existing big data technologies. The system simplifies the process of big data analysis by supporting code re-usability and multiple programming languages. The effectiveness of the system was demonstrated by four case studies. Additionally, three visualizations were developed to help researchers with initial data exploration. / Master of Science / The Goodwin Hall building on the Virginia Tech campus is an example of a ‘smart building.’ It uses sensors to record the response of the building to various internal and external activities. The recorded data can be used by algorithms to facilitate understanding of the properties of the building or to detect human activity. Accordingly, researchers in the Virginia Tech Smart Infrastructure Lab (VTSIL) run experiments using a part of the complete data. Ideally, they want to run their experiments continuously as new data is collected. However, the massive scale of the data makes it difficult to process new data as soon as it arrives and to make it available immediately to the researchers. The technologies that can handle data at this scale have a steep learning curve, and starting to use them requires much time and effort. This project involved building a system to handle these challenges so that researchers can focus on their core area of research. The system provides visualizations depicting various properties of the data to help researchers explore that data before running an experiment. The effectiveness of this work was demonstrated using four case studies. These case studies used actual experiments conducted by VTSIL researchers in the past. The first three case studies help in understanding the properties of the building, whereas the final case study deals with detecting and locating human footsteps on one of the floors in real time.
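As a minimal sketch of the kind of batch analysis such a platform supports, the snippet below computes a per-sensor RMS vibration level from accelerometer samples in Python; the CSV layout (columns sensor_id, timestamp, acceleration) and the file name are illustrative assumptions, not the actual Goodwin Hall schema.

    # Sketch only: per-sensor RMS vibration level from accelerometer samples.
    # Column names and file name are assumptions for illustration.
    import pandas as pd
    import numpy as np

    def rms_per_sensor(csv_path: str) -> pd.Series:
        """Return the root-mean-square acceleration for each sensor."""
        df = pd.read_csv(csv_path, parse_dates=["timestamp"])
        return df.groupby("sensor_id")["acceleration"].apply(
            lambda a: np.sqrt(np.mean(np.square(a)))
        )

    if __name__ == "__main__":
        print(rms_per_sensor("goodwin_hall_sample.csv").head())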
|
163 |
The iLog methodology for fostering valid and reliable Big Thick Data Busso, Matteo 29 April 2024 (has links)
Nowadays, the apparent promise of Big Data is that of being able to understand in real time people's behavior in their daily lives. However, as big as these data are, many useful variables describing a person's context (e.g., where she is, with whom she is, what she is doing, and her feelings and emotions) are still unavailable. Therefore, people are, at best, thinly described. One solution is to collect Big Thick Data via blending techniques, combining sensor data sources with high-quality ethnographic data, to generate a dense representation of the person's context. As attractive as the proposal is, the approach is difficult to integrate into research paradigms dealing with Big Data, given the high cost of data collection and integration and the expertise needed to manage such data. Starting from a quantified approach to Big Thick Data, based on the notion of situational context, this thesis proposes a methodology to design, collect, and prepare reliable and valid quantified Big Thick Data for the purpose of reuse. Furthermore, the methodology is supported by a set of services to foster its replicability. The methodology has been applied in 4 case studies involving many domain experts and 10,000+ participants from 10 countries. The diverse applications of the methodology and the reuse of the data for multiple applications demonstrate its internal validity and reliability.
|
164 |
Driving Innovation through Big Open Linked Data (BOLD): Exploring Antecedents using Interpretive Structural Modelling Dwivedi, Y.K., Janssen, M., Slade, E.L., Rana, Nripendra P., Weerakkody, Vishanth J.P., Millard, J., Hidders, J., Snijders, D. July 2016 (has links)
Yes / Innovation is vital to find new solutions to problems, increase quality, and improve profitability. Big open linked data (BOLD) is a fledgling and rapidly evolving field that creates new opportunities for innovation. However, none of the existing literature has yet considered the interrelationships between antecedents of innovation through BOLD. This research contributes to knowledge building by utilising interpretive structural modelling to organise nineteen factors, identified by experts in the field, that are linked to innovation using BOLD. The findings show that almost all the variables fall within the linkage cluster, thus having high driving and dependence powers and demonstrating the volatility of the process. It was also found that technical infrastructure, data quality, and external pressure form the fundamental foundations for innovation through BOLD. Deriving a framework to encourage and manage innovation through BOLD offers important theoretical and practical contributions.
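As a minimal sketch of the MICMAC step behind the reported driving and dependence powers, the snippet below uses a toy four-factor reachability matrix rather than the study's nineteen factors: driving power is the row sum and dependence power the column sum of the final reachability matrix, and factors with both powers above the midpoint fall in the linkage cluster.

    # Sketch only: driving/dependence powers and MICMAC clusters from a
    # toy reachability matrix (not the matrix used in the study).
    import numpy as np

    reachability = np.array([
        [1, 1, 1, 0],
        [0, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 0, 1, 1],
    ])

    driving = reachability.sum(axis=1)     # how many factors each factor reaches
    dependence = reachability.sum(axis=0)  # how many factors reach each factor

    mid = reachability.shape[0] / 2
    for i, (drv, dep) in enumerate(zip(driving, dependence)):
        if drv > mid and dep > mid:
            cluster = "linkage"
        elif drv > mid:
            cluster = "driver"
        elif dep > mid:
            cluster = "dependent"
        else:
            cluster = "autonomous"
        print(f"factor {i + 1}: driving={drv}, dependence={dep}, cluster={cluster}")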
|
165 |
A note on intelligent exploration of semantic data Thakker, Dhaval, Schwabe, D., Garcia, D., Kozaki, K., Brambilla, M., Dimitrova, V. 15 July 2019 (has links)
Yes / Welcome to this special issue of the Semantic Web journal (SWJ). The special issue compiles three technical contributions that significantly advance the state of the art in the exploration of semantic data using semantic web techniques and technologies.
|
166 |
Big data in predictive toxicology / Big Data in Predictive Toxicology Neagu, Daniel, Richarz, A-N. 15 January 2020 (has links)
No / The rate at which toxicological data is generated is continually increasing, and the volume of data generated is growing dramatically. This is due in part to advances in software solutions and cheminformatics approaches which increase the availability of open data from chemical, biological, toxicological, and high-throughput screening resources. However, the amplified pace and capacity of data generation achieved by these novel techniques present challenges for organising and analysing the data output.
Big Data in Predictive Toxicology discusses these challenges as well as the opportunities of new techniques encountered in data science. It addresses the nature of toxicological big data, their storage, analysis and interpretation. It also details how these data can be applied in toxicity prediction, modelling and risk assessment.
|
167 |
Komplexní řízení kvality dat a informací / Towards Complex Data and Information Quality Management Pejčoch, David January 2010 (has links)
This work deals with the issue of Data and Information Quality. It critically assesses the current state of knowledge within the various methods used for Data Quality Assessment and Data (Information) Quality improvement, and it proposes new principles where this critical assessment revealed gaps. The main idea of this work is the concept of Data and Information Quality Management across the entire universe of data. This universe represents all data sources which the respective subject comes into contact with and which are used within its existing or planned processes. For all these data sources, this approach considers setting a consistent set of rules, policies, and principles with respect to the current and potential benefits of these resources, while also taking into account the potential risks of their use. An imaginary red thread that runs through the text is the importance of additional knowledge within the process of Data (Information) Quality Management. The introduction of a knowledge base oriented to support Data (Information) Quality Management (QKB) is therefore one of the fundamental principles proposed by the author.
|
168 |
Big Data Governance / Big Data Governance Blahová, Leontýna January 2016 (has links)
This master's thesis is about Big Data Governance and about the software that is used for this purpose. Because Big Data represents both a huge opportunity and a risk, I wanted to map products that can easily be used for Data Quality and Big Data Governance in one platform. This thesis is not only at the level of theoretical knowledge, but also evaluates five key products (from my point of view). I defined requirements for every kind of domain and then set up the weights and points. The main objective is to evaluate software capabilities and compare them.
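A minimal sketch of this kind of weighted scoring, with hypothetical requirement names, weights, and points rather than the values used in the thesis: each product's total score is the sum of weight times awarded points over all requirements.

    # Sketch only: weighted-sum product scoring with placeholder values.
    weights = {"data_quality": 0.4, "governance": 0.35, "usability": 0.25}

    products = {
        "Product A": {"data_quality": 8, "governance": 6, "usability": 7},
        "Product B": {"data_quality": 7, "governance": 9, "usability": 5},
    }

    for name, points in products.items():
        score = sum(weights[req] * pts for req, pts in points.items())
        print(f"{name}: {score:.2f}")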
|
169 |
DATA COMPRESSION STATISTICS AND IMPLICATIONS Horan, Sheila October 1999 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Bandwidth is a precious commodity. In order to make the best use of what is available, better modulation schemes need to be developed, or less data needs to be sent. This paper will investigate the option of sending less data via data compression. The structure and the entropy of the data determine how much lossless compression can be obtained for a given set of data. This paper shows the data structure and entropy for several actual telemetry data sets and the resulting lossless compression obtainable using data compression techniques.
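A minimal sketch of the relationship the paper examines, using a synthetic byte stream rather than the telemetry sets analysed in the paper: the order-0 Shannon entropy (in bits per byte) gives a first estimate of compressibility, while structure in the data lets a real compressor such as zlib do better still.

    # Sketch only: order-0 entropy vs. achieved zlib compression on synthetic data.
    import math
    import zlib
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        """Order-0 Shannon entropy of a byte stream, in bits per byte."""
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Highly structured synthetic sample: 16 byte values repeating in a fixed cycle.
    data = bytes([i % 16 for i in range(10_000)])
    print(f"order-0 entropy: {entropy_bits_per_byte(data):.2f} bits/byte (max 8)")
    # zlib exploits the repetition as well as the symbol statistics, so the
    # achieved ratio can be far better than the entropy figure alone suggests.
    print(f"zlib compressed size ratio: {len(zlib.compress(data)) / len(data):.3f}")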
|
170 |
Merging of Diverse Encrypted PCM Streams Duffy, Harold A. October 1996 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / The emergence of encrypted PCM as a standard within DOD makes possible the correction of time skews between diverse data sources. Time alignment of data streams can be accomplished before decryption and so is independent of specific format. Data quality assessment in order to do a best-source selection remains problematic, but workable.
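A minimal sketch of format-independent time alignment, assuming two copies of the same encrypted bit stream offset by an unknown skew; the synthetic streams and the exhaustive search range are illustrative assumptions, not the paper's implementation.

    # Sketch only: estimate the bit skew between two copies of an encrypted
    # PCM stream by sliding one against the other and counting agreeing bits.
    import numpy as np

    def estimate_skew(ref: np.ndarray, other: np.ndarray, max_skew: int) -> int:
        """Estimate the bit skew of `other` relative to `ref` by exhaustive search."""
        best_skew, best_matches = 0, -1
        for skew in range(-max_skew, max_skew + 1):
            # Undo the candidate skew and count agreeing bits.
            matches = int(np.sum(ref == np.roll(other, -skew)))
            if matches > best_matches:
                best_skew, best_matches = skew, matches
        return best_skew

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 2, size=5_000)   # encrypted bits look like random bits
    other = np.roll(ref, 37)               # simulate a 37-bit time skew
    print("estimated skew:", estimate_skew(ref, other, max_skew=100), "bits")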
|