1

Statistika a její využití (zneužití) při tvorbě informací / Statistics and its Utilization (Abuse) in Gaining Information

Kafková, Pavla January 2011
This diploma thesis examines the possibility of influencing public opinion through suitable modification of information, or of conclusions drawn from statistical data. The thesis is not divided in the classical way into a theoretical and a practical part but into five smaller, interconnected chapters; there is no separate practical part because examples are included in each chapter, and a chapter of practical examples concludes the thesis. The examples are commonly available on the Internet, and most people pay no attention to them.
2

Data measures that characterise classification problems

Van der Walt, Christiaan Maarten 29 August 2008
We have a wide range of classifiers today that are employed in numerous applications, from credit scoring to speech processing, with great technical and commercial success. No classifier, however, will outperform all other classifiers on all classification tasks, and classifier selection is still mainly a process of trial and error. The optimal classifier for a classification task is determined by the characteristics of the data set employed; understanding the relationship between data characteristics and classifier performance is therefore crucial to classifier selection. Both empirical and theoretical approaches have been employed in the literature to define this relationship; none of them, however, has been very successful in accurately predicting or explaining classifier performance on real-world data. We use theoretical properties of classifiers to identify data characteristics that influence classifier performance; these data properties guide us in the development of measures that describe the relationship between data characteristics and classifier performance. We employ these data measures on real-world and artificial data to construct a meta-classification system. The purpose of this meta-classifier is two-fold: (1) to predict the classification performance of real-world classification tasks, and (2) to explain these predictions in order to gain insight into the properties of real-world data.
We show that these data measures can be employed successfully to predict the classification performance of real-world data sets; the predictions are accurate in some instances, though unpredictable behaviour remains in others. We also illustrate that these data measures can give valuable insight into the properties and data structures of real-world data, insight that is especially valuable for high-dimensional classification problems. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
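The meta-classification idea described above — computing measures on a data set and using them to predict which kind of classifier will perform well — can be sketched as follows. The specific measures and the decision rules here are illustrative assumptions, not the dissertation's actual measures or meta-classifier:

```python
import statistics

def data_measures(X, y):
    """Compute simple descriptive measures of a classification data set.

    These measures (sparsity, class imbalance, feature spread) are
    illustrative stand-ins for the theoretically motivated measures
    developed in the dissertation.
    """
    n, d = len(X), len(X[0])
    counts = [y.count(c) for c in set(y)]
    return {
        "samples_per_dim": n / d,                      # data-sparsity proxy
        "class_imbalance": max(counts) / min(counts),  # 1.0 = balanced
        "mean_feature_std": statistics.mean(
            statistics.pstdev(col) for col in zip(*X)),
    }

def meta_predict(measures):
    """Toy meta-classifier: map data measures to a suggested model family."""
    if measures["samples_per_dim"] < 2:
        return "linear"          # few samples per dimension: prefer simple models
    if measures["class_imbalance"] > 3:
        return "cost-sensitive"  # skewed classes: account for misclassification cost
    return "nonlinear"

X = [[0.1, 1.0], [0.2, 0.8], [1.1, 0.2], [0.9, 0.1]]
y = [0, 0, 1, 1]
print(meta_predict(data_measures(X, y)))  # → nonlinear
```

In the dissertation the mapping from measures to predicted performance is itself learned from many real-world and artificial data sets; the hand-written rules above only stand in for that learned component.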
3

Komplexní řízení kvality dat a informací / Towards Complex Data and Information Quality Management

Pejčoch, David January 2010
This work deals with the issue of Data and Information Quality. It critically assesses the current state of knowledge across the various methods used for Data Quality Assessment and Data (Information) Quality improvement, and proposes new principles where this assessment revealed gaps. The main idea of this work is the concept of Data and Information Quality Management across the entire universe of data. This universe represents all data sources with which the respective subject comes into contact and which are used in its existing or planned processes. For all these data sources, this approach considers setting a consistent set of rules, policies, and principles with respect to the current and potential benefits of these resources, while also taking into account the potential risks of their use. A red thread running through the text is the importance of additional knowledge in the process of Data (Information) Quality Management. The introduction of a knowledge base oriented to support Data (Information) Quality Management (QKB) is therefore one of the fundamental principles of the set of best practices proposed by the author.
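The thesis's core idea — one consistent set of quality rules applied across every data source in the subject's data universe — can be sketched minimally. The rule names and record fields below are hypothetical, chosen only to illustrate the pattern:

```python
def check_record(record, rules):
    """Apply a shared set of data-quality rules to one record and
    return the names of the rules it violates."""
    return [name for name, rule in rules.items() if not rule(record)]

# A single rule set applied uniformly across all data sources, in the
# spirit of the thesis's universe-wide approach (fields are hypothetical):
rules = {
    "email_present": lambda r: bool(r.get("email")),
    "age_in_range": lambda r: 0 <= r.get("age", -1) <= 120,
}

print(check_record({"email": "", "age": 35}, rules))  # → ['email_present']
```

A knowledge base such as the proposed QKB would sit behind `rules`, supplying and versioning the rule definitions instead of hard-coding them per source.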
4

Big Data usage in the Maritime industry : A Qualitative Study for the use of Port State Control (PSC) inspection data by shipping professionals

Ampatzidis, Dimitrios January 2021
During their calls on ports, vessels may be inspected by the local Port State Control (PSC) authorities regarding their implementation of International Maritime Organization guidelines for safety and security. This qualitative study focuses on how shipping professionals understand and use Big Data in the PSC inspection databases, what characteristics they believe these data should have, what value they attach to them, and how they use them to support decision-making within their organizations. The study collected the perspectives of shipping professionals through interviews and analyzed their statements with Thematic Analysis to reach its outcome. Many researchers have discussed Big Data characteristics and the value an organization or a researcher can derive from Big Data and Analytics; however, there is no universally accepted theory regarding Big Data characteristics and their value to database users. The research concluded that Big Data from PSC inspection procedures provides valid and helpful information that broadens professionals' understanding of inspection control and safety needs; through this, they can upscale their internal operations and decision-making procedures, as long as the data are characterized by volume, velocity, veracity, and complexity.
5

Big Maritime Data: The promises and perils of the Automatic Identification System : Shipowners and operators’ perceptions

Kouvaras, Andreas January 2022
The term Big Data has been gaining importance at both the academic and the business level. Information technology plays a critical role in shipping, since there is high demand for fast transfer of information and communication between the parties to a shipping contract. The Automatic Identification System (AIS) was developed to improve maritime safety by tracking vessels and exchanging information between ships. The purpose of this master's thesis was to a) investigate which business decisions the Automatic Identification System helps shipowners and operators (i.e., users) make, b) identify the benefits and perils arising from its use, and c) investigate possible improvements based on the users' perceptions. The thesis is a qualitative study using the interpretivism paradigm. Data were collected through semi-structured interviews with a total of 6 participants meeting the following criteria: a) holding a position in a technical department, as a DPA, or as a shipowner, b) participating in business decisions, c) belonging to a shipping company that owns a fleet, and d) dealing with AIS data. The Thematic Analysis led to twenty-six codes, twelve categories, and five concepts. Empirical findings showed that AIS data contributes mostly to strategic business decisions. Participants are interested in using AIS data to measure the efficiency of their fleet and ports, estimate fuel consumption, reduce costs, protect the environment and people's health, analyze the trade market, predict the time of arrival and the optimal route and speed, maintain the highest security levels, and reduce the inaccuracies caused by manual input of some AIS attributes. Finally, participants mentioned some AIS challenges, including technological improvements (e.g., transponders, antennas) as well as the operation of autonomous vessels.
Finally, this master’s thesis contributes to prescriptive and descriptive theory, helping stakeholders reach new decisions and helping researchers and developers advance their products.
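One of the uses participants named — predicting time of arrival from AIS position and speed-over-ground reports — can be sketched in a few lines. This is a naive great-circle estimate under assumed positions, not a method from the thesis; real AIS-based predictors account for routes, weather, and traffic:

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two positions."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(a))  # Earth radius ≈ 3440 nm

def eta_hours(position, port, speed_knots):
    """Naive ETA: great-circle distance divided by current speed over ground."""
    return haversine_nm(*position, *port) / speed_knots

# Hypothetical scenario: a vessel south of Piraeus making 14 knots
hours = eta_hours((36.0, 23.0), (37.94, 23.64), speed_knots=14.0)
print(round(hours, 1))
```

Even this toy version shows why participants flagged manual-input inaccuracies: destination and draught fields are typed by crew, so an ETA pipeline must validate them before use.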
6

INTEGRATING CONNECTED VEHICLE DATA FOR OPERATIONAL DECISION MAKING

Rahul Suryakant Sakhare 26 April 2023
Advancements in technology have propelled the availability of richer and more frequent information about traffic conditions, as well as about external factors that impact traffic such as weather and emergency response. Most newer vehicles are equipped with sensors that transmit their data back to the original equipment manufacturer (OEM) at near real-time fidelity. A growing number of such connected vehicles (CV) and the advent of third-party data collectors aggregating across OEMs have made big traffic data commercially available. Agencies maintaining and managing surface transportation are thus presented with opportunities to leverage such big data for efficiency gains. The focus of this dissertation is enhancing the use of CV data, and of applications derived from fusing it with other datasets, to extract meaningful information that aids agencies in data-driven decision making to improve network-wide mobility and safety performance.

One of the primary concerns with CV data for agencies is data sampling, particularly during low-volume overnight hours. An evaluation of over 3 billion CV records from May 2022 in Indiana showed an overall CV penetration rate of 6.3% on interstates and 5.3% on non-interstate roadways. Fusion of CV traffic speeds with precipitation intensity from NOAA's High-Resolution Rapid-Refresh (HRRR) data over 42 unique rainy days showed a reduction in average traffic speed of approximately 8.4% under conditions classified as very heavy rain compared to no rain. The aggregate and disaggregate analyses performed in this study enable agencies and automobile manufacturers to answer the often-asked question of what rain intensity it takes to begin impacting traffic speeds. Proactive measures such as advance warnings that improve motorists' situational awareness and enhance roadway safety should be considered during very heavy rain, wind events, and low-daylight conditions.

Scalable methodologies for systematically analyzing hard-braking and speed data were also developed. This study demonstrated, both quantitatively and qualitatively, how CV data provides an opportunity for near real-time assessment of work zone operations using metrics such as congestion, location-based speed profiles, and hard braking. The availability of data across states and the ease of scaling make the methodology implementable on a state or national basis for tracking any highway work zone with little to no infrastructure investment. These techniques offer a nationwide opportunity to assess current guidelines and to inform updates to design procedures that improve the consistency and safety of construction work zones.

CV data was also used to evaluate the impact of queue warning trucks sending digital alerts. Based on 370 hours of queueing with queue trucks present and 58 hours of queueing without them, hard-braking events were found to decrease by approximately 80% when queue warning trucks alerted motorists to impending queues, thus improving work zone safety.

The ubiquity of CV data providers creates emerging opportunities to identify and measure traffic shock waves, and their forming or recovery speeds, anywhere across a roadway network. A methodology for identifying different shock waves was presented; among the case studies, typical backward-forming shock wave speeds ranged from 1.75 to 11.76 mph, whereas backward-recovery shock wave speeds were between 5.78 and 16.54 mph. The significance of this is illustrated with a case study of a secondary crash, which suggested that accelerating the clearance by 9 minutes could have prevented the secondary crash at the back of the queue. The ability to identify and measure shock wave speeds can be used by stakeholders for traffic management decision-making that takes a holistic perspective on both on-scene risk and the risk at the back of the queue. Near real-time estimation of shock waves from CV data can also feed travel time prediction models and serve as input to navigation systems that identify alternate route choices ahead of a driver's time of arrival.

The overall contribution of this thesis is the development of scalable methodologies and evaluation techniques to extract valuable information from CV data that aids agencies in operational decision making.
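The shock wave measurement described above reduces, in its simplest form, to tracking how fast the back of a queue moves between successive observations. The sketch below makes that idea concrete with hypothetical queue-tail observations; the dissertation's actual methodology works from ubiquitous CV speed records across the network, not two hand-picked points:

```python
def shockwave_speed_mph(tail_obs):
    """Estimate shock wave speed from successive observations of the back
    of a queue, each a (time_s, milepost_mi) pair, as derivable from CV
    speed data. Negative speed = wave propagating upstream (backward forming).
    """
    (t0, x0), (t1, x1) = tail_obs[0], tail_obs[-1]
    return (x1 - x0) / ((t1 - t0) / 3600.0)  # miles per hour

# Hypothetical observations: queue tail moves 0.5 mi upstream in 6 minutes
obs = [(0, 42.0), (360, 41.5)]
print(shockwave_speed_mph(obs))  # → -5.0 (backward forming at 5 mph)
```

A 5 mph backward-forming wave falls inside the 1.75–11.76 mph range the case studies report; fed into a travel time model, such an estimate indicates how quickly the back of the queue, and its secondary-crash risk, is advancing toward approaching drivers.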
