51

Integration of petrographic and petrophysical log analyses to characterize and assess reservoir quality of the Lower Cretaceous sediments in the Orange Basin, offshore South Africa

Mugivhi, Murendeni Hadley January 2017 (has links)
Magister Scientiae - MSc / Commercial hydrocarbon production relies on porosity and permeability, which define the storage capacity and flow capacity of a reservoir. Petrographic and petrophysical log analyses have proven to be among the most powerful approaches for assessing these parameters, and the approach is well suited to determining the reservoir quality of uncored reservoirs through regression techniques. Against this background, a need arises to integrate petrographic and petrophysical well data from the study area; this project thus provides first-hand information about reservoir quality for hydrocarbon producibility. Five wells (A-J1, A-D1, A-H1, A-K1 and K-A2) were studied within the Orange Basin, offshore South Africa, and thirty-five (35) reservoirs were defined on the gamma ray log where sandstone thickness is greater than 10 m. Eighty-three (83) sandstone samples were gathered from these reservoirs for petrographic analyses within Hauterivian to Cenomanian sequences. Thin section analyses of these sediments revealed pore restriction by quartz and feldspar overgrowths and pore filling by siderite, pyrite, kaolinite, illite, chlorite and calcite. The occurrence of these diagenetic minerals has destroyed intergranular pore space to the point of almost no point-count porosity in well K-A2, whereas in wells A-J1, A-D1, A-H1 and A-K1 porosity increases in some zones due to secondary porosity. Volume of clay, porosity, permeability, water saturation, storage capacity, flow capacity and hydrocarbon volume were calculated within the pay sand intervals. The average volume of clay ranged from 6% to 70.5%, the estimated average effective porosity from 10% to 20%, and the average water saturation from 21.7% to 53.4%. Permeability ranged from a negligible value to 411.05 mD, storage capacity from 6.56 scf to 2228.17 scf, flow capacity from 1.70 mD-ft to 31615.82 mD-ft, and hydrocarbon volume from 2397.7 cubic feet to 6215.4 cubic feet.
Good to very good reservoir quality was observed in some zones of wells A-J1, A-K1 and A-H1, whereas wells A-D1 and K-A2 presented poor quality.
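The petrophysical estimates listed in this abstract (clay volume, effective porosity, water saturation) are commonly derived from standard log-analysis equations; the thesis's exact equations and parameters are not given here, so the sketch below uses the textbook linear gamma-ray index, density-log porosity, and Archie's equation, with entirely hypothetical log readings.

```python
# Sketch of standard petrophysical log calculations of the kind described
# above. All parameter values are illustrative, not taken from the thesis.

def clay_volume(gr, gr_clean, gr_clay):
    """Linear gamma-ray index as a first-pass clay volume estimate."""
    return (gr - gr_clean) / (gr_clay - gr_clean)

def density_porosity(rho_b, rho_matrix=2.65, rho_fluid=1.0):
    """Total porosity from the bulk density log (sandstone matrix)."""
    return (rho_matrix - rho_b) / (rho_matrix - rho_fluid)

def archie_sw(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Hypothetical log readings for one reservoir zone:
vcl = clay_volume(gr=65.0, gr_clean=20.0, gr_clay=120.0)  # -> 0.45
phi = density_porosity(rho_b=2.40)                         # -> ~0.152
sw = archie_sw(rt=20.0, rw=0.05, phi=phi)                  # -> ~0.33
print(round(vcl, 2), round(phi, 3), round(sw, 2))
```

Regression against core measurements, as mentioned in the abstract, would then calibrate such estimates for the uncored intervals.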
52

Integrated approach to solving reservoir problems and evaluations using sequence stratigraphy, geological structures and diagenesis in Orange Basin, South Africa

Adekola, Solomon Adeniyi January 2010 (has links)
Philosophiae Doctor - PhD / Sandstone and shale samples were selected within the systems tracts for laboratory analyses. The sidewall and core samples were subjected to petrographic thin section analysis and mineralogical analyses, including X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and stable carbon and oxygen isotope geochemistry, to determine diagenetic alteration at and after deposition in the basin. The shale samples were subjected to Rock-Eval pyrolysis and accelerated solvent extraction (ASE) prior to gas chromatographic (GC) and gas chromatographic-mass spectrometric (GC-MS) analyses of the rock extracts, in order to determine the provenance, type and thermal maturity of organic matter present in sediments of the Orange Basin. The results revealed a complex diagenetic history of sandstones in this basin, which includes compaction, cementation/micritization, dissolution, silicification/overgrowth of quartz, and fracturing. The Eh-pH analysis shows that the cements in the part of the basin under investigation were precipitated under weakly acidic to slightly alkaline conditions. The δ18O isotope values range from -1.648 to 10.054 ‰, -1.574 to 13.134 ‰, and -2.644 to 16.180 ‰ in the LST, TST, and HST, respectively, while the δ13C isotope values range from -25.667 to -12.44 ‰, -27.862 to -6.954 ‰ and -27.407 to -19.935 ‰, respectively. The plot of δ18O versus δ13C shows that the sediments were deposited in shallow marine temperate conditions. / South Africa
53

Effects of Journal Writing on Thinking Skills of High School Geometry Students

Linn, Mary McMahon 01 January 1987 (has links)
The purpose of the project was to determine the effects of journal writing on the thinking skills of high school geometry students. The research supports the idea that writing can enhance a student's metacognitive ability. The results show that the journals served effectively in various capacities. Each student became actively involved in his or her own learning process. Writing forced the students to synthesize information, and they became aware of what they did and did not know. They recognized their individual learning styles and strengths and began to take advantage of those strengths. The journals also served as a diagnostic tool for the instructor, opened lines of communication between teacher and student, and personalized the learning environment. The results of the project suggest that this type of journal keeping would be effective in all disciplines, but it is especially recommended that it be implemented throughout a mathematics department.
54

Using Event logs and Rapid Ethnographic Data to Mine Clinical Pathways

January 2020 (has links)
abstract: Background: Process mining (PM) using event log files is gaining popularity in healthcare to investigate clinical pathways, but it poses many unique challenges. Clinical pathways (CPs) are often complex and unstructured, which results in spaghetti-like models. Moreover, the log files collected from the electronic health record (EHR) often contain noisy and incomplete data. Objective: Building on the traditional process mining technique of using event logs generated by an EHR, observational video data from rapid ethnography (RE) were combined to model, interpret, simplify and validate the perioperative (PeriOp) CPs. Method: The data collection and analysis pipeline consisted of the following steps: (1) Obtain RE data, (2) Obtain EHR event logs, (3) Generate CP from RE data, (4) Identify EHR interfaces and functionalities, (5) Analyze EHR functionalities to identify missing events, (6) Clean and preprocess event logs to remove noise, (7) Use PM to compute CP time metrics, (8) Further remove noise by removing outliers, (9) Mine CP from event logs and (10) Compare CPs resulting from RE and PM. Results: Four provider interviews, 1,917,059 event log entries and 877 minutes of video ethnography recording EHR interactions were collected. When mapping event logs to EHR functionalities, the intraoperative (IntraOp) event logs were more complete (45%) than the preoperative (35%) and postoperative (21.5%) event logs. After removing the noise (496 outliers) and calculating the duration of the PeriOp CP, the median was 189 minutes and the standard deviation was 291 minutes. Finally, RE data were analyzed to help identify the most clinically relevant event logs and simplify the spaghetti-like CPs resulting from PM. Conclusion: The study demonstrated the use of RE to help overcome the challenges of automatic discovery of CPs.
It also demonstrated that RE data could be used to identify relevant clinical tasks and incomplete data, remove noise (outliers), simplify CPs and validate mined CPs. / Dissertation/Thesis / Masters Thesis Computer Science 2020
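Steps (6)-(8) of the pipeline above (clean the log, compute case durations, drop outliers before reporting time metrics) can be sketched in a few lines. The record fields, the toy timestamps, and the 24-hour cutoff used as an outlier rule are illustrative assumptions, not the study's actual data or thresholds.

```python
# Minimal sketch: per-case duration metrics from a (case, activity,
# timestamp) event log, with crude outlier removal.
from datetime import datetime
from statistics import median

# Hypothetical records as they might be exported from an EHR.
events = [
    ("c1", "check_in",  "2020-01-06 07:55"),
    ("c1", "incision",  "2020-01-06 09:10"),
    ("c1", "discharge", "2020-01-06 11:20"),
    ("c2", "check_in",  "2020-01-06 08:00"),
    ("c2", "discharge", "2020-01-06 10:45"),
    ("c3", "check_in",  "2020-01-06 08:30"),
    ("c3", "discharge", "2020-01-07 23:30"),  # implausibly long: outlier
]

def parse_ts(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Duration of each case = last event time minus first event time.
cases = {}
for cid, _act, ts in events:
    t = parse_ts(ts)
    lo, hi = cases.get(cid, (t, t))
    cases[cid] = (min(lo, t), max(hi, t))
durations = [(hi - lo).total_seconds() / 60 for lo, hi in cases.values()]

# Illustrative outlier rule: drop cases lasting longer than one day.
kept = [d for d in durations if d <= 24 * 60]
print(sorted(kept), round(median(kept), 1))
```

The real study would apply this kind of computation to nearly two million log entries and report the median and standard deviation over the surviving cases.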
55

A Comparison of the Pittsburgh Sleep Quality Index, a New Sleep Questionnaire, and Sleep Diaries

Sethi, Kevin J. 08 1900 (has links)
Self-report retrospective estimates of sleep behaviors are not as accurate as prospective estimates from sleep diaries, but are more practical for epidemiological studies. Therefore, it is important to evaluate the validity of retrospective measures and improve upon them. The current study compared sleep diaries to two self-report retrospective measures of sleep, the commonly used Pittsburgh Sleep Quality Index (PSQI) and a newly developed sleep questionnaire (SQ), which assessed weekday and weekend sleep separately. It was hypothesized that the new measure would be more accurate than the PSQI because it accounts for variability in sleep throughout the week. The relative accuracy of the PSQI and SQ in obtaining estimates of total sleep time (TST), sleep efficiency (SE), and sleep onset latency (SOL) was examined by comparing their mean differences from, and correlations with, estimates obtained by the sleep diaries. Correlations of the PSQI and SQ with the sleep diaries were moderate, with the SQ having significantly stronger correlations on the parameters of TST, SE, and sleep quality ratings. The SQ also had significantly smaller mean differences from sleep diaries on SOL and SE. The overall pattern of results indicated that the SQ performs better than the PSQI when compared to sleep diaries.
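The two comparison metrics described above, the mean difference from diary estimates and the correlation with them, can be sketched as follows. The total-sleep-time values are invented for illustration; the study's real scores are not reproduced.

```python
# Sketch: compare a retrospective questionnaire's total sleep time (TST)
# against prospective sleep-diary TST via mean difference and Pearson r.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

diary = [420, 390, 450, 400, 380]   # hypothetical diary TST, minutes
sq    = [430, 395, 445, 410, 370]   # hypothetical questionnaire TST

mean_diff = mean(q - d for q, d in zip(sq, diary))
r = pearson_r(diary, sq)
print(round(mean_diff, 1), round(r, 2))
```

A smaller absolute mean difference and a stronger correlation with the diaries are what marked the SQ as the better-performing retrospective measure.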
56

Goal-oriented Process Mining

Ghasemi, Mahdi 05 January 2022 (has links)
Context: Process mining is an approach that exploits event logs to discover real processes executed in organizations, enabling them to (re)design and improve process models. Goal modelling, on the other hand, is a requirements engineering (RE) approach mainly used to analyze what-if situations and support decision making. Problem: Common problems with process mining include the complexity of discovered “spaghetti” processes and a lack of goal-process alignment. Current process mining practices mainly focus on activities and do not benefit from considering stakeholder goals and requirements to manage complexity and alignment. The critical artifact that process mining practices rely on is the event log. However, using a raw version of real-life event logs will typically result in process models being too complex, unstructured, difficult to understand and, above all, not aligned with stakeholders’ goals. Method: Involving goal-related factors can augment the precision and interpretability of mined models and help discover better opportunities to satisfy stakeholders. This thesis proposes three algorithms for goal-oriented process enhancement and discovery (GoPED) that show synergetic effects achievable by combining process mining and goal-oriented modelling. With GoPED, good historical experiences will be found within the event log to be used as a basis for inferring good process models, and bad experiences will be found to discover models to avoid. The goodness is defined in terms of alignment with regards to three categories of goal-related criteria: • Case perspective: satisfaction of individual cases (e.g., patient, customer) in terms of some goals; • Goal perspective: overall satisfaction of some goals (e.g., to decrease waiting time) rather than individual cases; and • Organization perspective: a comprehensive satisfaction level for all goals over all cases.
GoPED first adds goal-related attributes to conventional event characteristics (case identifier, activities, and timestamps), selects a subset of cases with respect to goal-related criteria, and finally discovers a process model from that subset. For each criterion, an algorithm is developed to enable selecting the best subset of cases where the criterion holds. The resulting process models are expected to reproduce the desired level of satisfaction. The three GoPED algorithms were implemented in a Python tool. In addition, three other tools were implemented to complete a line of actions whose input is a raw event log and output is a subset of the event log selected with respect to the goal-related criteria. GoPED was used on real healthcare event logs (an illustrative example and a case study) to discover processes, and the performance of the tools was also assessed. Results: The performance of the GoPED toolset for various sizes and configurations of event logs was assessed through extensive experiments. The results show that the three GoPED algorithms are practical and scalable for application to event logs with realistic sizes and types of configurations. The GoPED method was also applied to the discovery of processes from the raw event log of the trajectories of patients with sepsis in a Dutch hospital, from their registration in the emergency room until their discharge. Although the raw data does not explicitly include goal-related information, some reasonable goals were derived from the data and a related research paper in consultation with a healthcare expert. The method was applied, and the resulting models were i) substantially simpler than the model discovered from the whole event log, ii) free from the drawbacks that using the whole event log causes, and iii) aligned with the predefined goals.
Conclusion: GoPED demonstrates the benefits of exploiting goal modelling capabilities to enhance event logs and select a subset of events to discover goal-aligned and simplified process models. The resulting process model can also be compared to a model discovered from the original event log to reveal new insights about the ability of different forms of process models to satisfy the stakeholders’ goals. Learning from good behaviours that satisfy goals and detecting bad behaviours that hurt them is an opportunity to redesign models, so they are simpler, better aligned with goals, and free from drawbacks that using the whole event log may cause.
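The case-perspective selection described above (enrich each case with a goal-related attribute, keep only the cases that satisfy the goal, and hand the surviving sub-log to a discovery algorithm) might be sketched roughly as follows. The goal attribute (total waiting time), the threshold, and the toy log are assumptions for illustration, not the thesis's actual criteria or data.

```python
# Rough sketch of goal-based case selection before process discovery.
# Event log as {case_id: list of (activity, wait_minutes)}; the waiting
# times serve as the goal-related attribute added to the log.
log = {
    "p1": [("register", 5), ("triage", 20), ("treat", 10)],
    "p2": [("register", 40), ("triage", 90), ("treat", 15)],
    "p3": [("register", 3), ("treat", 12)],
}

GOAL_MAX_WAIT = 60  # hypothetical goal: total waiting time under an hour

def satisfies_goal(trace):
    """Case perspective: does this individual case satisfy the goal?"""
    return sum(wait for _act, wait in trace) <= GOAL_MAX_WAIT

# Select the sub-log of "good" cases; a discovery algorithm would then
# mine a model from this subset instead of the whole log.
sub_log = {cid: trace for cid, trace in log.items() if satisfies_goal(trace)}
print(sorted(sub_log))
```

Inverting the predicate would select the "bad" experiences instead, yielding a model of behaviour to avoid, as the conclusion suggests.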
57

Comparison of adversary emulation tools for reproducing behavior in cyber attacks / Jämförelse av verktyg för motståndaremulering vid återskapande av beteenden i cyberattacker

Elgh, Joakim January 2022 (has links)
As cyber criminals can find many different ways of gaining unauthorized access to systems without being detected, it is of high importance for organizations to monitor what is happening inside their systems. Adversary emulation is a way to mimic the behavior of advanced adversaries in cyber security, which can be used to test an organization's capability to detect malicious behavior within its systems. The emulated behavior can be based on what has been observed in real cyber attacks; open-source knowledge bases such as MITRE ATT&CK collect this kind of intelligence. Many organizations have in recent years developed tools to simplify emulating the behavior of known adversaries. These tools are referred to as adversary emulation tools in this thesis. The purpose of this thesis was to evaluate how noisy different adversary emulation tools are. This was done through measurements of the amount of event logs generated by Sysmon when performing emulations against a Windows system. The goal was to find out which tool was the least noisy. The adversary emulation tools included in this thesis were Invoke-AtomicRedTeam, CALDERA, ATTPwn and Red Team Automation. To make sure the correlation between the adversary emulation tools and the generated event logs could be identified, a controlled experiment was selected as the method for the study. Five experiments were designed, each including one emulation scenario executed by the adversary emulation tools included in that experiment. After each emulation, event logs were collected, filtered, and measured for use in the comparison. Three experiments compared Invoke-AtomicRedTeam, CALDERA, and a manual emulation. The results of these first three experiments indicated that Invoke-AtomicRedTeam was the noisiest, followed by CALDERA, and the manual emulation was the least noisy.
On average, the manual emulation generated 83.9% fewer logs than Invoke-AtomicRedTeam and 78.4% fewer logs than CALDERA in experiments 1-3. A fourth experiment compared Red Team Automation and Invoke-AtomicRedTeam, where Red Team Automation was the least noisy tool. The final, fifth experiment compared ATTPwn and CALDERA, and the results indicated that these were similarly noisy but in different ways. It was also concluded that a main difference between the adversary emulation tools was the number of techniques available, which could limit the ability to emulate the behavior of real adversaries. However, as the emulation tools were implemented in different ways, this thesis could be one starting point for future development of silent adversary emulation tools, or assist in selecting an existing adversary emulation tool.
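The measurement step described above (collect event logs after each emulation, filter them, and count per tool) can be sketched as below. The timestamps, emulation windows and counts are invented; a real experiment would first export and parse the Sysmon event channel (e.g. from EVTX/XML) before counting.

```python
# Sketch: count Sysmon events falling inside each tool's emulation window.
from datetime import datetime

def ts(s):
    return datetime.strptime(s, "%H:%M:%S")

# Hypothetical (timestamp, event_id) records from the Sysmon channel
# (e.g. ID 1 = process creation, 3 = network connect, 11 = file create).
events = [(ts("10:00:05"), 1), (ts("10:00:07"), 3), (ts("10:01:30"), 11),
          (ts("10:05:02"), 1), (ts("10:05:40"), 3)]

# Emulation time windows recorded during the experiment, per tool.
windows = {"Invoke-AtomicRedTeam": (ts("10:00:00"), ts("10:02:00")),
           "CALDERA": (ts("10:05:00"), ts("10:06:00"))}

counts = {tool: sum(1 for t, _eid in events if start <= t <= end)
          for tool, (start, end) in windows.items()}
print(counts)
```

Comparing such counts across tools, after filtering out background noise unrelated to the emulation, is essentially what the thesis's noisiness metric amounts to.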
58

The Kaskaskia-Absaroka Boundary in the Subsurface of Athens County, Ohio

Stobart, Ryan Patrick January 2019 (has links)
No description available.
59

Evaluating machine learning strategies for classification of large-scale Kubernetes cluster logs

Sarika, Pawan January 2022 (has links)
Kubernetes is a free, open-source container orchestration system for deploying and managing Docker containers that host microservices. Its cluster logs are extremely helpful in determining the root cause of a failure. However, as systems become more complex, locating failures becomes more difficult and time-consuming. This study aims to identify the classification algorithms that accurately classify the given log data while requiring fewer computational resources. Because the data is quite large, we begin with expert-based feature selection to reduce the data size. Following that, TF-IDF feature extraction is performed, and finally, we compare five classification algorithms (SVM, KNN, random forest, gradient boosting and MLP) using several metrics. The results show that random forest produces good accuracy while requiring fewer computational resources than the other algorithms.
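The TF-IDF step can be illustrated with a small pure-Python sketch on made-up log lines. The study itself presumably used a library implementation; the smoothed idf shown here mirrors a common convention rather than the thesis's exact setup, and the toy messages are invented.

```python
# Minimal TF-IDF over toy Kubernetes-style log messages.
import math

docs = [
    "pod failed to start imagepullbackoff",
    "pod started successfully",
    "node not ready pod evicted",
]

def tf_idf(docs):
    n = len(docs)
    tokenized = [d.split() for d in docs]
    # Document frequency of each term.
    df = {}
    for toks in tokenized:
        for w in set(toks):
            df[w] = df.get(w, 0) + 1
    # Smoothed idf, as used by common library implementations.
    idf = {w: math.log((1 + n) / (1 + c)) + 1 for w, c in df.items()}
    # Term frequency (normalized counts) weighted by idf, per document.
    return [{w: toks.count(w) / len(toks) * idf[w] for w in set(toks)}
            for toks in tokenized]

features = tf_idf(docs)
# "pod" appears in every document, so it carries a lower weight than the
# failure-specific term "failed".
print(features[0]["pod"] < features[0]["failed"])
```

These per-document weight vectors are what the five classifiers would then be trained on, after the expert-based feature selection has trimmed the raw log fields.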
60

Detection Likelihood Maps for Wilderness Search and Rescue: Assisting Search by Utilizing Searcher GPS Track Logs

Roscheck, Michael Thomas 03 July 2012 (has links) (PDF)
Every year there are numerous cases of individuals becoming lost in remote wilderness environments. Principles of search theory have become a foundation for developing more efficient and successful search and rescue methods. Measurements can be taken that describe how easy a search object is to detect. These estimates allow the calculation of the probability of detection, i.e. the probability that an object would have been detected had it been in the area. This value only provides information about the search area as a whole; it does not provide details about which portions were searched more thoroughly than others. Ground searchers often carry portable GPS devices, and their resulting GPS track logs have recently been used to fill in part of this knowledge gap. We created a system that provides a detection likelihood map estimating the probability that each point in a search area was seen well enough to detect the search object if it was there. This map will be used to aid ground searchers as they search an assigned area, providing real-time feedback on what has been "seen." The maps will also assist incident commanders as they assess previous searches and plan future ones by providing more detail than is available by viewing GPS track logs.
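The core idea, accumulating a per-cell detection likelihood from GPS track fixes, might be sketched as below. The grid size, sweep radius, uniform per-pass detection probability, and track coordinates are placeholders for illustration, not the thesis's actual sensor model.

```python
# Sketch: build a detection likelihood map over a grid from GPS fixes.
import math

GRID = 5          # 5x5 cells
CELL = 10.0       # metres per cell
SWEEP_R = 12.0    # effective detection radius, metres (hypothetical)
P_SEEN = 0.4      # chance one nearby pass detects the object (hypothetical)

track = [(5.0, 5.0), (15.0, 5.0), (25.0, 5.0)]  # searcher GPS fixes

# p_missed[row][col] = probability the cell centre was never detected.
p_missed = [[1.0] * GRID for _ in range(GRID)]
for gx, gy in track:
    for i in range(GRID):
        for j in range(GRID):
            cx, cy = (i + 0.5) * CELL, (j + 0.5) * CELL
            if math.hypot(cx - gx, cy - gy) <= SWEEP_R:
                # Each independent nearby pass reduces the miss probability.
                p_missed[j][i] *= (1.0 - P_SEEN)

likelihood = [[round(1.0 - m, 3) for m in row] for row in p_missed]
print(likelihood[0][:3])  # cells along the searched strip
```

Cells covered by overlapping passes accumulate a higher likelihood, which is exactly the within-area detail that a single area-wide probability of detection cannot provide.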
