  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Effective Randomized Concurrency Testing with Partial Order Methods

Yuan, Xinhao January 2020
Modern software systems are pervasively concurrent in order to utilize parallel hardware and perform asynchronous tasks. Writing correct concurrent programs, however, remains challenging for large, real-world systems. Because the concurrent events of a system can interleave arbitrarily, unexpected interleavings may drive the system into undefined states, resulting in denial of service, performance degradation, inconsistent data, security issues, etc. To detect such concurrency errors, concurrency testing repeatedly explores the interleavings of a system to find those that induce errors. Traditional systematic testing, however, suffers from the intractable number of interleavings arising from the complexity of real-world systems. Moreover, each iteration of systematic testing adjusts the previously explored interleaving with a minimal change that swaps the ordering of two events. Such exploration may waste time in large homogeneous sub-spaces that all lead to the same testing result. Thus, on real-world systems, systematic testing often performs poorly, failing to reveal even simple errors within a limited time budget. Randomized testing, on the other hand, samples interleavings of the system to quickly surface simple errors with substantial probability, but it may likewise explore equivalent interleavings that do not affect the testing results. Such redundancy weakens the probabilistic guarantees and the performance of randomized testing in finding errors. Toward effective concurrency testing, this thesis combines partial order semantics with randomized testing to find errors with strong probabilistic guarantees. First, we propose partial order sampling (POS), a new randomized testing framework that samples interleavings of a concurrent program with a novel partial order method. It effectively and simultaneously explores the orderings of all events of the program, and has a high probability of manifesting any error caused by unexpected interleavings.
We formally prove that our approach has exponentially better probabilistic guarantees for sampling any partial order of the program than state-of-the-art approaches. Our evaluation over 32 known concurrency errors in public benchmarks shows that our framework performed 2.6 times better than state-of-the-art approaches at finding the errors. Second, we describe Morpheus, a new practical concurrency testing tool that applies POS to high-level distributed systems in Erlang. Morpheus leverages dynamic analysis to identify and predict the critical events to reorder during testing, and significantly improves the exploration effectiveness of POS. We performed a case study applying Morpheus to four popular distributed systems in Erlang, including Mnesia, the database system in the standard Erlang distribution, and RabbitMQ, the message broker service. Morpheus found 11 previously unknown errors leading to unexpected crashes, deadlocks, and inconsistent states, demonstrating the effectiveness and practicality of our approach.
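The sampling idea in the abstract, assign each event an independent random priority and always run the highest-priority enabled event, can be illustrated with a toy scheduler. The following is a hedged sketch in that spirit, not the thesis's actual POS algorithm (which additionally reassigns priorities around racing events); `random_priority_run`, `make_rmw`, and the racy counter are invented for illustration:

```python
import random

def random_priority_run(threads, seed=None):
    """Sketch of priority-based randomized interleaving sampling:
    every event gets an independent uniform random priority, and at
    each step the enabled event (the next action of some unfinished
    thread) with the highest priority executes."""
    rng = random.Random(seed)
    prios = [[rng.random() for _ in t] for t in threads]  # one priority per event
    pcs = [0] * len(threads)                              # per-thread program counters
    trace = []
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        enabled = [i for i, t in enumerate(threads) if pcs[i] < len(t)]
        i = max(enabled, key=lambda j: prios[j][pcs[j]])
        threads[i][pcs[i]]()                              # execute the event
        trace.append((i, pcs[i]))
        pcs[i] += 1
    return trace

# Hypothetical racy counter: a non-atomic read-modify-write.
state = {"x": 0}
def make_rmw(state):
    tmp = {}
    def read():  tmp["v"] = state["x"]
    def write(): state["x"] = tmp["v"] + 1
    return [read, write]

# Sampling many random-priority runs is likely to surface both the
# correct outcome (x == 2) and the lost-update bug (x == 1).
outcomes = set()
for seed in range(50):
    state["x"] = 0
    random_priority_run([make_rmw(state), make_rmw(state)], seed=seed)
    outcomes.add(state["x"])
```

Each sample costs one program run, which is why redundancy among equivalent interleavings, the problem POS addresses, matters: samples spent on orderings with the same partial order add no new outcomes.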
12

A transaction execution model for mobile computing environments

Momin, Kaleem A., January 1999
Thesis (M.Sc.), Memorial University of Newfoundland, 2000. / Bibliography: leaves 97-106.
13

Concurrency in modula-2

Sewry, David Andrew 13 March 2013
A concurrent program is one in which a number of processes are considered to be active simultaneously. It is possible to think of a process as a separate sequential program executing independently of other processes, although perhaps communicating with them at desired points. The concurrent program, as a whole, can be executed in one of two ways: i) in a true concurrent manner, with each process executing on a dedicated processor; ii) in a quasi-concurrent manner, where a single processor's time is multiplexed between the processes. There are two motivations for the study of concurrency in programming languages: i) concurrent programming facilities can be exploited in systems where one has more than one processor. As technology improves, machines having multiple processors will proliferate; ii) concurrent programming facilities may allow programs to be structured as independent, but co-operating, processes which can then be implemented on a single processor system. This structure may be more natural to the programmer than the traditional sequential structures. An example is provided by Conway's problem [Ben82]. Clearly, by their very nature, traditional sequential-type languages (Fortran, Basic, Cobol and earlier versions of Pascal) prove inadequate for the purposes of concurrent programming without considerable extension (which some manufacturers have provided, rendering their compilers non-standard-conforming). The general convenience of high-level languages provides strong motivation for their development for real-time programming. Modula-2 [Wir83] is but one of a number of such recently developed languages, designed not only to fulfil a "sequential" role but also to offer facilities for concurrent programming. Developed by Niklaus Wirth in 1979 as a successor to Pascal and Modula, it is intended to serve as a general-purpose systems-implementation language.
This thesis investigates concurrency in Modula-2 and takes the following form: i) an analysis of the concurrent facilities offered; ii) problems and difficulties associated with these facilities; iii) improvements and enhancements, including the feasibility of using Modula-2 to simulate constructs found in other languages, such as the Hoare monitor [Hoa74] and the Ada rendezvous [Uni81]. Each section concludes with an appraisal of the work conducted in that section. The final section consists of a critical assessment of those Modula-2 language constructs and facilities provided for the implementation of concurrency, and a brief look at concurrency in Modula, Modula-2's predecessor. - Introduction.
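Since the thesis examines simulating a Hoare monitor [Hoa74] on top of Modula-2's lower-level facilities, the monitor construct itself is worth a concrete illustration. The sketch below uses Python's threading primitives purely as a stand-in (Modula-2 instead builds such abstractions from coroutine primitives and module-level exclusion); the `BoundedBuffer` class and its names are hypothetical:

```python
import threading

class BoundedBuffer:
    """A monitor-style bounded buffer: one lock provides the monitor's
    mutual exclusion, and condition variables play the role of a Hoare
    monitor's condition queues."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:                    # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()           # block until space exists
            self.items.append(item)
            self.not_empty.notify()            # wake one waiting consumer

    def get(self):
        with self.not_empty:                   # enter the monitor
            while not self.items:
                self.not_empty.wait()          # block until an item exists
            item = self.items.pop(0)
            self.not_full.notify()             # wake one waiting producer
            return item
```

One caveat the thesis's era makes relevant: a true Hoare monitor hands the lock directly to the signalled process, whereas the `while`-loop re-check above reflects the weaker signal-and-continue semantics of most modern libraries.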
14

Computer Multitasking in the Classroom: Training to Attend or Wander?

Rogers, Elizabeth A. 28 August 2019
No description available.
15

Unified Approaches for Multi-Task Vision-Language Interactions

You, Haoxuan January 2024
Vision and Language are two major modalities that humans rely on to perceive the environment and understand the world. Recent advances in Artificial Intelligence (AI) have facilitated the development of a variety of vision-language tasks derived from the diverse multimodal interactions of daily life, such as image captioning, image-text matching, visual question answering (VQA), text-to-image generation, etc. Despite their remarkable performance, most previous state-of-the-art models are specialized for a single vision-language task and lack generalizability across multiple tasks. Additionally, such specialized models complicate algorithm design and introduce redundancy into model deployment when dealing with complex scenes. In this study, we investigate unified approaches capable of solving various vision-language interactions in a multi-task manner. We argue that unified multi-task methods enjoy several potential advantages: (1) a unified framework for multiple tasks reduces the human effort of designing a different model for each task; (2) reusing and sharing parameters across tasks improves efficiency; (3) some tasks may be complementary to others, so multi-tasking can boost performance; (4) they can handle complex tasks that require the joint collaboration of multiple basic tasks, enabling new applications. In the first part of this thesis, we explore unified multi-task models with the goal of sharing and reusing as many parameters as possible between different tasks. We start by unifying several vision-language question-answering tasks, such as visual entailment, outside-knowledge VQA, and visual commonsense reasoning, in a simple iterative divide-and-conquer framework.
Specifically, it iteratively decomposes the original text question into sub-questions, solves each sub-question, and derives the answer to the original question; this uniformly handles reasoning of various types and semantic levels within one framework. In the next work, we take one step further and unify image-to-text generation, text-to-image generation, vision-language understanding, and image-text matching in a single large-scale Transformer-based model. These two works demonstrate the feasibility, effectiveness, and efficiency of sharing parameters across different tasks in a single model. Nevertheless, they still need to switch between tasks and can only conduct one task at a time. In the second part of this thesis, we introduce our efforts toward simultaneous multi-task models that can conduct multiple tasks at the same time with a single model. This has additional advantages: the model can learn to perform different tasks, or combinations of multiple tasks, automatically according to user queries, and the joint interaction of tasks enables new potential applications. We begin by compounding spatial understanding and semantic understanding in a single multimodal Transformer-based model. To enable the model to understand and localize local regions, we propose a hybrid region representation that seamlessly bridges regions with image and text. Coupled with a carefully curated training dataset, our model can perform joint spatial and semantic understanding in the same iteration and empowers a new application: spatial reasoning. Continuing the above project, we further introduce an effective module to encode high-resolution images, and propose a pre-training method that aligns semantics and spatial understanding at high resolution. In addition, we couple Optical Character Recognition (OCR) capability with spatial understanding in the model and study techniques to improve the compatibility of the various tasks.
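The iterative divide-and-conquer loop described in this abstract can be caricatured in a few lines. This is a schematic sketch, not the thesis's actual system: `decompose` and `solve` stand in for learned models, and the toy facts are invented for illustration:

```python
def answer_iteratively(question, decompose, solve, max_steps=5):
    """Schematic divide-and-conquer loop: decompose the question into
    a sub-question, solve it, fold the answer back into the context,
    and repeat until the question is directly answerable."""
    context = []
    for _ in range(max_steps):
        sub = decompose(question, context)
        if sub is None:                      # directly answerable now
            break
        context.append((sub, solve(sub, context)))
    return solve(question, context)

# Toy stand-ins for the learned decomposer/solver (invented data):
facts = {"capital of France?": "Paris", "population of Paris?": "~2.1M"}

def decompose(q, ctx):
    if q == "population of the capital of France?" and not ctx:
        return "capital of France?"
    return None                              # no further decomposition

def solve(q, ctx):
    if q in facts:
        return facts[q]
    if ctx and ctx[0][1] == "Paris":         # substitute the sub-answer
        return facts["population of Paris?"]
    return "unknown"
```

The point of the pattern is that one loop uniformly covers questions of different reasoning depths: a directly answerable question exits immediately, while a composite one accumulates sub-answers in the context first.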
16

Performance characteristics of semantics-based concurrency control protocols.

January 1995
by Keith, Hang-kwong Mak. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 122-127). / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background --- p.4 / Chapter 2.1 --- Read/Write Model --- p.4 / Chapter 2.2 --- Abstract Data Type Model --- p.5 / Chapter 2.3 --- Overview of Semantics-Based Concurrency Control Protocols --- p.7 / Chapter 2.4 --- Concurrency Hierarchy --- p.9 / Chapter 2.5 --- Control Flow of the Strict Two Phase Locking Protocol --- p.11 / Chapter 2.5.1 --- Flow of an Operation --- p.12 / Chapter 2.5.2 --- Response Time of a Transaction --- p.13 / Chapter 2.5.3 --- Factors Affecting the Response Time of a Transaction --- p.14 / Chapter 3 --- Semantics-Based Concurrency Control Protocols --- p.16 / Chapter 3.1 --- Strict Two Phase Locking --- p.16 / Chapter 3.2 --- Conflict Relations --- p.17 / Chapter 3.2.1 --- Commutativity (COMM) --- p.17 / Chapter 3.2.2 --- Forward and Right Backward Commutativity --- p.19 / Chapter 3.2.3 --- Exploiting Context-Specific Information --- p.21 / Chapter 3.2.4 --- Relaxing Correctness Criterion by Allowing Bounded Inconsistency --- p.26 / Chapter 4 --- Related Work --- p.32 / Chapter 4.1 --- Exploiting Transaction Semantics --- p.32 / Chapter 4.2 --- Exploiting Object Semantics --- p.34 / Chapter 4.3 --- Sacrificing Consistency --- p.35 / Chapter 4.4 --- Other Approaches --- p.37 / Chapter 5 --- Performance Study (Testbed Approach) --- p.39 / Chapter 5.1 --- System Model --- p.39 / Chapter 5.1.1 --- Main Memory Database --- p.39 / Chapter 5.1.2 --- System Configuration --- p.40 / Chapter 5.1.3 --- Execution of Operations --- p.41 / Chapter 5.1.4 --- Recovery --- p.42 / Chapter 5.2 --- Parameter Settings and Performance Metrics --- p.43 / Chapter 6 --- Performance Results and Analysis (Testbed Approach) --- p.46 / Chapter 6.1 --- Read/Write Model vs.
Abstract Data Type Model --- p.46 / Chapter 6.2 --- Using Context-Specific Information --- p.52 / Chapter 6.3 --- Role of Conflict Ratio --- p.55 / Chapter 6.4 --- Relaxing the Correctness Criterion --- p.58 / Chapter 6.4.1 --- Overhead and Performance Gain --- p.58 / Chapter 6.4.2 --- Range Queries using Bounded Inconsistency --- p.63 / Chapter 7 --- Performance Study (Simulation Approach) --- p.69 / Chapter 7.1 --- Simulation Model --- p.70 / Chapter 7.1.1 --- Logical Queueing Model --- p.70 / Chapter 7.1.2 --- Physical Queueing Model --- p.71 / Chapter 7.2 --- Experiment Information --- p.74 / Chapter 7.2.1 --- Parameter Settings --- p.74 / Chapter 7.2.2 --- Performance Metrics --- p.75 / Chapter 8 --- Performance Results and Analysis (Simulation Approach) --- p.76 / Chapter 8.1 --- Relaxing Correctness Criterion of Serial Executions --- p.77 / Chapter 8.1.1 --- Impact of Resource Contention --- p.77 / Chapter 8.1.2 --- Impact of Infinite Resources --- p.80 / Chapter 8.1.3 --- Impact of Limited Resources --- p.87 / Chapter 8.1.4 --- Impact of Multiple Resources --- p.89 / Chapter 8.1.5 --- Impact of Transaction Type --- p.95 / Chapter 8.1.6 --- Impact of Concurrency Control Overhead --- p.96 / Chapter 8.2 --- Exploiting Context-Specific Information --- p.98 / Chapter 8.2.1 --- Impact of Limited Resource --- p.98 / Chapter 8.2.2 --- Impact of Infinite and Multiple Resources --- p.101 / Chapter 8.2.3 --- Impact of Transaction Length --- p.106 / Chapter 8.2.4 --- Impact of Buffer Size --- p.108 / Chapter 8.2.5 --- Impact of Concurrency Control Overhead --- p.110 / Chapter 8.3 --- Summary and Discussion --- p.113 / Chapter 8.3.1 --- Summary of Results --- p.113 / Chapter 8.3.2 --- Relaxing Correctness Criterion vs. 
Exploiting Context-Specific Information --- p.114 / Chapter 9 --- Conclusions --- p.116 / Bibliography --- p.122 / Chapter A --- Commutativity Tables for Queue Objects --- p.128 / Chapter B --- Specification of a Queue Object --- p.129 / Chapter C --- Commutativity Tables with Bounded Inconsistency for Queue Objects --- p.132 / Chapter D --- Some Implementation Issues --- p.134 / Chapter D.1 --- Important Data Structures --- p.134 / Chapter D.2 --- Conflict Checking --- p.136 / Chapter D.3 --- Deadlock Detection --- p.137 / Chapter E --- Simulation Results --- p.139 / Chapter E.1 --- Impact of Infinite Resources (Bounded Inconsistency) --- p.140 / Chapter E.2 --- Impact of Multiple Resource (Bounded Inconsistency) --- p.141 / Chapter E.3 --- Impact of Transaction Type (Bounded Inconsistency) --- p.142 / Chapter E.4 --- Impact of Concurrency Control Overhead (Bounded Inconsistency) --- p.144 / Chapter E.4.1 --- Infinite Resources --- p.144 / Chapter E.4.2 --- Limited Resource --- p.146 / Chapter E.5 --- Impact of Resource Levels (Exploiting Context-Specific Information) --- p.149 / Chapter E.6 --- Impact of Buffer Size (Exploiting Context-Specific Information) --- p.150 / Chapter E.7 --- Impact of Concurrency Control Overhead (Exploiting Context-Specific Information) --- p.155 / Chapter E.7.1 --- Impact of Infinite Resources --- p.155 / Chapter E.7.2 --- Impact of Limited Resources --- p.157 / Chapter E.7.3 --- Impact of Transaction Length --- p.160 / Chapter E.7.4 --- Role of Conflict Ratio --- p.162
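The commutativity-based conflict relations this table of contents refers to (Chapter 3.2 and the appendix commutativity tables for queue objects) can be illustrated with a toy lock manager that grants an operation only if it commutes with every operation held by other transactions. This is a hedged sketch: the particular relation used here (enqueue and dequeue commute on a non-empty queue, while like operations conflict) is a common textbook example, not the thesis's actual tables:

```python
def commutes(op1, op2, queue_len):
    """Illustrative commutativity relation for a FIFO queue: enq/deq
    commute when the queue is non-empty (the dequeued item is fixed
    regardless of the new enqueue); two enqs or two deqs conflict
    because they compete for positions/items."""
    if frozenset([op1, op2]) == frozenset(["enq", "deq"]):
        return queue_len > 0
    return False

class SemanticLockManager:
    """Grant an operation iff it commutes with every operation
    currently held by a different transaction; otherwise the caller
    must wait or abort, as under strict two-phase locking."""
    def __init__(self):
        self.held = []                      # (txn, op) pairs
    def acquire(self, txn, op, queue_len):
        for t, o in self.held:
            if t != txn and not commutes(op, o, queue_len):
                return False                # semantic conflict
        self.held.append((txn, op))
        return True
    def release(self, txn):
        self.held = [(t, o) for t, o in self.held if t != txn]
```

Compared with the read/write model, where any two writes conflict, the abstract-data-type view above admits strictly more concurrency, which is the gap the thesis's performance studies quantify.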
17

Detecting and Explaining Emotional Reactions in Personal Narrative

Turcan, Elsbeth January 2024
It is no longer any secret that people worldwide are struggling with their mental health, in terms of diagnostic disorders as well as non-diagnostic measures like perceived stress. Barriers to receiving professional mental healthcare are significant, and even in locations where the availability of such care is increasing, our infrastructures are not equipped to find people the support they need. Meanwhile, in a highly-connected digital world, many people turn to outlets like social media to express themselves and their struggles and interact with like-minded others. This setting---where human experts are overwhelmed and human patients are acutely in need---is one in which we believe artificial intelligence (AI) and natural language processing (NLP) systems have great potential to do good. At the same time, we must acknowledge the limitations of our models and strive to deploy them responsibly alongside human experts, such that their logic and mistakes are transparent. We argue that models that make and explain their predictions in ways guided by domain-specific research will be more understandable to humans, who can benefit from the models' statistical knowledge but use their own judgment to mitigate the models' mistakes. In this thesis, we leverage domain expertise in the form of psychology research to develop models for two categories of emotional tasks: identifying emotional reactions in text and explaining the causes of emotional reactions. The first half of the thesis covers our work on detecting emotional reactions, where we focus on a particular, understudied type of emotional reaction: psychological distress. We present our original dataset, Dreaddit, gathered for this problem from the social media website Reddit, as well as some baseline analysis and benchmarking that shows psychological distress detection is a challenging problem. 
Drawing on literature that connects particular emotions to the experience of distress, we then develop several multitask models that incorporate basic emotion detection, and quantitatively change the way our distress models make their predictions to make them more readily understandable. Then, the second half of the thesis expands our scope to consider not only the emotional reaction being experienced, but also its cause. We treat this cause identification problem first as a span extraction problem in news headlines, where we employ multitask learning (jointly with basic emotion classification) and commonsense reasoning; and then as a free-form generation task in response to a long-form Reddit post, where we leverage the capabilities of large language models (LLMs) and their distilled student models. Here, as well, multitask learning with basic emotion detection is beneficial to cause identification in both settings. Our contributions in this thesis are fourfold. First, we produce a dataset for psychological distress detection, as well as emotion-infused models that incorporate emotion detection for this task. Second, we present multitask and commonsense-infused models for joint emotion detection and emotion cause extraction, showing increased performance on both tasks. Third, we produce a dataset for the new problem of emotion-focused explanation, as well as characterization of the abilities of distilled generation models for this problem. Finally, we take an overarching approach to these problems inspired by psychology theory that incorporates expert knowledge into our models where possible, enhancing explainability and performance.
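The multitask pattern this abstract relies on, a shared encoder feeding per-task heads (e.g. distress detection alongside basic emotion classification), can be sketched abstractly. This is a schematic illustration of hard parameter sharing, not the thesis's architecture; the dimensions and the NumPy forward pass are invented:

```python
import numpy as np

class MultitaskModel:
    """Schematic hard parameter sharing: one shared encoder feeds two
    task heads, a binary (sigmoid) distress head and a multi-class
    (softmax) emotion head, so gradients from both tasks would shape
    the shared representation."""
    def __init__(self, in_dim, hidden, n_emotions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_shared = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W_distress = rng.normal(0.0, 0.1, (hidden, 1))
        self.W_emotion = rng.normal(0.0, 0.1, (hidden, n_emotions))

    def forward(self, x):
        h = np.tanh(x @ self.W_shared)                       # shared representation
        distress = 1.0 / (1.0 + np.exp(-(h @ self.W_distress)))  # sigmoid
        logits = h @ self.W_emotion
        emotion = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax
        return distress, emotion
```

The design choice the sketch embodies is the one argued for in the thesis: because the emotion head and the distress head share `W_shared`, signal from basic emotion labels can inform distress predictions, and the emotion probabilities offer a human-readable account of what the distress head saw.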
