  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Efficient Isolation Enabled Role-Based Access Control for Database Systems

Helal, Mohammad Rahat January 2017 (has links)
No description available.
152

Hardware encryption of AES algorithm on Android platform

Joshi, Yogesh 08 October 2012 (has links)
No description available.
153

Scheduling Memory Transactions in Distributed Systems

Kim, Junwhan 15 October 2013 (has links)
Distributed transactional memory (DTM) is an emerging, alternative concurrency control model that promises to alleviate the difficulties of lock-based distributed synchronization. In DTM, transactional conflicts are traditionally resolved by a contention manager. A complementary approach for handling conflicts is through a transactional scheduler, which orders transactional requests to avoid or minimize conflicts. We present a suite of transactional schedulers: Bi-interval, Commutative Requests First (CRF), Reactive Transactional Scheduler (RTS), Dependency-Aware Transactional Scheduler (DATS), Scheduling-based Parallel Nesting (SPN), Cluster-based Transactional Scheduler (CTS), and Locality-aware Transactional Scheduler (LTS). The schedulers consider Herlihy and Sun's dataflow execution model, where transactions are immobile and objects are migrated to invoking transactions, relying on directory-based cache-coherence protocols to locate and move objects. Within this execution model, the proposed schedulers target different DTM models. Bi-interval considers the single object copy DTM model, and categorizes concurrent requests into read and write intervals to maximize the concurrency of read transactions. This allows an object to be simultaneously sent to read transactions, improving transactional makespan. We show that Bi-interval improves the makespan competitive ratio of DTM without such a scheduler to O(log(N)) in the worst case and O(log(N - k)) in the average case, for N nodes and k read transactions. Our implementation reveals that Bi-interval enhances transactional throughput over the no-scheduler case by as much as 1.71x, on average. CRF considers multi-versioned DTM. Traditional multi-versioned TM models use multiple object versions to guarantee commits of read transactions, but limit concurrency of write transactions.
CRF relies on the notion of commutative transactions, i.e., those that ensure consistency of the shared data-set even when they are validated and committed concurrently. CRF detects conflicts between commutative and non-commutative write transactions and then schedules them according to the execution state, enhancing the concurrency of write transactions. Our implementation shows that transactional throughput is improved by up to 5x over a state-of-the-art competitor (DecentSTM). RTS and DATS consider transactional nesting in DTM, and focus on the closed and open nesting models, respectively. RTS determines whether a conflicting outer transaction must be aborted or enqueued according to the level of contention. If a transaction is enqueued, its closed-nested transactions do not have to retrieve objects again, resulting in reduced communication delays. DATS's goal is to boost the throughput of open-nested transactions by reducing the overhead of running expensive compensating actions and acquiring/releasing abstract locks when the outer transaction aborts. The contribution of DATS is twofold: it allows commutable outer transactions to be validated concurrently, and it allows non-commutable outer transactions -- depending on their inner transactions -- to be committed before others without dependencies. Our implementations reveal that RTS and DATS improve throughput (over the no-scheduler case) by as much as 1.88x and 2.2x, respectively. SPN considers parallel nested transactions in DTM. The idea of parallel nesting is to execute inner transactions that access different objects concurrently, and to execute inner transactions that access the same objects serially, increasing performance. However, the parallel nesting model may be ineffective if all inner transactions access the same object, due to the additional overheads needed to identify both types of inner transactions.
SPN avoids this overhead and allows inner transactions to request objects and to execute them in parallel. Implementations reveal that SPN outperforms non-parallel nesting (i.e., closed nesting) by up to 3.5x and 4.5x on a micro-benchmark (bank) and the TPC-C transactional benchmark, respectively. CTS considers the replicated DTM model: object replicas are distributed across clusters of nodes, where clusters are determined based on inter-node distance, to maximize locality and fault-tolerance, and to minimize memory usage and communication overhead. CTS enqueues transactions that are aborted due to early validation over clusters and assigns their backoff times, reducing communication overhead. Implementation reveals that CTS improves throughput over competitor replicated DTM solutions including GenRSTM and DecentSTM by as much as 1.64x, on average. LTS considers the genuine partial replicated DTM model. In this model, LTS exploits locality by: 1) employing a transaction scheduler, which enables/disables object ownership changes depending on workload fluctuations, and 2) splitting hot-spot objects into multiple replicas for reducing contention. Our implementation reveals that LTS outperforms state-of-the-art competitors (Score and CTS) by up to 2.6x on micro-benchmarks (Linked List and Skip List) and by up to 2.2x on TPC-C. / Ph. D.
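The Bi-interval idea of grouping pending requests into a read interval, served concurrently, followed by serialized writes can be sketched as follows. This is a simplified illustration, not the thesis's implementation; the `Request` type and function names are invented for this sketch, and the real scheduler also reasons about node distances and abort costs:

```python
from dataclasses import dataclass

@dataclass
class Request:
    txn_id: int
    is_read: bool

def bi_interval_order(requests):
    """Order pending requests into a read interval followed by a write
    interval, so all read transactions can share the object copy at once."""
    reads = [r for r in requests if r.is_read]
    writes = [r for r in requests if not r.is_read]
    # Reads first: the object copy can be multicast to every reader
    # simultaneously; writers are then served one at a time.
    return [reads] + [[w] for w in writes]

batches = bi_interval_order([
    Request(1, True), Request(2, False), Request(3, True), Request(4, False),
])
# batches[0] holds both read requests; each write gets its own slot.
```

Serving all readers in one batch is what improves the makespan of read transactions in the single-object-copy model.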
154

An Examination of Elementary Learners' Transactions with Diverse Children's Books

Tackett, Mary Elizabeth 24 June 2016 (has links)
This study was designed to explore the transactional relationship between young learners and diverse texts. Students' perceptions toward difference are shaped by prior, lived experiences, and books provide students with virtual experiences of diversity, which can lead to transformative possibilities. This study explored: (1) How can children's picture books about autism be used to create transformative opportunities in an elementary classroom, and (2) What types of responses do primary students have when transacting with children's picture books about autism? Through the use of a formative experiment methodology aligned with Rosenblatt's Transactional Theory of the Literary Work (1978), interventions involving (a) a teacher read aloud, (b) student journal writing, and (c) class discussion allowed second grade students to transact both aesthetically and efferently with diverse texts about autism. Examination of student responses was a qualitative, iterative process that utilized the Constant Comparative method (Strauss and Corbin, 1998), and intervention data were triangulated with researcher field notes and pre- and post-intervention student interviews. Analysis led to a deeper understanding of transactional response, including how (a) increasing awareness cultivates deeper connections with diverse texts, (b) prior perceptions and experiences influence evocation and response, and (c) diverse texts provide necessary virtual experiences with diversity. Student responses during transaction also revealed a process of growth in which students oscillated between various levels of introspection by (a) gaining awareness through an insightful view of diversity (developing understanding of difference/defining and explaining autism), (b) reflecting on similarities to gain an understanding of difference (journeying through the text), and (c) using texts as a reflexive tool and gateway toward acceptance (affirming care and responsibility).
This study gives insight into how transacting with diverse texts can provide students with opportunities to explore diversity and increase their knowledge and understanding of difference in order to create a more accepting and equitable culture. / Ph. D.
155

Dialogue Journals: Literacy Transactions of Fourth-Grade Students

Sigmon, Miranda Lee 05 May 2016 (has links)
This study was designed to explore written responses of dialogue journals in a fourth-grade social studies classroom to better understand individuals' meaning-making responses during content-based lessons. The Transactional Theory of Literacy acknowledges that readers generate individualized experiences as they transact with literacy. Although Rosenblatt focused explicitly on the transactions readers make with text, this study expands the idea of these transactions to the more current, unbounded definition of text. Writing can be the tool students use to record the transactions that lead to their continuously changing, individualized understandings. Through journals, students conversed with one another in written dialogue, continually generating or restructuring existing understandings in response to exposure to a content-specific text. The following research questions were addressed in the study: 1) How do written responses of fourth-grade students made in dialogue journals express students' understandings of content-based lessons? 2) To what extent do dialogue journals motivate students in content-based lessons? Analysis of dialogue journals showed evidence of varying levels of understanding, the effective use of journals as a communication tool, and differences in statement types depending on journal audience and content materials used. The MUSIC Model Inventory (Jones, 2009), used to assess perceptions of motivational constructs related to the use of dialogue journals in social studies lessons, yielded positive results for all constructs measured. Therefore, the results of the study, including word count findings, qualitative journal analysis, and observational files, clearly showed that dialogue journals are a motivating way for students to express their understandings of content-based texts. / Ph. D.
156

Improving Performance of Highly-Programmable Concurrent Applications by Leveraging Parallel Nesting and Weaker Isolation Levels

Niles, Duane Francis Jr. 15 July 2015 (has links)
The recent development of multi-core computer architectures has largely affected the creation of everyday applications, requiring the adoption of concurrent programming to fully utilize the divided processing power of computers. Applications must be split into sections able to execute in parallel without conflicting with one another, thereby necessitating some form of synchronization. The most commonly used methodology is lock-based synchronization, although to maximize performance, developers must typically build complex, low-level implementations for large applications, which can easily introduce errors or performance hindrances. An abstraction from database systems, known as transactions, is a rising concurrency control design aimed at circumventing the challenges of programmability, composability, and scalability in lock-based synchronization. Transactions execute their operations speculatively and are capable of being restarted (or rolled back) when conflicts arise between concurrent actions. Because such conflicts can occur late in a transaction's lifespan, full rollbacks are costly for performance. One particular method, known as nesting, was created to counter that drawback. Nesting is the act of enclosing transactions within other transactions, essentially dividing the work into pieces called sub-transactions. These sub-transactions can roll back without affecting the entire main transaction, although general nesting models only allow one sub-transaction to perform work at a time. The first main contribution in this thesis is SPCN, an algorithm that parallelizes nested transactions while automatically processing any potential conflicts that may arise, eliminating the burden of additional processing from the application developers.
Two versions of SPCN exist: Strict, which enforces the sub-transactions' work to be made visible in a serialized order; and Relaxed, which allows sub-transactions to distribute their information immediately as they finish (therefore invalidation may occur after-the-fact and must be handled). Despite the additional logic required by SPCN, it outperforms traditional closed nesting by 1.78x at the lowest and 3.78x at the highest in the experiments run. Another method to alter transactional execution and boost performance is to relax the rules of visibility for parallel operations (known as their isolation). Depending on the application, correctness is not broken even if some transactions see external work that may later be undone due to a rollback, or if an object is written while another transaction is using an older instance of its data. With lock-based synchronization, developers would have to explicitly design their application with varying amounts of locks, and different lock organizations or hierarchies, to change the strictness of the execution. With transactional systems, the processing performed by the system itself can be set to utilize different rulings, which can change the performance of an application without requiring it to be largely redesigned. This notion leads to the second contribution in this thesis: AsR, or As-Serializable transactions. Serializability is the general form of isolation or strictness for transactions in many applications. In terms of execution, its definition is equivalent to only one transaction running at a time in a given system. Many transactional systems use their own internal form of locking to create Serializable executions, but it is typically too strict for many applications. AsR transactions allow the internal processing to be relaxed while additional meta-data is used external to the system, without requiring any interaction from the developer or any changes to the given application. 
AsR transactions offer multiple orders of magnitude more in throughput in highly-contentious scenarios, due to their capability to outlast traditional levels of isolation. / Master of Science
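The closed-nesting behavior that the thesis builds on, where an aborted sub-transaction retries alone instead of rolling back the whole outer transaction, can be sketched as follows. This is a toy sequential model with all names invented for illustration; SPCN itself runs the sub-transactions in parallel:

```python
class Abort(Exception):
    """Signals an inner-transaction conflict."""

def run_closed_nested(steps):
    """Toy closed nesting: each step is a sub-transaction that may abort;
    on abort only that step is retried, never the whole outer transaction."""
    results = []
    for step in steps:
        while True:
            try:
                results.append(step())   # inner commit merges into outer
                break
            except Abort:
                pass                     # inner rollback: retry this step only
    return results                       # outer commit

# A sub-transaction that conflicts once, then succeeds on retry.
attempts = {"n": 0}
def flaky_step():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise Abort()
    return "ok"

outcome = run_closed_nested([lambda: 1, flaky_step])
```

The outer transaction completes even though one sub-transaction aborted; only the conflicting piece of work was redone.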
157

How does the signalling effect of insider transactions differ on the Swedish stock market? An analysis of insider transactions on the Nasdaq OMX Stockholm, comparing selling versus buying effects in the Tech and Industrial sectors.

Sandberg, Filip, Sandelin, Filip January 2024 (has links)
Background: In financial markets, decisions to buy or sell securities are strongly influenced by the aim of making a profit and avoiding losses. The signals that insider transactions send to external investors can significantly impact those decisions, and these signals can differ depending on the type of transaction, the sector, and the company's size. Purpose: The purpose of this thesis is to investigate whether insider transactions exert a stronger influence when insiders buy or sell stocks. A secondary purpose is to distinguish between small- and mid-cap stocks and between the technology and industrial sectors on the Nasdaq Stockholm Stock Exchange. Methodology: A quantitative approach was used, based on the event study model. Hypotheses were constructed, and statistical tests in STATA were conducted to determine whether the results were significant. The insider trading analysed took place between 2018 and 2023, covering thirty-one companies in the industrial sector and twenty-eight in the technology sector, with 3,601 insider transactions in total. Conclusion: The results showed the existence of signalling effects and the possibility of achieving abnormal returns, especially when shorting as insiders sell, particularly technology stocks, with the most prominent returns from mid-cap firms. However, the results contradict most previous research, which proposes that purchase transactions yield higher abnormal returns and have a more substantial signalling effect.
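The event study methodology described above rests on abnormal returns: the difference between a stock's observed return and the return a market model predicts. A minimal sketch, assuming a market model with given alpha and beta; in the thesis these would be estimated by regression over a pre-event window and tested for significance in STATA:

```python
def abnormal_returns(stock, market, alpha, beta):
    """Market-model abnormal returns: AR_t = R_t - (alpha + beta * Rm_t).
    CAR (cumulative abnormal return) is the sum over the event window."""
    ars = [r - (alpha + beta * rm) for r, rm in zip(stock, market)]
    return ars, sum(ars)

# Three-day event window around an insider sale (illustrative numbers).
ars, car = abnormal_returns([0.02, -0.03, 0.01], [0.01, -0.01, 0.0],
                            alpha=0.0, beta=1.0)
```

A significantly negative CAR following insider sales is the kind of signalling effect the thesis tests for.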
158

Advances in High Performance Computing Through Concurrent Data Structures and Predictive Scheduling

Lamar, Kenneth M 01 January 2024 (has links) (PDF)
Modern High Performance Computing (HPC) systems are made up of thousands of server-grade compute nodes linked through a high-speed network interconnect. Each node has tens or even hundreds of CPU cores, with counts continuing to grow on newer HPC clusters. This results in a need to make use of millions of cores per cluster. Fully leveraging these resources is difficult. There is an active need to design software that scales and fully utilizes the hardware. In this dissertation, we address this gap with a dual approach, considering both intra-node (single node) and inter-node (across node) concerns. To aid in intra-node performance, we propose two novel concurrent data structures: a transactional vector and a persistent hash map. These designs have broad applicability in any multi-core environment but are particularly useful in HPC, which commonly features many cores per node. For inter-node performance, we propose a metrics-driven approach to improve scheduling quality, using predicted run times to backfill jobs more accurately and aggressively. This is augmented using application input parameters to further improve these run time predictions. Improved scheduling reduces the number of idle nodes in an HPC cluster, maximizing job throughput. We find that our data structures outperform the prior state-of-the-art while offering additional features. Our backfill technique likewise outperforms previous approaches in simulations, and our run time predictions were significantly more accurate than conventional approaches. Code for these works is freely available, and we have plans to deploy these techniques more broadly on real HPC systems in the future.
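Backfilling with predicted run times, as described above, can be sketched in the style of the classic EASY policy: the head job reserves capacity at its earliest possible start, and later jobs jump ahead only if their predicted run time finishes before that reservation. A simplified illustration with invented data structures, not the dissertation's scheduler; real schedulers such as Slurm layer many more policies on top:

```python
def easy_backfill(running, queue, total_nodes, now):
    """EASY-style backfill sketch.
    running: list of (finish_time, nodes) for jobs currently executing.
    queue:   list of (job_id, nodes, predicted_runtime), head first.
    Returns job ids that can start now without delaying the head job."""
    free = total_nodes - sum(n for _, n in running)
    head_id, head_nodes, _ = queue[0]
    if free >= head_nodes:
        return [head_id]  # head starts immediately; no backfill needed
    # Head job's reservation: earliest time enough nodes have freed up.
    avail, shadow = free, now
    for finish, n in sorted(running):
        avail += n
        if avail >= head_nodes:
            shadow = finish
            break
    backfilled = []
    for job_id, nodes, runtime in queue[1:]:
        # Fits now and, per its predicted run time, ends before the
        # head job's reserved start time.
        if nodes <= free and now + runtime <= shadow:
            backfilled.append(job_id)
            free -= nodes
    return backfilled

picked = easy_backfill(running=[(100, 6)],
                       queue=[("A", 8, 50), ("B", 4, 50), ("C", 2, 200)],
                       total_nodes=10, now=0)
```

More accurate run time predictions tighten the `now + runtime <= shadow` check, letting the scheduler backfill more aggressively without delaying reserved jobs, which is the effect the dissertation targets.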
159

Les affects dans la relation didactique. Une étude exploratoire en classe de sixième / The affects in the didactic relationship. An exploratory study in sixth grade

Jodry, Guy 15 January 2018 (has links)
La place et le rôle des affects dans la relation didactique sont encore peu étudiés au niveau secondaire. Les recherches internationales se font surtout sur de jeunes enfants. Le but de cette recherche est de mieux comprendre comment les professeurs et les élèves trans-agissent dans la relation didactique. Cela nécessite de comprendre comment ses agents, élèves et les professeurs perçoivent, ressentent et comprennent leurs expériences scolaires. Car la question reste difficile : comment les élèves apprennent-ils ? Dans notre collège, des professeurs et des chercheurs se sont associés sur une longue durée pour travailler ensemble dans un collectif qui partage les mêmes valeurs d'éducation. Sur cette base, nous avons mené une recherche sur les événements didactiques et les mouvements affectifs des élèves et des professeurs. Nous documentons le travail de classe par le film et une méthodologie spécifique. Nous montrons d'abord l'importance des affects dans l'expérience scolaire des agents de la relation didactique. Nous apprenons à quoi ils sont sensibles, dans quelles conditions et avec quels effets. Nous concevons l'enseignement comme une action conjointe et nous montrons que les actions des professeurs et des élèves sont entrelacées d'émotions didactiques qui orientent leurs comportements. Les affects didactiques engendrent des dynamiques d'apprentissage positives ou négatives. Et en ce qu'ils permettent aux agents de se comprendre, ceux-ci peuvent exercer leur puissance d'agir ensemble dans le monde social. / The place and role of affects in the didactic relationship are still little studied at the secondary level. International research focuses mostly on young children. The purpose of this research is to better understand how teachers and students interact within the didactic relationship. This requires understanding how its agents, students and teachers, perceive, feel, and make sense of their school experiences, because the question remains difficult: how do students learn? In our middle school, teachers and researchers have partnered over a long period, working together in a collective that shares the same educational values. On this basis, we conducted research on didactic events and the affective movements of students and teachers. We documented classroom work on film, using a specific methodology. We first show the importance of affects in the school experience of the agents of the didactic relationship. We learn what they are sensitive to, under what conditions, and with what effects. We conceive teaching as a joint action and show that the actions of teachers and students are intertwined with didactic emotions that guide their behavior. Didactic affects generate positive or negative learning dynamics. And because these affects allow agents to understand one another, agents can exercise their power to act together in the social world.
160

Contributions à la validation d'ordonnancement temps réel en présence de transactions sous priorités fixes et EDF / Contributions to the schedulability validation of real-time systems with transactions under fixed priorities and EDF

Rahni, Ahmed 05 December 2008 (has links) (PDF)
Un système temps réel critique nécessite une validation temporelle utilisant un test d'ordonnançabilité avant sa mise en œuvre. Cette thèse traite le problème d'ordonnancement des tâches à offset (transactions) sur une architecture monoprocesseur, en priorités fixes et en priorités dynamiques. Les méthodes existantes pour un test exact ont une complexité exponentielle et seules existent des méthodes approchées, donc pessimistes, qui sont pseudo-polynomiales. En priorités fixes nous proposons des méthodes pseudo-polynomiales, basées sur l'analyse de temps de réponse, qui sont moins pessimistes que les méthodes existantes. Nous présentons quelques propriétés (accumulativité monotonique, dominance de tâches) rendant exactes les méthodes d'analyse approchées pour certains cas de systèmes, et optimisant le temps de calcul. En priorités dynamiques, nous proposons un test d'ordonnançabilité exact avec une complexité pseudo-polynomiale. Ce test est basé sur l'analyse de la demande processeur. Les qualités des résultats de nos méthodes sont confirmées par des évaluations expérimentales. / A safety-critical real-time system requires temporal validation using a schedulability test before deployment. This thesis addresses the problem of scheduling tasks with offsets (transactions) on a uniprocessor architecture, under both fixed and dynamic priorities. Existing exact tests have exponential complexity; only approximate, hence pessimistic, methods with pseudo-polynomial complexity are available. For fixed priorities, we propose pseudo-polynomial methods, based on response-time analysis, that are less pessimistic than existing methods. We present several properties (monotonic accumulativity, task dominance) that make the approximate analysis methods exact for certain classes of systems and that reduce computation time. For dynamic priorities, we propose an exact schedulability test with pseudo-polynomial complexity, based on processor-demand analysis. Experimental evaluations confirm the quality of our methods' results.
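The processor-demand analysis underlying the EDF test can be illustrated for plain periodic tasks: the system is schedulable iff the demand h(t) never exceeds t at any absolute deadline in the analysis interval. This is a textbook sketch for synchronous tasks without offsets; the thesis's contribution extends such analysis to transactions (tasks with offsets), which this simple version does not capture:

```python
def demand_bound(tasks, t):
    """Processor demand h(t) for synchronous periodic tasks (C, T, D):
    total execution time of jobs with release and deadline both in [0, t]."""
    return sum(max(0, (t - d) // p + 1) * c
               for c, p, d in tasks if t >= d)

def edf_schedulable(tasks, horizon):
    """EDF feasibility on one processor: check h(t) <= t at every
    absolute deadline up to the horizon (e.g. the hyperperiod)."""
    deadlines = sorted({k * p + d for c, p, d in tasks
                        for k in range(int(horizon // p) + 1)
                        if k * p + d <= horizon})
    return all(demand_bound(tasks, t) <= t for t in deadlines)

# Tasks are (C, T, D) tuples; hyperperiod of periods 4 and 6 is 12.
feasible = edf_schedulable([(1, 4, 4), (2, 6, 6)], horizon=12)
overloaded = edf_schedulable([(3, 4, 4), (2, 6, 6)], horizon=12)
```

The second task set is rejected because h(12) = 13 > 12: the demand in the first hyperperiod exceeds the available processor time.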
