  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Feature-based graph visualization

Archambault, Daniel William 11 1900 (has links)
A graph consists of a set and a binary relation on that set. Each element of the set is a node of the graph, while each element of the binary relation is an edge that encodes a relationship between two nodes. Graphs are pervasive in many areas of science, engineering, and the social sciences: servers on the Internet are connected, proteins interact in large biological systems, social networks encode the relationships between people, and functions call each other in a program. In these domains, graphs can become very large, consisting of hundreds of thousands of nodes and millions of edges. Graph drawing approaches endeavour to place these nodes in two- or three-dimensional space with the intention of fostering an understanding of the binary relation in a human examining the image. However, many of these approaches do not exploit higher-level structures in the graph beyond nodes and edges. Frequently, these structures can be exploited for drawing. As an example, consider a large computer network where nodes are servers and edges are connections between those servers. If a user would like to understand how servers at UBC connect to the rest of the network, a drawing that accentuates the set of nodes representing those servers may be more helpful than one where all nodes are drawn in the same way. In a feature-based approach, features are subgraphs exploited for the purposes of drawing. We endeavour to depict not only the binary relation, but also the high-level relationships between features. This thesis extensively explores a feature-based approach to graph visualization and demonstrates the viability of tools that aid in the visualization of large graphs. Our contributions lie in presenting and evaluating novel techniques and algorithms for graph visualization. We implement five systems in order to empirically evaluate these techniques and algorithms, comparing them to previous approaches.
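The abstract's opening definition (a graph as a set plus a binary relation on that set, with features as subgraphs) can be sketched directly; the node names here are purely illustrative:

```python
# A graph modeled exactly as the abstract defines it: a node set and a
# binary relation (a set of ordered pairs) on that set.
nodes = {"ubc-server", "gateway", "mail", "web"}
edges = {("ubc-server", "gateway"), ("gateway", "mail"), ("gateway", "web")}

# Every edge must relate two members of the node set.
assert all(u in nodes and v in nodes for u, v in edges)

# A "feature" in the feature-based approach is simply a subgraph:
# a subset of nodes together with the edges induced on that subset.
feature_nodes = {"gateway", "mail", "web"}
feature_edges = {(u, v) for u, v in edges
                 if u in feature_nodes and v in feature_nodes}
print(sorted(feature_edges))  # [('gateway', 'mail'), ('gateway', 'web')]
```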

Fully Distributed Register Files for Heterogeneous Clustered Microarchitectures

Bunchua, Santithorn 09 July 2004 (has links)
Conventional processor design utilizes a central register file and a bypass network to deliver operands to and from functional units, an approach that cannot scale to a large number of functional units. As more functional units are integrated into a processor, the number of ports on a register file grows linearly, while area, delay, and energy consumption grow even more rapidly. Physical properties of a bypass network scale in a similar manner. In this dissertation, a fully distributed register file organization is presented to overcome this limitation by relying on small register files with fewer ports and localized operand bypasses. Unlike other clustered microarchitectures, each cluster features a small single-issue functional unit coupled with a small local register file. Several clusters are used, and each of them can be different. All register files are connected through a register transfer network that supports multicast communication. Techniques to support distributed register file operations are presented for both dynamically and statically scheduled processors. These include the eager and multicast register transfer mechanisms in the dynamic approach and the global data routing with multicasting algorithm in the static approach. Although this organization requires additional cycles to execute a program, this cost is offset by the significant savings obtained through smaller area, faster operand access time, and lower energy consumption. With faster operating frequency and more efficient hardware implementation, overall performance can be improved. Additionally, the fully distributed register file organization is applied to an ILP-SIMD processing element, the major building block of a massively parallel media processor array. The results show a reduction in die area, which can be utilized to implement additional processing elements. Consequently, performance is improved through the higher degree of data parallelism offered by a larger processor array.
In summary, the fully distributed register file architecture permits future processors to scale to a large number of functional units. This is especially desirable in high-throughput processors such as wide-issue and multithreaded processors. Moreover, localized communication is highly desirable in the transition to future deep submicron technologies, since long wires are a critical issue in processes with extremely small feature sizes.
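The scaling argument above can be made concrete with a back-of-envelope model. The constants below (two read ports and one write port per functional unit, area growing roughly with the square of the port count) are common rules of thumb, not figures from this dissertation:

```python
# Illustrative model of register file scaling; the constants are
# assumptions for the sketch, not numbers from the dissertation.
def rf_ports(num_fus, reads_per_fu=2, writes_per_fu=1):
    """Ports needed on one register file serving num_fus functional units."""
    return num_fus * (reads_per_fu + writes_per_fu)

def relative_area(ports):
    """Area grows roughly with the square of the port count in this model."""
    return ports ** 2

# Centralized: one file serving 8 FUs. Fully distributed: 8 small files,
# each serving a single-issue FU (the transfer network is ignored here).
central = relative_area(rf_ports(8))
distributed = 8 * relative_area(rf_ports(1))
print(central / distributed)  # 8.0: the centralized file is 8x larger
```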

LONGITUDINAL RELATIONSHIPS BETWEEN DEPRESSIVE SYMPTOM CLUSTERS AND INFLAMMATORY BIOMARKERS IMPLICATED IN CARDIOVASCULAR DISEASE IN PEOPLE WITH DEPRESSION

Jay Sunil Patel (11521522) 20 December 2021 (has links)
Systemic inflammation is one potential mechanism underlying the depression-to-cardiovascular disease (CVD) relationship. In addition, somatic rather than cognitive/affective symptoms of depression may be more predictive of poorer CVD outcomes due to systemic inflammation. However, the small existing literature in this area has yielded mixed results. Therefore, the present study aimed to examine longitudinal associations between depressive symptom clusters and inflammatory biomarkers implicated in CVD (i.e., interleukin-6, IL-6; and C-reactive protein, CRP) using data from the eIMPACT trial. In addition, race was examined as a moderator given findings from two previous studies.
The eIMPACT trial was a phase II, single-center randomized controlled trial comparing 12 months of the eIMPACT intervention to usual primary care for depression. Participants were 216 primary care patients aged ≥ 50 years with a depressive disorder and CVD risk factors but no clinical CVD, drawn from a safety net healthcare system (mean age = 58.7 years, 78% female, 50% Black, mean education = 12.8 years). Depressive symptom clusters (i.e., somatic and cognitive/affective clusters) were assessed using the Patient Health Questionnaire-9 (PHQ-9). IL-6 and high-sensitivity CRP were assessed by the local clinical research laboratory using R&D Systems ELISA kits. Change variables were modeled in Mplus using a latent difference score approach.
The results of this study were largely null. Very few associations between depressive symptom clusters and inflammatory biomarkers implicated in CVD were observed, and the detected relationships may be due to type I error. Similarly, only one association was observed for race as a moderator, and the detected relationship may be due to type I error.
The present findings do not provide strong support for longitudinal associations between depressive symptom clusters and inflammatory biomarkers implicated in CVD, nor for the moderating effects of race. However, they do not rule out the possibility of these relationships, given important study limitations such as study design and power. Future prospective cohort studies with multiple waves of data collection are needed to determine the longitudinal associations between depression facets and various inflammatory biomarkers implicated in CVD. In addition, a biologically based approach to identifying facets of depression (e.g., the endophenotype model) may provide a clearer understanding of the depression-inflammation relationship.

A Hybrid Non-Clustered Bitmap Index for Supporting High Cardinality Attributes

Pendharkar, Yogesh January 2009 (has links)
No description available.

Distributed computing with the Raspberry Pi

Dye, Brian January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell Neilsen / The Raspberry Pi is a versatile computer for its size and cost. The research done in this project explores how well the Raspberry Pi performs in a clustered environment. Using Pis as the components of a Beowulf cluster produces an inexpensive and small cluster. The research includes constructing the cluster as well as running a computationally intensive program called OpenFOAM. The Pi cluster's performance is measured using the High Performance Linpack benchmark. The Raspberry Pi is already used for basic computer science education, and in a cluster it can also be used to promote more advanced concepts such as parallel programming and high-performance computing. The low cost of the cluster combined with its compact size makes it a viable alternative for educational facilities that don't own, or can't spare, their own production clusters for educational use. It could also be used by researchers running computationally intensive programs locally on a personal cluster. The cluster produced was an eight-node Pi cluster that achieves up to 2.365 GFLOPS.
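The reported figure invites a quick per-node calculation; this is simple arithmetic on the abstract's own numbers, not additional data from the report:

```python
# Per-node throughput implied by the reported aggregate: 2.365 GFLOPS
# measured by High Performance Linpack across eight Raspberry Pi nodes.
measured_gflops = 2.365
num_nodes = 8
per_node = measured_gflops / num_nodes
print(round(per_node, 3))  # 0.296 GFLOPS per node on average
```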

Clustered Test Execution using Java PathFinder

Chocka Narayanan, Sowmiya 29 October 2010 (has links)
Recent advances in test automation have produced a host of new techniques for automated test generation, which traditionally has largely been a manual and expensive process. These techniques enable generation of much larger numbers of tests at a much reduced cost. When executed successfully, these tests yield a significant increase in our confidence in the program's correctness. However, as our ability to generate greater numbers of tests increases, we face the likely high cost of executing all the tests in terms of total execution time. This thesis presents a novel approach, clustered test execution, to address this problem. Instead of executing each test case separately, we execute parts of several tests using a single execution, which then forks into several directions as the behaviors of the tests differ. Our insight is that in a large test suite, several tests are likely to have common initial execution segments, which do not have to be executed over and over again; rather, such a segment can be executed once and the execution result shared across all those tests. As an enabling technology we use the Java PathFinder (JPF) model checker, a popular explicit-state model checker for Java programs. Experimental results show that our clustering approach for test execution using JPF provides speed-ups over executing each test in turn from a test suite on the JPF Java virtual machine.
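The shared-prefix insight can be illustrated with a prefix tree over abstract test "steps": a segment reached by several tests is counted (executed) only once. This is a toy model of the idea, not JPF's actual forking mechanism:

```python
# Toy model of clustered test execution: tests sharing a common initial
# segment execute that segment once. A trie over step sequences makes
# the savings countable; this is not how JPF implements it.
def count_executed_steps(tests):
    """Return (steps with prefix sharing, steps executed naively)."""
    trie = {}
    shared = 0
    for steps in tests:
        node = trie
        for step in steps:
            if step not in node:   # first test to reach this step
                node[step] = {}
                shared += 1        # executed exactly once
            node = node[step]
    naive = sum(len(t) for t in tests)
    return shared, naive

tests = [
    ("init", "login", "add_item", "checkout"),
    ("init", "login", "add_item", "remove_item"),
    ("init", "login", "logout"),
]
print(count_executed_steps(tests))  # (6, 11): 6 steps instead of 11
```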

Integrated Scheduling For Clustered VLIW Processors

Nagpal, Rahul 12 1900 (has links)
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Scheduling for clustered architectures involves spatial concerns (where to schedule) as well as temporal concerns (when to schedule). Various clustered VLIW configurations, connectivity types, and inter-cluster communication models present different performance trade-offs to a scheduler. The scheduler is responsible for resolving the conflicting requirements of exploiting the parallelism offered by the hardware and limiting the communication among clusters to achieve better performance. Earlier proposals for cluster scheduling fall into two main categories, viz., phase-decoupled scheduling and phase-coupled scheduling, and they focus on clustered architectures that provide inter-cluster communication by an explicit inter-cluster copy operation. However, modern commercial clustered architectures provide snooping capabilities (apart from support for inter-cluster communication using an explicit MV operation) by allowing some functional units to read operands from the register files of some other clusters without any extra delay. The phase-decoupled approach to scheduling suffers from the well-known phase-ordering problem, which becomes severe for such a machine model (with snooping) because communication and resource constraints are tightly coupled and thus are exposed only during scheduling. This tight integration of communication and resource constraints further requires taking into account the resource and communication requirements of other instructions ready to be scheduled in the current cycle while binding an instruction, in order to carry out effective binding.
However, earlier proposals on integrated scheduling consider instructions and clusters for binding in a fixed order and thus show widely varying performance characteristics in terms of execution time and code size. Other shortcomings of earlier integrated algorithms (which lead to suboptimal cluster scheduling decisions) are due to non-consideration of future communication that may arise due to a binding, and of functional unit binding. In this thesis, we propose a pragmatic scheme and also a generic graph matching based framework for cluster scheduling based on a generic and realistic clustered machine model. The proposed scheme effectively utilizes exact knowledge of available communication slots, functional units, and load on different clusters, as well as future resource and communication requirements known only at schedule time, to attain significant performance improvement without code size penalty over earlier algorithms. The proposed graph matching based framework resolves the phase-ordering and fixed-ordering problems associated with scheduling on clustered VLIW architectures. The framework provides a mechanism to exploit the slack of instructions by dynamically varying the freedom available in scheduling an instruction, and hence the cost of scheduling an instruction using different alternatives, to reduce inter-cluster communication. An experimental evaluation of the proposed framework and some of the earlier proposals is presented in the context of a state-of-the-art commercial clustered architecture.
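The spatial half of the problem (where to schedule) can be sketched with a greedy cluster binder that weighs inter-cluster operand moves against cluster load. This is a greatly simplified stand-in for the integrated algorithms discussed above; the cost weights and instruction stream are arbitrary:

```python
# Toy sketch of cluster binding: place each instruction on the cluster
# that minimizes inter-cluster operand moves plus current load. A much
# simplified stand-in for the thesis's integrated algorithms; the
# comm_cost weight is an arbitrary illustration.
def bind_clusters(instrs, num_clusters=2, comm_cost=2):
    placement, load = {}, [0] * num_clusters
    for name, operands in instrs:  # assume instrs arrive in schedule order
        def cost(c):
            # Operands living on another cluster each need one transfer.
            moves = sum(1 for op in operands if placement.get(op, c) != c)
            return comm_cost * moves + load[c]
        best = min(range(num_clusters), key=cost)
        placement[name] = best
        load[best] += 1
    return placement

instrs = [("a", []), ("b", []), ("c", ["a", "b"]), ("d", ["a"])]
print(bind_clusters(instrs))  # {'a': 0, 'b': 1, 'c': 0, 'd': 0}
```

Note how "d" follows its operand "a" onto cluster 0: the communication term dominates the load term, which is the trade-off the abstract describes.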

Bayesian mediation analysis for partially clustered designs

Chu, Yiyi 05 December 2013 (has links)
Partially clustered designs are common in medicine, the social sciences, and intervention and psychological research. With some participants clustered and others not, the structure of partially clustered data is not parallel. Despite its common occurrence in practice, limited attention has been given to the evaluation of intervention effects in partially clustered data. Mediation analysis is used to identify the mechanism underlying the relationship between an independent variable and a dependent variable via a mediator variable. While most of the literature is focused on conventional frequentist mediation models, no research has yet studied a Bayesian mediation model in the context of a partially clustered design. Therefore, the primary objectives of this paper are to address conceptual considerations in estimating mediation effects in partially clustered randomized designs, and to examine the performance of the proposed model using both simulated data and real data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K). A small-scale simulation study was also conducted, and the results indicate that under large sample sizes, negligible relative parameter bias was found in the Bayesian estimates of the indirect effects and of the covariance between the components of the indirect effect. Coverage rates for the 95% credible interval for these two estimates were found to be close to the nominal level. These results support use of the proposed Bayesian model for partially clustered mediation when the sample size is moderately large.
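The indirect effect at the heart of mediation analysis is the product of the X→M and M→Y path coefficients. The sketch below estimates it from simulated data with plain least squares; it is a frequentist toy to fix the idea, not the Bayesian latent-difference-score model the study fits, and all path values are invented:

```python
import random

# Product-of-coefficients indirect effect (a*b) on simulated mediation
# data: X -> M -> Y with no direct X -> Y path. Illustrative only; the
# study itself fits a Bayesian model for partially clustered designs.
random.seed(0)

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

n = 5000
a_true, b_true = 0.5, 0.8                        # invented path values
x = [random.gauss(0, 1) for _ in range(n)]       # independent variable
m = [a_true * xi + random.gauss(0, 1) for xi in x]   # mediator
y = [b_true * mi + random.gauss(0, 1) for mi in m]   # outcome

a_hat, b_hat = slope(x, m), slope(m, y)
indirect = a_hat * b_hat
print(round(indirect, 2))  # close to a_true * b_true = 0.4
```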
