  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Self-management and participatory schemes in co-operatives : a comparative study of self-management in industrial co-operatives in the Greater Accra Region, Ghana

Ofei, Kwadwo Ansah January 1996 (has links)
This research study investigates the extent to which participatory schemes determine member participation and control in industrial co-operatives in Ghana. Recent studies of co-operative organizations in developing countries have indicated that the problems of self-management in co-operatives are due to low member participation in decision making and weak control over the affairs of co-operatives. These studies, drawn mainly from sociology and anthropology, have suggested that low member participation and control in co-operatives stem from problems in implementing the principles and ideals of co-operatives in developing countries, and have further argued that these principles and ideals are difficult to implement because they are incompatible with the traditional social structures and norms of developing countries. A central argument of this study is that the problems of member participation and control in co-operatives should not be attributed solely to the influence of environmental factors in developing societies. The study points out that the degree of member participation and control in a co-operative will also be related to the properties of the participatory schemes in the co-operatives, that is, the structures and processes along which participation takes place. The findings of the research indicate that the fundamental determinants of member participation and control are the structural attributes of the participatory schemes in the co-operatives. The findings also suggest that the participatory schemes are influenced by the organizational conditions in the co-operatives. On the basis of these findings, the research contributes to our knowledge of the organization and functioning of co-operatives in developing countries. 
Furthermore, the research demonstrates the possibility of extending modern organization theory to the study of self-help and related self-managed enterprises in developing countries.
192

On the routability-driven placement. / CUHK electronic theses & dissertations collection

January 2013 (has links)
He, Xu. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves [127]-135). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
193

Random interacting particle systems

Gracar, Peter January 2018 (has links)
Consider the graph induced by Z^d, equipped with uniformly elliptic random conductances on the edges. At time 0, place a Poisson point process of particles on Z^d and let them perform independent simple random walks with jump probabilities proportional to the conductances. It is well known that without conductances (i.e., all conductances equal to 1), an infection started from the origin and transmitted between particles that share a site spreads in all directions with positive speed. We show that a local mixing result holds for random conductance graphs and prove the existence of a special percolation structure called the Lipschitz surface. Using this structure, we show that in the setup of particles on a uniformly elliptic graph, an infection also spreads with positive speed in any direction. We prove the robustness of the framework by extending the result to infection with recovery, where we show positive speed and that the infection survives indefinitely with positive probability.
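The walk described above, with jump probabilities proportional to uniformly elliptic random conductances, can be sketched for a single particle on a small two-dimensional torus. Everything below (the torus size, the conductance range [0.5, 2], the function names) is an illustrative choice of mine, not from the thesis:

```python
import random

def random_conductances(n, lo=0.5, hi=2.0, seed=0):
    """Uniformly elliptic conductances on the edges of an n x n torus:
    every conductance lies in [lo, hi] with lo > 0."""
    rng = random.Random(seed)
    cond = {}
    for x in range(n):
        for y in range(n):
            # store the "right" and "up" edge from each site; the torus
            # wrap-around keeps the example finite
            cond[((x, y), ((x + 1) % n, y))] = rng.uniform(lo, hi)
            cond[((x, y), (x, (y + 1) % n))] = rng.uniform(lo, hi)
    return cond

def edge_weight(cond, u, v):
    # edges are stored in one orientation only, so check both orders
    return cond.get((u, v)) or cond.get((v, u))

def step(cond, n, site, rng):
    """One jump of the walk: pick a neighbour with probability
    proportional to the conductance of the connecting edge."""
    x, y = site
    nbrs = [((x + 1) % n, y), ((x - 1) % n, y),
            (x, (y + 1) % n), (x, (y - 1) % n)]
    weights = [edge_weight(cond, site, v) for v in nbrs]
    return rng.choices(nbrs, weights=weights, k=1)[0]

rng = random.Random(1)
n = 10
cond = random_conductances(n)
site = (0, 0)
for _ in range(1000):
    site = step(cond, n, site, rng)
print(site)
```

With all conductances equal this reduces to the simple random walk mentioned in the abstract; the uniform ellipticity (conductances bounded away from 0 and infinity) is what the quoted results require.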
194

Distances in preferential attachment networks

Mönch, Christian January 2013 (has links)
Preferential attachment networks with power law degree sequence undergo a phase transition when the power law exponent τ changes. For τ > 3 typical distances in the network are logarithmic in the size of the network and for 2 < τ < 3 they are doubly logarithmic. In this thesis, we identify the correct scaling constant for τ ∈ (2, 3) and discover a surprising dichotomy between preferential attachment networks and networks without preferential attachment. This contradicts previous conjectures of universality. Moreover, using a model recently introduced by Dereich and Mörters, we study the critical behaviour at τ = 3, and establish novel results for the scale of the typical distances under lower order perturbations of the attachment function.
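The typical distances discussed above can be probed numerically. A minimal illustration (the standard repeated-endpoint construction of a preferential attachment graph, not the Dereich-Mörters model studied in the thesis) with distances measured by breadth-first search; all parameters and names are my own choices:

```python
import random
from collections import deque

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph on n vertices: each new vertex attaches m edges to
    existing vertices chosen with probability proportional to degree,
    implemented via the usual repeated-endpoint list."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    endpoints = []  # each vertex appears once per unit of degree
    # start from a small clique on m + 1 vertices
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            adj[u].add(v); adj[v].add(u)
            endpoints += [u, v]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))  # degree-biased choice
        for t in targets:
            adj[new].add(t); adj[t].add(new)
            endpoints += [new, t]
    return adj

def graph_distance(adj, s, t):
    """Breadth-first search distance between vertices s and t."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

adj = preferential_attachment(2000)
print(graph_distance(adj, 0, 1999))
```

Even at n = 2000 the distance between the oldest and newest vertex is only a handful of hops, consistent with the (doubly) logarithmic scaling the thesis makes precise.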
195

Terrestrial and Micro-Gravity Studies in Electrohydrodynamic Conduction-Driven Heat Transport Systems

Patel, Viral K. 25 March 2015 (has links)
Electrohydrodynamic (EHD) phenomena involve the interaction between electrical and flow fields in a dielectric fluid medium. In EHD conduction, the electric field causes an imbalance in the dissociation-recombination reaction of neutral electrolytic species, generating free space charges which are redistributed to the vicinity of the electrodes. Proper asymmetric design of the electrodes generates net axial flow motion, pumping the fluid. EHD conduction pumps can be used as the sole driving mechanism for small-scale heat transport systems because they have a simple electrode design, which allows them to be fabricated in exceedingly compact form (down to micro-scale). EHD conduction is also an effective technique to pump a thin liquid film. However, before specific applications in terrestrial and micro-gravity thermal management can be developed, a better understanding of the interaction between electrical and flow fields with and without phase-change and in the presence and absence of gravity is needed. With the above motivation in mind, detailed experimental work in EHD conduction-driven single- and two-phase flow is carried out. Two major experiments are conducted both terrestrially and on board a variable gravity parabolic flight. Fundamental behavior and performance evaluation of these electrically driven heat transport systems in the respective environments are studied. The first major experiment involves a meso-scale, single-phase liquid EHD conduction pump which is used to drive a heat transport system in the presence and absence of gravity. The terrestrial results include fundamental observations of the interaction between two-phase flow pressure drop and EHD pump net pressure generation in meso-scale and short-term/long-term, single- and two-phase flow performance evaluation. The parabolic flight results show operation of a meso-scale EHD conduction-driven heat transport system for the first time in microgravity. 
The second major experiment involves liquid film flow boiling driven by EHD conduction in the presence and absence of gravity. The terrestrial experiments investigate electro-wetting of the boiling surface by EHD conduction pumping of liquid film, resulting in enhanced heat transfer. Further research to analyze the effects on the entire liquid film flow boiling regime is conducted through experiments involving nanofiber-enhanced heater surfaces and dielectrophoretic force. In the absence of gravity, the EHD-driven liquid film flow boiling process is studied for the first time and valuable new insights are gained. It is shown that the process can be sustained in micro-gravity by EHD conduction and this lays the foundation for future experimental research in electrically driven liquid film flow boiling. The understanding gained from these experiments also provides the framework for unique and novel heat transport systems for a wide range of applications in different scales in terrestrial and microgravity conditions.
196

Cosmological dynamics and structure formation

Gosenca, Mateja January 2018 (has links)
Observational surveys which probe our universe deeper and deeper into the nonlinear regime of structure formation are becoming increasingly accurate. This makes numerical simulations an essential tool for theory to predict phenomena at comparable scales. In the first part of this thesis we study the behaviour of cosmological models involving a scalar field. We are particularly interested in the existence of fixed points of the dynamical system and the behaviour of the system in their vicinity. Upon addition of spatial curvature to the single-scalar field model with an exponential potential, canonical kinetic term, and a matter fluid, we demonstrate the existence of two extra fixed points that are not present in the case without curvature. We also analyse the evolution of the equation-of-state parameter. In the second part, we numerically simulate collisionless particles in the weak field approximation to General Relativity, with large gradients of the fields and relativistic velocities allowed. To reduce the complexity of the problem and enable high resolution simulations, we consider the spherically symmetric case. Comparing numerical solutions to the exact Schwarzschild and Lemaître-Tolman-Bondi solutions, we show that the scheme we use is more accurate than a Newtonian scheme, correctly reproducing the leading-order post-Newtonian behaviour. Furthermore, by introducing angular momentum, configurations corresponding to bound objects are found. In the final part, we simulate the conditions under which one would expect to form ultracompact minihalos, dark matter halos with a steep power-law profile. We show that an isolated object exhibits the profile predicted analytically. Embedding this halo in a perturbed environment, we show that its profile becomes progressively more similar to the Navarro-Frenk-White profile with increasing amplitude of perturbations. 
Next, we boost the power spectrum at a very early redshift during radiation domination on a chosen scale and simulate clustering of dark matter particles at this scale until low redshift. In this scenario halos form earlier, have higher central densities, and are more compact.
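For orientation, the two density profiles contrasted in the abstract can be written down directly. The NFW form and the r^(-9/4) power law commonly quoted as the analytic prediction for ultracompact minihalos are standard results from the literature; the function names and sample radii below are illustrative:

```python
def nfw_density(r, rho_s, r_s):
    """Navarro-Frenk-White profile: rho(r) = rho_s / ((r/r_s)(1 + r/r_s)^2).
    Falls as r^-1 near the centre and as r^-3 far out."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def ucmh_density(r, rho_0, r_0, slope=2.25):
    """Steep power law rho ~ r^(-9/4), the analytic prediction usually
    quoted for ultracompact minihalos formed by radial infall."""
    return rho_0 * (r / r_0) ** (-slope)

# inner logarithmic slopes differ: halving r multiplies the density by
# roughly 2 for NFW (slope ~ -1) but by 2^2.25 ~ 4.76 for the UCMH profile
print(nfw_density(0.01, 1.0, 1.0) / nfw_density(0.02, 1.0, 1.0))
print(ucmh_density(0.01, 1.0, 1.0) / ucmh_density(0.02, 1.0, 1.0))
```

The "progressively more similar to NFW" result in the abstract is precisely a flattening of that steep inner slope as environmental perturbations grow.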
197

Trace-based post-silicon validation for VLSI circuits. / CUHK electronic theses & dissertations collection

January 2012 (has links)
The ever-increasing design complexity of modern circuits challenges our ability to verify their correctness. Various errors are therefore more likely to escape the pre-silicon verification process and to manifest themselves after design tape-out. To address this problem, effective post-silicon validation is essential for eliminating design bugs before integrated circuit (IC) products are shipped to customers. In the debug process, it has become increasingly popular to insert design-for-debug (DfD) structures into the original design to facilitate real-time debug without interfering with the circuits' normal operation. For this so-called trace-based post-silicon validation technique, the key question is how to design such DfD circuits to achieve sufficient observability and controllability during the debug process with limited hardware overhead. In today's VLSI design flow, however, this is unfortunately done manually based on designers' own experience, which cannot guarantee debug quality. To tackle this problem, we propose a set of automatic tracing solutions as well as innovative DfD designs in this thesis. First, we develop a novel trace signal selection technique to maximize visibility when debugging functional design errors. To strengthen the capability for tackling these errors, we then introduce a multiplexed signal tracing strategy with a trace signal grouping algorithm for maximizing the probability of catching the evidence propagated from functional design errors. Then, to effectively localize speedpath-related electrical errors, we propose an innovative trace signal selection solution as well as a trace qualification technique. On the other hand, we introduce several low-cost interconnection fabrics to effectively transfer trace data in post-silicon validation. We first propose to reuse the existing test channel for real-time trace data transfer, so that the routing cost of debug hardware is dramatically reduced. 
The method is further improved to avoid data corruption in multi-core debug. We then develop a novel interconnection fabric design and optimization technique, combining a multiplexer network and a non-blocking network, to achieve high debug flexibility with minimized hardware cost. Moreover, we introduce a hybrid trace interconnection fabric that is able to tolerate unknown values in "golden vectors", at the cost of little extra DfD overhead. With this fabric, we develop a systematic signal tracing procedure to automatically localize erroneous signals within just a few debug runs. Our empirical evaluation shows that the solutions presented in this thesis can greatly improve the validation quality of VLSI circuits, and ultimately enable the design and fabrication of reliable electronic devices. / Liu, Xiao. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 143-152). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. 
198

Towards re-conceptualising and measuring brand identity in services : a consumer perspective

Pareek, Vandana January 2015 (has links)
This thesis focuses on conceptualizing and measuring brand identity in services. The lack of a widely accepted measure of brand identity is surprising given that it a) provides meaning to the brand, makes it unique and communicates what the brand stands for (Rosengren et al., 2010), and b) is the driver of one of the four principal dimensions of brand equity, namely brand association (Keller, 1993). Despite its acknowledged importance, brand identity measurement has received remarkably little attention, and efforts to develop a valid and comprehensive measure have been limited. While prior work on brand identity has proposed some conceptual models highlighting different facets that contribute to brand identity development, the majority of these models have not been subjected to empirical testing. This raises concerns over their robustness and validity. More importantly, the applicability of these models to a service context is not clear. For instance, the role of consumers, who participate in the service production process and interact frequently with service providers, is hardly considered in prior frameworks. In summary, the dearth of research accounting for the consumer perspective of brand identity, along with the lack of a valid and comprehensive scale to measure service brand identity, motivated this research. This thesis thus aims, first, to review and refine the concept of brand identity to account for the consumer perspective of this construct, and then to develop a multidimensional scale to measure service brand identity and identify its key dimensions. To fulfill the research aims, Churchill's (1979) paradigm was followed in conjunction with DeVellis (2003) and other scale development studies (Brakus et al., 2009; Lundstrom & Lamont, 1976). This thesis employed both qualitative and quantitative research methods to achieve the research aims. Qualitative research was undertaken to gain additional insights into the construct (e.g. 
consumer perspective) and to generate and purify the initial scale items. Quantitative methods were then adopted to validate and establish the final scale. Guided by the aforementioned research design, this thesis developed a service brand identity (SBI) scale consisting of five dimensions labelled: process identity, organization identity, servicescape identity, symbolic identity and communication identity. The analysis confirms that the scale is reliable, valid, and parsimonious. Further, the scale application is demonstrated by assessing and empirically establishing the association between service brand identity and brand trust and loyalty. The results support the proposition that the consumer perspective is important in understanding and developing brand identity in a service context. Relatedly, it is also shown that service elements, such as the servicescape and service process, play a key role in developing a strong brand identity for services. The key contribution of this study is the development of a psychometrically valid and reliable scale. This research extends the literature on brand identity (Upshaw, 1995; Aaker, 1996; De Chernatony, 1999; Kapferer, 2000; Burmann et al., 2009; da Silveira et al., 2013) to include the service domain which has to date not received much research attention in branding. It proposes and empirically establishes two new dimensions of service brand identity (Process Identity and Servicescape Identity) which have not been highlighted in extant brand identity literature. In addition to this, this thesis provides a much-needed consumer perspective on brand identity and its components, thereby responding to calls for more research on marketing constructs to account for the consumer perspective (Rust, 1988; Payne et al., 2009; Arnould et al., 2006). In this regard, this study is among the first to empirically link consumer-based variables to a specific brand identity scale.
199

Challenges and prospects of probing galaxy clustering with three-point statistics

Eggemeier, Alexander January 2018 (has links)
In this work we explore three-point statistics applied to the large-scale structure in our Universe. Three-point statistics, such as the bispectrum, encode information not accessible via the standard analysis method, the power spectrum, and thus provide the potential for greatly improving current constraints on cosmological parameters. They also present us with additional challenges, and we focus on two of these, arising from the measurement and the modelling points of view respectively. The first challenge we address is the covariance matrix of the bispectrum, as a precise estimate of it is required when performing likelihood analyses. Covariance matrices are usually estimated from a set of independent simulations, whose minimum number scales with the dimension of the covariance matrix. Because there are many more possibilities of finding triplets of galaxies than pairs, compared to the power spectrum this approach becomes rather prohibitive. With this motivation in mind, we explore a novel alternative to the bispectrum: the line correlation function (LCF). It specifically targets information in the phases of density modes that are invisible to the power spectrum, making it a potentially more efficient probe than the bispectrum, which measures a combination of amplitudes and phases. We derive the covariance properties and the impact of shot noise for the LCF and compare these theoretical predictions with measurements from N-body simulations. Based on a Fisher analysis we assess the LCF's sensitivity to cosmological parameters, finding that it is particularly suited for constraining galaxy bias parameters and the amplitude of fluctuations. As a next step we contrast the Fisher information of the LCF with the full bispectrum and two other recently proposed alternatives. 
We show that the LCF is unlikely to achieve a lossless compression of the bispectrum information, whereas a modal decomposition of the bispectrum can reduce the size of the covariance matrix by at least an order of magnitude. The second challenge we consider in this work concerns the relation between the dark matter field and luminous tracers, such as galaxies. Accurate knowledge of this galaxy bias relation is required in order to reliably interpret the data gathered by galaxy surveys. On the largest scales the dark matter and galaxy densities are linearly related, but a variety of additional terms need to be taken into account when studying clustering on smaller scales. These have been fully included in recent power spectrum analyses, whereas the bispectrum model relied on simple prescriptions that were likely extended beyond their realm of validity. In addition, treating power spectrum and bispectrum on different footings means that the two models become inconsistent on small scales. We introduce a new formalism that allows us to elegantly compute the missing bispectrum contributions from galaxy bias, without running into the renormalization problem. Furthermore, we fit our new model to simulated data by implementing these contributions into a likelihood code. We show that they are crucial in order to obtain results consistent with those from the power spectrum, and that the bispectrum retains its capability of significantly reducing uncertainties in measured parameters when combined with the power spectrum.
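The LCF's key ingredient is the whitened, phase-only density field ε(k) = δ(k)/|δ(k)|. The sketch below constructs that field and a schematic correlation of it at three collinear points on a Gaussian box; it omits the normalisation and k-space cutoffs of the actual LCF estimator in the literature, and all names are illustrative:

```python
import numpy as np

def phase_field(delta):
    """Whitened Fourier modes eps(k) = delta(k)/|delta(k)|: keep only the
    phases, discarding the amplitudes measured by the power spectrum."""
    dk = np.fft.fftn(delta)
    return dk / np.maximum(np.abs(dk), 1e-12)

def line_correlation(delta, shift):
    """Schematic three-point estimator: average the product of the
    real-space whitened field at the collinear points x - r, x, x + r
    (normalisation and cutoffs of the published LCF are omitted)."""
    eps = np.fft.ifftn(phase_field(delta)).real  # Hermitian, so real
    left = np.roll(eps, shift, axis=0)
    right = np.roll(eps, -shift, axis=0)
    return float(np.mean(left * eps * right))

rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 32, 32))  # Gaussian field: no phase coupling
print(line_correlation(delta, 4))
```

For a Gaussian field the phases are independent, so this statistic is consistent with zero; gravitational clustering couples the phases and produces a non-trivial signal, which is the information the power spectrum cannot see.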
200

Higher-order methods for large-scale optimization

Fountoulakis, Kimon January 2015 (has links)
There has been increased interest in optimization for the analysis of large-scale data sets which require gigabytes or terabytes of storage. A variety of applications originate from the fields of signal processing, machine learning and statistics. Seven representative applications are:
- Magnetic Resonance Imaging (MRI): a medical imaging tool used to scan the anatomy and physiology of the body.
- Image inpainting: a technique for reconstructing degraded parts of an image.
- Image deblurring: an image processing tool for removing the blurriness of a photo caused by natural phenomena, such as motion.
- Radar pulse reconstruction.
- Genome-Wide Association studies (GWA): DNA comparison between two groups of people (with/without a disease) in order to investigate the factors on which a disease depends.
- Recommendation systems: classification of data (e.g., music or video) based on user preferences.
- Data fitting: sampled data are used to simulate the behaviour of observed quantities, for example estimating global temperature from historic data.
Large-scale problems impose restrictions on the methods employed so far. The new methods have to be memory efficient and, ideally, should offer noticeable progress towards a solution within seconds. First-order methods meet some of these requirements: they avoid matrix factorizations, they have low memory requirements, and they sometimes offer fast progress in the initial stages of optimization. Unfortunately, as demonstrated by numerical experiments in this thesis, first-order methods miss essential information about the conditioning of the problems, which can result in slow practical convergence. The main advantage of first-order methods, relying only on simple gradient or coordinate updates, becomes their essential weakness, and we do not think this inherent weakness can be remedied. 
For this reason, the present thesis aims at the development and implementation of inexpensive higher-order methods for large-scale problems.
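The conditioning argument above can be seen on a toy problem: on an ill-conditioned quadratic, gradient descent is forced to use a tiny step size and crawls along the flat direction, while a single second-order (Newton) step solves the problem exactly. This is a generic illustration of my own, not an algorithm from the thesis:

```python
import numpy as np

def gradient_descent(A, b, steps, lr):
    """Plain gradient descent on f(x) = 0.5 x^T A x - b^T x."""
    x = np.zeros_like(b)
    for _ in range(steps):
        x -= lr * (A @ x - b)  # gradient of f is A x - b
    return x

def newton(A, b):
    """One Newton step minimizes a quadratic exactly: x = A^{-1} b."""
    return np.linalg.solve(A, b)

# Ill-conditioned diagonal quadratic: curvatures differ by a factor of 1e4.
A = np.diag(np.array([1e4, 1.0]))
b = np.array([1.0, 1.0])
x_star = newton(A, b)

# Stability forces lr <= ~2/1e4, so progress along the flat direction is
# painfully slow even after thousands of iterations.
x_gd = gradient_descent(A, b, steps=2000, lr=1.0 / 1e4)
print(np.linalg.norm(x_gd - x_star))      # still far from the solution
print(np.linalg.norm(newton(A, b) - x_star))  # exact for a quadratic
```

The higher-order methods the thesis develops exploit exactly this curvature information, at a per-iteration cost that must be kept cheap enough for large-scale problems.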
