51

Using a Diffusive Approach for Load Balancing in Peer-to-peer Systems

Qiao, Ying January 2012 (has links)
We developed a diffusive load balancing scheme that equalizes the available capacities of nodes in a peer-to-peer (P2P) system. These nodes may have different resource capacities, geographic locations, or availabilities (i.e., lengths of time being part of the peer-to-peer system). The services on these nodes may have different service times and arrival rates of requests. Using the diffusive scheme, the system is able to maintain similar response times for its services. Our scheme is a modification of the diffusive load balancing algorithms proposed for parallel computing systems. This scheme is able to handle services with heterogeneous resource requirements and P2P nodes with heterogeneous capacities. We also adapted the diffusive scheme to clustered peer-to-peer systems, where a load balancing operation may move services or nodes between clusters. After a literature survey of this field, this thesis investigates the following issues using analytical reasoning and extensive simulation studies. The load balancing operations equalize the available capacities of the nodes in a neighborhood to their average. As a result, the available capacities of all nodes in the P2P system converge to a global average. We found that this convergence is faster when the scheme uses neighborhoods defined by the structured P2P overlay network rather than randomly selected neighbors. For a system with churn (i.e., nodes joining and leaving), the load balancing operations maintain the standard deviation of the available capacities of nodes within a bound. This bound depends on the amount of churn and the frequency of load balancing operations, as well as on the capacities of the nodes. However, the sizes of the services have little impact on this bound. In a clustered peer-to-peer system, the size of the bound largely depends on the average cluster size. When nodes are moved among clusters for load balancing, the numbers of cluster splits and merges are reduced.
This may reduce the maintenance cost of the overlay network.
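The neighborhood-averaging step described above can be sketched as a first-order diffusion iteration. This is a minimal illustration, not the thesis's actual scheme: the ring overlay, the capacity values, and the diffusion rate `alpha` are invented for the example.

```python
def diffusive_step(x, neighbors, alpha=0.3):
    # First-order diffusion: each node exchanges a fraction alpha of the
    # capacity difference with every neighbor. With symmetric links the
    # update conserves the total, so values converge to the global average.
    return [xi + alpha * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Ring overlay of 8 nodes with heterogeneous available capacities.
n = 8
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
x = [10.0, 0.0, 5.0, 1.0, 9.0, 2.0, 7.0, 6.0]
avg = sum(x) / n

for _ in range(200):
    x = diffusive_step(x, neighbors)
```

With symmetric links the update conserves total capacity, so every node approaches the global average, mirroring the convergence result discussed above.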
52

Modelling children under five mortality in South Africa using copula and frailty survival models

Mulaudzi, Tshilidzi Benedicta January 2022 (has links)
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2022 / This thesis is based on the application of frailty and copula models to an under-five child mortality data set in South Africa. The main purpose of the study was to apply sample splitting techniques in a survival analysis setting and to compare clustered survival models, considering left truncation, on the under-five child mortality data set in South Africa. The major contributions of this thesis are the application of the shared frailty model and a class of Archimedean copulas, in particular the Clayton-Oakes copula with completely monotone generator, and the introduction of sample splitting techniques in a survival analysis setting. The findings based on the shared frailty model show that the clustering effect was significant for modelling the determinants of time to death of under-five children, and revealed the importance of accounting for the clustering effect. The conclusion based on the Clayton-Oakes model showed association between survival times of children from the same mother. It was found that the parameter estimates for the shared frailty and the Clayton-Oakes models were quite different and that the two models are not comparable. Gender, province, year, birth order and whether a child is part of a twin were found to be significant factors affecting under-five child mortality in South Africa. / NRF-TDG Flemish Interuniversity Council Institutional corporation (VLIR-IUC) VLIR-IUC Programme of the University of Limpopo
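The Clayton-Oakes copula with a completely monotone generator corresponds to a shared gamma frailty, which suggests a standard Marshall-Olkin sampling sketch (illustrative only; `theta` and the sample size are arbitrary choices, not values from the thesis):

```python
import numpy as np

def sample_clayton(n, theta, d=2, rng=None):
    """Marshall-Olkin sampling of a Clayton copula: the completely
    monotone generator psi(t) = (1 + t)**(-1/theta) is the Laplace
    transform of a Gamma(1/theta) frailty W shared within a cluster."""
    rng = np.random.default_rng(rng)
    w = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))  # shared frailty
    e = rng.exponential(size=(n, d))                          # one draw per margin
    return (1.0 + e / w) ** (-1.0 / theta)                    # uniform margins

# theta = 2 gives Kendall's tau = theta / (theta + 2) = 0.5.
u = sample_clayton(20000, theta=2.0, rng=0)
```

Here each row plays the role of one cluster, e.g. two siblings whose survival times share an unobserved mother-level frailty.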
53

Development and Evaluation of Sequence Typing Assays for investigating the Epidemiology of Mycoplasma synoviae Outbreaks in Poultry

El-Gazzar, Mohamed Medhat 24 June 2014 (has links)
No description available.
54

DEVELOPMENT OF AN INKJET PRINTER AND A NOVEL DESIGN FOR APERIODIC CLUSTERED-DOT HALFTONE

Sige Hu (19184296) 22 July 2024 (has links)
Nowadays, inkjet printers are widely used all around the world. But how do they transfer a digital image into a map that controls nozzle firing? In this preliminary report, we briefly illustrate the part of the printing pipeline that starts from a halftone image and ends with Hardware Ready Bits (HRBs). We also describe the implementation of the multi-pass printing method with a designed print mask. HRBs are used to read an input halftone CMY image and output a binary map for each color that decides whether or not to eject the corresponding colorant drop at each pixel position. In general, for an inkjet printer, each row of the image corresponds to one specific nozzle in each swath, so that each swath will be the height of the printhead \cite{torpey1997multipass}. To avoid visible white streaks due to clogged or burned-out color nozzles, a method called multi-pass printing is implemented. Subsequently, the print mask is introduced so that we can decide during which pass each pixel should be printed. Once we have figured out how to transfer the digital image into printing signals, we turn to improving the color performance of the inkjet printer. In one of our previous papers \cite{wang2020developing}, we described the color management pipeline applied to our nail inkjet printer, which is used to map the source gamut to the destination printer gamut. However, the resulting prints are not as vivid as we would like, since they are not well saturated. To obtain more saturated prints, we propose a saturation enhancement method based on image segmentation and hue angle. This method will not necessarily give the closest representation of the colors in the input image, but it can give more saturated prints. The main idea of our saturation enhancement method is to keep the lightness and hue constant while stretching the chroma component.

In one of our previous papers \cite{hu2021improving}, we focused mostly on the color saturation problem in our inkjet printer. However, our partner reported boundary noise pixels on the background, which are quite visible when the background is white. By checking the pipeline of our printing procedure, we realized that these stray dots are generated during the halftoning procedure. This part of the dissertation is dedicated to separating the white background from the foreground, which enables us to constrain the error diffusion process inside the white background. The main idea is to apply image segmentation, which helps us extract the background precisely.

Lastly, inspired by the paper \cite{smith2023chiral}, we decided to design an aperiodic clustered-dot screen, which may perform better than the current DBS screen. This screen generation method is offline, so the time cost is not our main consideration; the quality of the output halftone is what we concentrate on. The screen is generated from a polygon shape called tile(1,1), defined in \cite{smith2023chiral}. We keep extending this single polygon shape to obtain an aperiodic combined shape called a supertile. After obtaining the final supertile, we assign each tile(1,1) shape to either a dot or a hole based on the complementary symmetry property. Finally, based on interpolation methods, we generate the threshold matrix.
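The "keep lightness and hue, stretch chroma" idea can be sketched in CIELAB, where scaling a* and b* by a common factor changes chroma but not the hue angle. This is a minimal illustration; the gain and chroma cap are invented parameters, not the dissertation's tuned values.

```python
import numpy as np

def stretch_chroma(lab, gain=1.3, c_max=100.0):
    """Saturation boost in CIELAB: hold L* and the hue angle fixed and
    scale the chroma C* = sqrt(a*^2 + b*^2). Scaling a* and b* by the
    same factor leaves h = atan2(b*, a*) unchanged."""
    lab = np.asarray(lab, dtype=float)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    c = np.hypot(a, b)
    # Clip the stretched chroma at c_max so colors stay inside a bound.
    scale = np.where(c > 0,
                     np.minimum(c * gain, c_max) / np.maximum(c, 1e-12),
                     1.0)
    return np.stack([L, a * scale, b * scale], axis=-1)

out = stretch_chroma([50.0, 30.0, 40.0])  # C* = 50 stretched to 65
```

A full pipeline would convert printer RGB/CMY to Lab and back around this step; only the chroma manipulation itself is shown here.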
55

Base excision repair of radiation-induced DNA damage in mammalian cells

Cooper, Sarah Louise Pamela January 2013 (has links)
A specific feature of ionising radiation is the formation of clustered DNA damage, where two or more lesions form within one to two helical turns of the DNA, induced by a single radiation track. The complexity of ionising radiation-induced DNA damage increases with increasing ionisation density, and it has been shown that complex DNA damage is repaired with reduced efficiency. In mammalian cells, base excision repair (BER) is the predominant pathway for the repair of non-DSB clustered DNA lesions and is split into two sub-pathways known as short patch (SP) BER and long patch (LP) BER. SP-BER is the predominant sub-pathway, especially in the repair of isolated DNA lesions. However, LP-BER is thought to play a greater role in the repair of radiation-induced clustered lesions. In this study, cell lines were generated that stably express the fluorescently tagged BER proteins XRCC1-YFP (a marker for SP-BER) or FEN1-GFP (a marker for LP-BER). The recruitment and loss of XRCC1-YFP and FEN1-GFP at sites of DNA damage induced by both ultrasoft X-ray (USX) irradiation, a form of low linear energy transfer (LET) radiation, and near infrared (NIR) laser microbeam irradiation (a 'mimic' of high LET radiation) were visualised in real time, and the decay kinetics of the fluorescently tagged proteins were determined. The half-life of fluorescence decay of FEN1-GFP following USX irradiation was longer than that of XRCC1-YFP, indicating that LP-BER is a slower process than SP-BER. Additionally, the fluorescence decay of XRCC1-YFP after NIR laser microbeam irradiation was fitted by bi-exponential decays with a fast component and a slow component, reflecting the involvement of XRCC1 in the repair of different types of DNA damage. In contrast to USX irradiation, where the XRCC1-YFP fluorescence decay reached background levels by 20 min, XRCC1-YFP still persisted at some of the NIR laser-induced DNA damage sites even after 4 hours.
This is consistent with the fact that the laser induces more complex damage that presents a major challenge to the repair proteins, persisting for much longer than the simple damage caused by low LET USX irradiation. Persistent, unrepaired DNA damage can potentially lead to mutations and replication-induced DSBs if it persists into S-phase. PARP1 inhibition reduced the recruitment of XRCC1 to DNA damage sites. However, a considerable amount of XRCC1 was still detected at the DNA damage sites, leading to the conclusion that there is a subset of DNA damage that requires XRCC1 but not PARP1 for repair. Understanding how clustered damage is repaired by the BER pathway can aid the design of future therapies which can be used in combination with radiotherapy to enhance the radiosensitisation effect. Knockdown of FEN1 was investigated and found to radiosensitise A549 (adenocarcinoma) cells, possibly as a result of an excess of unrepaired radiation-induced lesions requiring LP-BER for repair, although FEN1 knockdown alone induced cell death in non-cancerous BEAS-2B cells.
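The bi-exponential fluorescence decay described above can be sketched with a standard nonlinear least-squares fit. The data here are synthetic: the amplitudes, half-lives, and noise level are invented for illustration and are not the study's measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_fast, k_fast, a_slow, k_slow):
    """Bi-exponential loss of fluorescence at a damage site:
    a fast-repaired component plus a slowly-repaired component."""
    return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t)

# Synthetic decay with half-lives ln(2)/k of ~2 min (fast) and ~60 min (slow).
t = np.linspace(0, 240, 121)  # minutes
rng = np.random.default_rng(1)
y = biexp(t, 0.6, np.log(2) / 2, 0.4, np.log(2) / 60) \
    + rng.normal(0, 0.01, t.size)

p, _ = curve_fit(biexp, t, y, p0=(0.5, 0.3, 0.5, 0.01))
half_fast, half_slow = np.log(2) / p[1], np.log(2) / p[3]
```

Comparing the two recovered half-lives separates a prompt repair component from a persistent one, which is the kind of decomposition used for the XRCC1-YFP kinetics above.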
56

Alguns métodos de amostragem para populações raras e agrupadas / Some sampling methods for rare and clustered populations

Affonso, Luis Henrique Teixeira Alves 11 April 2008 (has links)
In many scientific surveys, data collection is difficult because the object of study is hard to observe, as in studies of individuals with rare diseases, individuals with elusive behaviour, or individuals that are sparsely distributed geographically. In this work we study sampling schemes for rare populations, with special attention to populations that are both rare and clustered. We examine in depth the techniques of adaptive cluster sampling and two-stage sequential sampling, giving the reader the theoretical background needed to understand the foundations of these techniques, as well as the efficiency of their estimators as demonstrated in simulation studies. In our simulation studies, we show that two-stage sequential sampling does not lose efficiency when the clustering of the elements is weaker. However, the comparative studies reveal that when the population is rare and clustered, adaptive cluster sampling is more efficient under most of the parameterizations used. At the end of this work, we provide recommendations for the various situations regarding knowledge of the rarity and clustering of the population under study.
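Adaptive cluster sampling on a rare, clustered population can be sketched as follows. This is a toy grid population with the modified Hansen-Hurwitz estimator; the patch layout and sample sizes are invented for illustration.

```python
import random

def neighbors(i, j, n):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield i + di, j + dj

def network(grid, start):
    """Flood-fill the network of adjacent nonzero units containing start."""
    n = len(grid)
    seen, stack = {start}, [start]
    while stack:
        i, j = stack.pop()
        for nb in neighbors(i, j, n):
            if grid[nb[0]][nb[1]] > 0 and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen

def acs_estimate(grid, n_initial, rng):
    """Adaptive cluster sampling with the modified Hansen-Hurwitz
    estimator: each initial unit contributes the mean y-value of its
    network (a zero unit is its own network and contributes 0)."""
    n = len(grid)
    cells = [(i, j) for i in range(n) for j in range(n)]
    total = 0.0
    for (i, j) in rng.sample(cells, n_initial):
        if grid[i][j] > 0:
            net = network(grid, (i, j))
            total += sum(grid[a][b] for a, b in net) / len(net)
    return total / n_initial

# Rare, clustered population on a 20x20 grid: two tight 3x3 patches.
n = 20
grid = [[0] * n for _ in range(n)]
for (ci, cj) in [(4, 5), (14, 12)]:
    for di in range(3):
        for dj in range(3):
            grid[ci + di][cj + dj] = 5

rng = random.Random(7)
true_mean = sum(map(sum, grid)) / n**2
est = sum(acs_estimate(grid, 40, rng) for _ in range(500)) / 500
```

Averaged over repeated samples, the estimator recovers the population mean while the survey effort concentrates on the patches, which is what makes the design attractive for rare and clustered populations.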
57

Integrated Optimal Code Generation for Digital Signal Processors

Bednarski, Andrzej January 2006 (has links)
In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs).

Code generation consists mainly of three interrelated optimization tasks: instruction selection (with resource allocation), instruction scheduling and register allocation. These tasks have been shown to be NP-hard for most architectures and most situations. A common approach to code generation is to solve each task separately, i.e. in a decoupled manner, which is easier from a software engineering point of view. Phase-decoupled compilers produce good code quality for regular architectures, but when applied to DSPs the resulting code is of significantly lower performance due to strong interdependences between the different tasks.

We developed a novel method for fully integrated code generation at the basic block level, based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small, yet not trivial, problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality with optimal scheduling of data transfers on irregular processor architectures into account. For larger problem instances we have developed heuristic relaxations.

In order to obtain a retargetable framework we developed a structured architecture specification language, xADML, which is based on XML. We implemented such a framework, called OPTIMIST, that is parameterized by an xADML architecture specification.

The thesis further provides an Integer Linear Programming formulation of fully integrated optimal code generation for VLIW architectures with a homogeneous register file. Where it terminates successfully, the ILP-based optimizer mostly works faster than the dynamic programming approach; on the other hand, it fails for several larger examples where dynamic programming still provides a solution. Hence, the two approaches complement each other. In particular, we show how the dynamic programming approach can be used to precondition the ILP formulation.

As far as we know from the literature, this is the first time that the main tasks of code generation are solved optimally in a single and fully integrated optimization step that additionally considers data placement in register sets and optimal scheduling of data transfers between different register sets.
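The flavor of integrated code generation by dynamic programming over basic blocks can be illustrated with a toy example, where the DP state couples the set of already-issued instructions with the last issue decision. This is a drastically simplified sketch, not OPTIMIST's algorithm; the DAG, the functional units, and the cost model are invented.

```python
from functools import lru_cache

# Tiny basic block as a DAG: instruction -> set of predecessors.
deps = {"a": set(), "b": set(), "c": {"a"}, "d": {"a", "b"}, "e": {"c", "d"}}
# Hypothetical functional-unit assignment; issuing two consecutive
# instructions on the same unit incurs a penalty (the coupling that
# makes phase-decoupled scheduling suboptimal).
unit = {"a": "alu", "b": "mul", "c": "alu", "d": "mul", "e": "alu"}

def cost(prev, nxt):
    return 2 if prev is not None and unit[prev] == unit[nxt] else 1

ALL = frozenset(deps)

@lru_cache(maxsize=None)
def best(done, last):
    """Minimum cost to issue the remaining instructions, given the set
    already issued and the last instruction issued."""
    if done == ALL:
        return 0
    ready = [i for i in ALL - done if deps[i] <= done]
    return min(cost(last, i) + best(done | {i}, i) for i in ready)

optimal = best(frozenset(), None)  # a,b,c,d,e alternates units: cost 5
```

Because the state records the last decision, the DP explores all dependency-respecting orders at once instead of committing to a schedule before costs are known, which is the essence of solving the tasks in one integrated step.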
59

Family-centered Care Delivery: Comparing Models of Primary Care Service Delivery in Ontario

Mayo-Bruinsma, Liesha 04 May 2011 (has links)
Family-centered care (FCC) focuses on considering the family in planning/implementing care and is associated with increased patient satisfaction. Little is known about factors that influence FCC. Using linear mixed modeling and Generalized Estimating Equations to analyze data from a cross-sectional survey of primary care practices in Ontario, this study sought to determine whether models of primary care service delivery differ in their provision of FCC and to identify characteristics of primary care practices associated with FCC. Patient-reported scores of FCC were high, but did not differ significantly among primary care models. After accounting for patient characteristics, practice characteristics were not significantly associated with patient-reported FCC. Provider-reported scores of FCC were significantly higher in Community Health Centres than in Family Health Networks. Higher numbers of nurse practitioners and clinical services on site were associated with higher FCC scores but scores decreased as the number of family physicians at a site increased.
