
New care home admission following acute hospitalisation : a mixed methods approach

Burton, Jennifer Kirsty January 2018 (has links)
Care home admission following acute hospitalisation is a lived reality across Scotland, experienced by over 8,000 people annually. The aim of this thesis was to develop an understanding of new care home admission following acute hospitalisation. Methods and findings from the mixed-methods approach are presented in three parts. Part One: Identifying relevant research - includes a review of quality assessment tools for systematic reviewing; a systematic review and meta-analysis of quantitative data from observational studies of predictors of care home admission from hospital; and a methodological chapter on developing a search filter to improve accessibility of existing research findings supported by the findings of an international survey of care home researchers. The systematic review identified 53 relevant studies from 16 countries comprising a total population of 1,457,881 participants. Quantitative synthesis of the results from 11 of the studies found that increased age (OR 1.02 per year increase; 95% CI 1.00-1.04), female sex (OR 1.41; 95% CI 1.03-1.92), dementia and cognitive impairment (OR 2.14; 95% CI 1.24-3.70) and functional dependency (OR 2.06; 95% CI 1.58-2.69) were all associated with an increased risk of care home admission after hospitalisation. Despite international variation in service provision, only two studies described the model of care provided in the care home setting. The survey identified that there is a lack of shared terminology in the published literature to describe settings for adults who are unable to live independently in their own homes and require care in a long-term institutional setting. A search filter to identify relevant research could help to overcome differences in terminology and improve synthesis of existing research evidence.
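The pooled estimates above combine study-level odds ratios on the log scale. As a minimal illustration of the arithmetic behind a single study's contribution, the sketch below computes an odds ratio and Wald 95% CI from a 2x2 table; the counts are hypothetical, not data from the review.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed, admitted to care home;  b = exposed, not admitted;
    c = unexposed, admitted;             d = unexposed, not admitted."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 20/80 admitted among exposed, 10/90 among unexposed
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
```

A CI whose lower bound crosses 1.0 (as here) would not reach statistical significance, which is why the review's synthesis reports intervals alongside each pooled OR.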
Part Two: Exploring current clinical practice - reports the findings of a retrospective cohort study of new care home admissions from hospital using case-note review methodology accompanied by findings from inductive thematic analysis of a single dataset from a qualitative case study design exploring the experiences of a patient, their family, and practitioners (n=5). The cohort study (n=100) found a heterogeneous picture with long hospital admissions (range 14-231 days), frequent transfers of care (31% experienced three or more transfers), varied levels of documented assessment and a lack of documented patient involvement in the decision-making processes. The qualitative interviews allowed the patient voice to emerge, alongside the professional and family narrative which dominated case-note documentation. Inductive thematic analysis identified nine major themes exploring how decisions are made to discharge individuals directly into a care home from the acute hospital setting: biography & personality; professional role; family role; limitations in local model of care; ownership of decision; risk; realising preferences; uncertainty of care home admission process; and psychological impact of in-hospital care. Part Three: Harnessing routinely-collected data - includes the challenges of identifying care home residency at admission and discharge from hospital, presenting analysis of the accuracy of Scottish Morbidity Record 1 (SMR01) coding in NHS Fife and the Community Health Index (CHI) Institution Flag in NHS Fife and NHS Tayside. This is followed by a descriptive analysis of the Scottish Care Home Census (2013-16) as a novel social care data source to explore care home admissions from hospital and the methodology for a data linkage study using these data. Identifying care home residents in routine data sources is challenging. 
In 18,720 admissions to NHS Fife, SMR01 coding had a sensitivity of 86.0% and positive predictive value of 85.8% in identifying care home residents on admission. At discharge the sensitivity was 87.0% and positive predictive value was 84.5%. From a sample of 10,000 records, the CHI Institution Flag had a sensitivity of 58.6% in NHS Fife and 89.3% in NHS Tayside, with positive predictive values of 99.7% and 97.7% respectively. From 2013-16, of 21,368 admissions to care homes in Scotland, 56.7% were admitted from hospital. There was significant regional variation in rates of care home admission from hospital (35.9-64.7%) and proportion of Local Authority funded places provided to admissions from hospital (34.4-73.9%). Those admitted from hospital appeared to be more dependent and sicker than those admitted from home. This thesis has established a series of challenges in how care homes and their residents are identified. It has questioned the adequacy of the evidence to guide practitioners and sought to raise the profile of this vulnerable and complex population and how best to support them in making decisions regarding admission from the acute hospital. It has progressed our understanding of this under-explored area and proposes a programme of future mixed-methods research involving patients, families, practitioners and policy-makers.
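Sensitivity and positive predictive value, as used throughout the coding-accuracy analysis above, are simple ratios over a validation table. A minimal sketch, with hypothetical counts chosen only to echo the scale of the SMR01 figures:

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP / (TP + FN): share of true care home residents
    that the coding flags. PPV = TP / (TP + FP): share of flagged
    records that really are care home residents."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts, not the thesis data:
sens, ppv = sensitivity_ppv(tp=860, fp=142, fn=140)
```

The contrast reported above (CHI flag: low sensitivity in Fife but very high PPV) corresponds to many missed residents (large FN) alongside few false flags (small FP).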

Evolving role of shareholders and the future of director primacy theory

Solak, Ekrem January 2018 (has links)
Over the last two decades, US corporate governance has witnessed a significant increase in the incidence and influence of shareholder activism. Shareholder activism, however, has been found to be inconsistent with US corporate governance, which is framed within director primacy theory. In this theory, the board is able to carry out a unique combination of managerial and monitoring roles effectively, and shareholders are only capital providers to companies. Normatively, shareholder activism is considered inimical to effective and efficient decision-making, i.e. the board's authority, and to the long-term interests of public companies. The increasing willingness of institutional shareholders to participate in the decision-making processes of their portfolio companies is at odds with US corporate governance. Therefore, the aim of this thesis is to examine whether director primacy theory should be softened to accommodate greater shareholder activism in US corporate governance. This thesis presents an analysis of the legal rules that reflect director primacy theory. In this respect, US shareholders have traditionally had limited participatory power. The way in which the courts perceived the board's authority also stymied shareholder participation. This thesis considers not only legal and regulatory developments in the wake of the 2007-2008 financial crisis, but also governance developments through by-law amendments which could potentially change the overall balance of power between shareholders and the board. Shareholders are slowly moving to the centre of corporate governance in the US. History has shown that the board of directors has often failed to prevent manager-induced corporate governance failures. This thesis argues that shareholder activism is necessary for improving the web of monitoring mechanisms and for a well-functioning director primacy model.
Shareholder activism forces the board to be more critical of management, which is a prerequisite for the director primacy model. This thesis therefore argues that shareholder activism should be accommodated within US corporate governance. The proposed approach addresses accountability problems more effectively than the current director primacy model, recognising the board's authority while enhancing the decision-making processes of public companies. In this regard, it makes several recommendations to soften the current director primacy model: establishing a level playing field for private ordering, adopting a proxy access default regime, the majority voting rule and the universal proxy rules, and enhancing the disclosure requirements of shareholders. The present research also demonstrates that contemporary shareholder activism involves many complexities. It encompasses different types of activism, which differ by objectives, tools, and motives, and which may serve purely financial purposes, non-financial purposes, or both. Furthermore, the concept of stewardship has been developed to address public interest concerns, namely short-termism in the market and pressures by activist funds through shareholder activism. In this way, this thesis develops a complete positive theory of shareholder activism rather than focussing on a specific type of activism; this complete analytical framework constitutes a more reliable basis from which to draw normative conclusions.

Dense wireless network design and evaluation : an aircraft cabin use case

Cogalan, Tezcan January 2018 (has links)
One of the key requirements of fifth generation (5G) systems is having a connection to mobile networks without interruption at anytime and anywhere, which is also known as seamless connectivity. Nowadays, fourth generation (4G) systems, Long Term Evolution (LTE) and Long Term Evolution Advanced (LTE-A), are mature enough to provide connectivity to most terrestrial mobile users. However, for airborne mobile users, no uninterrupted connection exists. According to the regulations, mobile connectivity for aircraft passengers can only be established when the altitude of the aircraft is above 3000 m. Along with demands to have mobile connectivity during a flight and the seamless connectivity requirement of 5G systems, there is a notable interest in providing in-flight wireless services during all phases of a flight. In this thesis, many issues related to the deployment and operation of onboard systems are investigated. A measurement and modelling procedure to investigate radio frequency (RF) propagation inside an aircraft is proposed. Unlike existing studies of in-cabin channel characterization, the proposed procedure takes into account the deployment of a multi-cell onboard system. The proposed model is verified through another set of measurements where reference signal received power (RSRP) levels inside the aircraft are measured. The results show that the proposed model closely matches the in-cabin RSRP measurements. Moreover, in order to enforce the distance between a user and an interfering resource, cell sectorization is employed in the multi-cell onboard system deployment. The proposed propagation model is used to find an optimum antenna orientation that minimizes the interference level among the neighbouring evolved nodeBs (eNBs). Once the optimum antenna deployment is obtained, comprehensive downlink performance evaluations of the multi-cell, multi-user onboard LTE-A system are carried out.
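The abstract does not give the fitted propagation model, but in-cabin RSRP prediction of this kind is commonly built on a log-distance path-loss law. The sketch below is a generic illustration; the reference loss `pl0_db` and the exponent `n` are assumptions, not the thesis's measured parameters.

```python
import math

def rsrp_dbm(tx_power_dbm, d_m, d0_m=1.0, pl0_db=38.0, n=2.2):
    """Received power via log-distance path loss:
    PL(d) = PL(d0) + 10 * n * log10(d / d0).
    pl0_db (loss at the 1 m reference) and n (path-loss exponent)
    are illustrative values for an indoor-like environment."""
    pl_db = pl0_db + 10 * n * math.log10(d_m / d0_m)
    return tx_power_dbm - pl_db
```

Fitting such a model to in-cabin measurements amounts to estimating `pl0_db` and `n` per cell, after which predicted RSRP can be compared against a second, independent measurement set, as described above.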
Techniques that are proposed for LTE-A systems, namely enhanced inter-cell interference coordination (eICIC) and carrier aggregation (CA), are employed in the system analysis. Different numbers of eNBs, antenna mounting positions and scheduling policies are examined. A scheduling algorithm that provides a good tradeoff between fairness and system throughput is proposed. The results show that the downlink performance of the proposed onboard LTE-A system achieves not only 75% of the theoretical limits of the overall system throughput but also fair user data rate performance, irrespective of a passenger's seat location. To satisfy the seamless connectivity requirement of 5G systems, the compatibility between the proposed onboard system deployment and already-deployed terrestrial networks is investigated. Simulation-based analyses are carried out to investigate power leakage from the onboard systems while the aircraft is in the parked position on the apron. According to the regulations, the onboard system should not increase the noise level of the already-deployed terrestrial system by more than 1 dB. Results show that the proposed onboard communication system can be operated while the aircraft is in the parked position on the apron without exceeding the 1 dB increase in the noise level of the already-deployed terrestrial 4G network. Furthermore, handover parameters are obtained for different transmission power levels of both the terrestrial and onboard systems to make the transition from one system to another without interruption while a passenger boards or leaves the aircraft. Simulation and measurement based analyses show that when the RSRP level of the terrestrial system is below -65 dBm around the aircraft, a boarding passenger can be smoothly handed over to the onboard system and vice versa.
Moreover, in order to trigger the handover process without interfering with the data transmission, a broadcast control channel (BCCH) power boosting feature is proposed for the in-cabin eNBs. Results show that employing the BCCH power boosting feature helps to trigger the handover process as soon as the passengers step on board the aircraft.
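The -65 dBm terrestrial RSRP threshold reported above can be read as a simple handover trigger condition. A minimal sketch, assuming a hypothetical hysteresis margin (the abstract does not specify one):

```python
def should_handover_to_cabin(terrestrial_rsrp_dbm, cabin_rsrp_dbm,
                             threshold_dbm=-65.0, hysteresis_db=3.0):
    """Hand a boarding passenger over to the onboard eNB once the
    terrestrial RSRP drops below the threshold reported in the thesis
    (-65 dBm) AND the cabin cell is stronger by a hysteresis margin.
    The 3 dB margin is an illustrative assumption to avoid ping-pong
    handovers, not a value from the thesis."""
    return (terrestrial_rsrp_dbm < threshold_dbm and
            cabin_rsrp_dbm > terrestrial_rsrp_dbm + hysteresis_db)
```

Boosting the BCCH power, as proposed above, raises `cabin_rsrp_dbm` as seen by a boarding passenger, so the condition is met earlier without increasing interference on the data channels.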

Properties and performance of lime mortars for conservation : the role of binder chemistry and curing regime

Figueiredo, Cristiano January 2018 (has links)
The selection of mortar for conservation of historic and heritage buildings can be challenging. Achieving compatibility with the historic fabric, durability and efficient use of materials within a practical timeframe often requires the use of hydraulic lime-based mortars, which set more rapidly than the more traditional air lime mortars. These are considered to be more compatible with historic fabric than cement-based mortars, although, due to modern production techniques and their natural variability, a deeper knowledge of their chemical and physical properties is needed to minimise damage due to incompatibility and make the decision process easier and safer. Natural hydraulic lime (NHL) binders are currently classified under EN 459-1:2015 in three designations, NHL 2, NHL 3.5 and NHL 5, with the suffix representing the minimum compressive strength (in MPa) of a standard mortar mix at 28 days. The performance of NHL binders, manufactured by burning a naturally impure limestone, can be difficult to predict due to the inherent variability of both their physical and chemical characteristics. At the same time, the tolerance values for each classification allow binders with significantly different compressive strengths to be classified under the same designation. The main aim of this research was to study a range of NHL binders, understand and quantify the variability of their characteristics and establish how these properties influence the performance of mortars cured under standard and simulated weather conditions. In the first stage of the project, a selection of NHL binders of different origins and distinct designations was rigorously examined through physical, chemical and mineralogical characterisation to elucidate surface area, particle size distribution, oxide composition and crystalline phase composition.
The characteristics of the binders were found to vary greatly, particularly amongst binders from the same classification and distinct origins, and in one particular case even from the same origin. A change of properties over time was also identified: binders manufactured in different years could have very different properties, even though, as far as could be ascertained from the packaging, they were the same product. Starting from a selection of 11 NHLs and 1 hydrated lime, the next step involved the manufacture of mortar samples using a sand aggregate appropriate for a conservation mortar with a 1:2 ratio (binder:aggregate by volume). Sufficient water was added to produce a spread by flow table of 165 ± 10 mm. These mortars were cured under standard conditions and, for a smaller group of binders, under simulated weather conditions. For the standard cure conditions, the binder properties were compared with the physical properties of the mortars made with them: strength (from 7 to 1080 days), porosity, capillary water absorption, water vapour permeability and freeze-thaw resistance. Carbonation was also studied by phenolphthalein stain after all the flexural strength tests and after 2 years by XRD. The mortars under climate simulation were studied in terms of mechanical properties (up to 360 days) and carbonation. For comparison purposes, cement-lime (1:1:6 and 1:2:9 cement:lime:aggregate volumetric ratio), lime-metakaolin (MK) (with MK addition of 5, 10 and 20% of the lime mass) and lime putty mortars were manufactured to the same workability as the NHL mortars. These were studied in terms of strength up to 360 days, porosity and water absorption by capillary action. The strength of the studied mortars does not follow the classification of the binders, with one binder, specified as NHL 2, resulting in a stronger mortar than another binder specified as NHL 5, and one NHL 3.5 mortar surpassing all the other mortars in terms of mechanical strength.
The mechanical strength was found to correlate with the hydraulic phases, alite and belite, identified within the binders. The relative long-term performance of the mortars manufactured with the different binders can therefore be predicted based on the mineral properties rather than the standard classification. Pore-related properties, such as water vapour permeability and water absorption by capillarity, were found to be related to the water/binder ratio of the NHL mortars. Later in the project, using the standard cured mortar data, a model was developed to predict compressive strength based on the proportion of crystalline phases present in the mortars, the surface area and the water/binder ratio. This model, applied to the studied mortars, was found to predict the measured performance of the mortars with low error, meaning that the model can be used as a tool to predict mortar strength. The outcomes of this thesis demonstrated that with sufficient knowledge of the underlying chemistry of NHL binders, it is possible to establish the relative performance of mortars, thus making the decision on which binder to use easier and safer for the historic fabric.
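The strength-prediction model described above regresses compressive strength on phase proportions, surface area and water/binder ratio. A minimal least-squares sketch with entirely hypothetical training data; the thesis's measured values and fitted coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical training rows: [alite %, belite %, surface area m2/g, w/b ratio]
X = np.array([[ 5.0, 20.0, 1.2, 0.9],
              [12.0, 30.0, 1.0, 0.8],
              [ 2.0, 15.0, 1.5, 1.0],
              [18.0, 35.0, 0.9, 0.7]])
y = np.array([2.1, 5.4, 1.2, 8.0])  # hypothetical 28-day compressive strength, MPa

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_strength(features):
    """Predict strength (MPa) from [alite %, belite %, surface area, w/b]."""
    return float(coef[0] + np.dot(coef[1:], features))
```

The point of such a model, as the thesis argues, is that mineralogical inputs predict relative mortar performance better than the EN 459-1 designation alone.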

Evaluation of power consumption and trade-offs in 5G mobile communications networks

Alhumaima, Raad January 2017 (has links)
In this thesis, component- and parameter-based power models (PMs) are produced to measure the power consumption (PC) of the cloud radio access network (C-RAN) architecture. In the component PM, the power figure of each component within C-RAN is evaluated. This model is then parameterised so that the computational complexity of each component is converted into a straightforward but accurate method, called the parameterised PM. This model compares the cooling and total PC of the traditional LTE architecture with C-RAN, considering parameters such as the utilised bandwidth and the numbers of antennas, baseband units (BBUs) and remote radio heads (RRHs); it shows a reduction in power of about 33%. Next, this PC model is extended to exhibit the cost of integrating software-defined networks (SDNs) with C-RAN, alongside modelling the power cost of the control plane units in the core network (CN), such as the serving gateway (SGW), packet gateway (PGW) and mobility management entity (MME). Although there is a power cost, the proposed model shows directions to mitigate it. A simplified PM is then proposed for virtualisation-based C-RAN. This model shows, in a time- and cost-effective way, the power cost of server virtualisation when hosting several virtual machines (VMs); the total reduction in PC was about 75%, due to cutting the number of active servers in the network. The latency cost of this technique is also modelled. Finally, to enable efficient virtualisation, live migration of VMs amongst servers is vital; however, this advantage comes with migration time and power costs. A model is therefore proposed to calculate the power cost of VM live migration and to show the effect of such a decision upon the total PC of the network/C-RAN. The proposed work converts the complexity of other proposed PMs into a simplified and low-cost method. The migration time cost is added to the virtualisation time cost to formulate the total delay expected prior to the execution of these techniques.
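The shape of a parameterised power model of the kind described above can be sketched as static per-unit power terms scaled by load-related parameters, plus a cooling overhead. All coefficients below are illustrative assumptions, not the thesis's fitted values.

```python
def cran_power_w(n_bbu, n_rrh, p_bbu_w=250.0, p_rrh_w=100.0,
                 bw_mhz=20.0, n_ant=2, cooling_overhead=0.1):
    """Illustrative parameterised power model for a C-RAN deployment:
    per-BBU static power, per-RRH power scaled linearly with bandwidth
    and antenna count, and a multiplicative cooling overhead.
    Coefficient values are assumptions for the sketch only."""
    load_scale = (bw_mhz / 20.0) * n_ant        # relative to a 20 MHz, 1-antenna unit
    p_it = n_bbu * p_bbu_w + n_rrh * p_rrh_w * load_scale
    return p_it * (1 + cooling_overhead)
```

Consolidating BBUs into a shared pool (fewer `n_bbu` for the same `n_rrh`) is how the C-RAN architecture realises the kind of PC reduction the thesis quantifies.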

Discovering Network Control Vulnerabilities and Policies in Evolving Networks

Jermyn, Jill Louise January 2017 (has links)
The range and number of new applications and services are growing at an unprecedented rate. Computer networks need to be able to provide connectivity for these services and meet their constantly changing demands. This requires not only support of new network protocols and security requirements, but often architectural redesigns for long-term improvements to efficiency, speed, throughput, cost, and security. Networks are now facing a drastic increase in size and are required to carry a constantly growing amount of heterogeneous traffic. Unfortunately such dynamism greatly complicates security of not only the end nodes in the network, but also of the nodes of the network itself. To make matters worse, just as applications are being developed at faster and faster rates, attacks are becoming more pervasive and complex. Networks need to be able to understand the impact of these attacks and protect against them. Network control devices, such as routers, firewalls, censorship devices, and base stations, are elements of the network that make decisions on how traffic is handled. Although network control devices are expected to act according to specifications, there can be various reasons why they do not in practice. Protocols could be flawed, ambiguous or incomplete, developers could introduce unintended bugs, or attackers may find vulnerabilities in the devices and exploit them. Malfunction could intentionally or unintentionally threaten the confidentiality, integrity, and availability of end nodes and the data that passes through the network. It can also impact the availability and performance of the control devices themselves and the security policies of the network. The fast-paced evolution and scalability of current and future networks create a dynamic environment for which it is difficult to develop automated tools for testing new protocols and components. 
At the same time, they make the function of such tools vital for discovering implementation flaws and protocol vulnerabilities as networks become larger and more complex, and as new and potentially unrefined architectures become adopted. This thesis will present the design, implementation, and evaluation of a set of tools designed for understanding the implementation of network control nodes and how they react to changes in traffic characteristics as networks evolve. We will first introduce Firecycle, a test bed for analyzing the impact of large-scale attacks and Machine-to-Machine (M2M) traffic on the Long Term Evolution (LTE) network. We will then discuss Autosonda, a tool for automatically discovering rule implementation and finding triggering traffic features in censorship devices. This thesis provides the following contributions:
1. The design, implementation, and evaluation of two tools to discover models of network control nodes in two scenarios of evolving networks: mobile networks and the censored Internet
2. The first existing test bed for analysis of large-scale attacks and the impact of traffic scalability on LTE mobile networks
3. The first existing test bed for LTE networks that can be scaled to arbitrary size and that deploys traffic models based on real traffic traces taken from a tier-1 operator
4. An analysis of traffic models of various categories of Internet of Things (IoT) devices
5. The first study demonstrating the impact of M2M scalability and signaling overload on the packet core of LTE mobile networks
6. A specification for modeling censorship device decision models
7. A means for automating the discovery of features utilized in censorship device decision models, comparison of these models, and their rule discovery
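The rule-discovery idea behind a tool like Autosonda — mutate a blocked request one feature at a time and observe which mutation flips the censor's verdict — can be sketched generically. The oracle below is a stand-in callable for testing, not Autosonda's actual API.

```python
def discover_triggers(base_request, mutations, is_blocked):
    """Toggle one candidate feature at a time on a blocked request;
    a mutation that flips the verdict to 'allowed' marks that feature
    as a trigger. `is_blocked` is a caller-supplied oracle (in a real
    probe it would send the request and watch for blocking behaviour;
    here it is any callable so the sketch stays self-contained)."""
    if not is_blocked(base_request):
        return []  # nothing to explain: the base request already passes
    return [name for name, mutate in mutations.items()
            if not is_blocked(mutate(base_request))]

# Hypothetical usage with a toy oracle that blocks on the Host field:
oracle = lambda r: "forbidden" in r["host"]
mutations = {
    "host": lambda r: {**r, "host": "ok.example"},
    "path": lambda r: {**r, "path": "/other"},
}
triggers = discover_triggers({"host": "forbidden.example", "path": "/x"},
                             mutations, oracle)
```

Comparing the trigger sets discovered across devices is one way to build the decision-model comparisons listed in the contributions.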

Financování schodku státního rozpočtu prostřednictvím emise dluhopisů / Financing government deficits by emission of government bonds

Schiller, Jan January 2011 (has links)
The aim of this thesis is to point out recent developments in the field of debt creation, their concordance with academic practice, and to outline the feasible utilization of financial modeling in the area of government deficits. The effort is to put the institutional operation of debt management into the context of the recent history of financial markets and to verify its success. The process of debt portfolio management with the use of advanced financial tools is shown on the example of the Czech debt manager. From observation of the overall environment, we can confirm an effort to develop an efficient domestic debt market and the conception of long-term strategies based on risk-management principles, and we draw a set of specific recommendations applicable to both local and general conditions.

Disponibilité à long terme des ressources mondiales d'uranium / Long-term availability of global uranium resources

Monnet, Antoine 02 November 2016 (has links)
Dans une perspective mondiale de décarbonisation de la production énergétique et de croissance de la production d’électricité d’origine nucléaire, la disponibilité des ressources d’uranium est un enjeu majeur. Les technologies futures qui permettront aux réacteurs nucléaires de s’affranchir de l’uranium naturel mettront du temps à être pleinement déployées. Nous analysons donc les conditions de disponibilité de l’uranium au XXIe siècle. Les deux premières sont liées au coût de production : ce sont l’accessibilité technique et l’intérêt économique. Nous les étudions en modélisant les ressources ultimes d’uranium (quantités découvertes et non découvertes) et leurs coûts. Cette méthode s’appuie sur un découpage régional du monde, la connaissance actuelle des gisements et un filtre économique. Elle permet d’établir une courbe d’offre de long terme où les quantités d’uranium techniquement accessibles sont fonction du coût de production. Les principales incertitudes de ces estimations ont été identifiées et l’on montre qu’en l’absence de découpage régional, les ressources ultimes sont sous-estimées. Les autres conditions de disponibilité de l’uranium prises en compte sont liées aux dynamiques de marché que crée la confrontation de l’offre et de la demande. Nous les étudions en les modélisant sous la forme de contraintes dynamiques dans un modèle de marché en équilibre partiel. Ce modèle est déterministe et les acteurs y sont représentés par région. Il permet de tenir compte, par exemple, de la corrélation à court terme entre le prix et les dépenses d’exploration, qui fait l’objet d’une étude économétrique spécifique. À plus long terme, les contraintes modélisées incluent l’anticipation de la demande par les consommateurs et de la raréfaction progressive des ressources ultimes les moins chères. 
Par une série de simulations prospectives, nous montrons que le rythme de croissance de la demande d’uranium au XXIe siècle et son anticipation ont une forte influence sur la hausse du prix à long terme. À l’inverse, les incertitudes liées à l’estimation des ressources ultimes ont une influence limitée. Nous soulignons également l’évolution inégale du poids des différentes régions dans la production mondiale. Enfin, certaines variations de l’offre (arrêt de la production d’une région par exemple) ou de la demande (croissance irrégulière ou introduction de nouvelles technologies) ont également une influence significative sur l’évolution du prix à long terme ou sa cyclicité. / From a global perspective, a low-carbon path to development driven by a growth of nuclear power production raises issues about the availability of uranium resources. Future technologies allowing nuclear reactors to overcome the need for natural uranium will take time to fully deploy. To address these issues, we analyze the conditions of availability of uranium in the 21st century. The first two conditions are technical accessibility and economic interest, both related to the cost of production. We study them using a model that estimates the ultimate uranium resources (amounts of both discovered and undiscovered resources) and their costs. This model splits the world into regions and the resource estimate for each region derives from the present knowledge of the deposits and economic filtering. The output is a long-term supply curve that illustrates the quantities of uranium that are technically accessible as a function of their cost of production. We identify the main uncertainties of these estimates and we show that with no regional breakdown, the ultimate resources are underestimated. The other conditions of availability of uranium covered in our study are related to the market dynamics, i.e. they derive from the supply and demand clearing mechanism.
To assess their influence, they are introduced as dynamic constraints in a partial equilibrium model. This model of the uranium market is deterministic, and market players are represented by regions. For instance, it takes into account the short-term correlation between price and exploration expenditures, which is the subject of a dedicated econometric study. In the longer term, constraints include anticipation of demand by consumers and a gradual depletion of the cheapest ultimate resources. Through a series of prospective simulations, we demonstrate the strong influence on long-term price trends of both the growth rate of demand during the 21st century and its anticipation. Conversely, the uncertainties related to the estimation of ultimate resources have limited influence. We also underline the uneven evolution of market shares between regions. Finally, particular changes in supply (production shutdown in one of the regions, for example) or in demand (irregular growth or introduction of new technology) also have a significant influence on the evolution of the long-term price or its cyclicity.
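The long-term supply curve described above is a cumulative ordering of regional resource estimates by unit cost. A structural sketch; the figures below are placeholders, not the thesis's regional estimates.

```python
def supply_curve(regional_resources):
    """Aggregate regional (unit cost, quantity) estimates into a
    cumulative long-term supply curve: total quantity technically
    accessible at or below each cost level. Inputs would come from
    the regional ultimate-resource model; values here are placeholders."""
    curve, cumulative = [], 0.0
    for cost, qty in sorted(regional_resources):  # ascending unit cost
        cumulative += qty
        curve.append((cost, cumulative))
    return curve

def available_below(curve, max_cost):
    """Quantity accessible at a production cost <= max_cost."""
    qty = 0.0
    for cost, cumulative in curve:
        if cost <= max_cost:
            qty = cumulative
    return qty

# Placeholder regions: (unit cost $/kgU, quantity Mt U)
curve = supply_curve([(80.0, 2.0), (40.0, 1.0), (130.0, 3.0)])
```

Aggregating per region before summing, rather than pooling deposits worldwide, is precisely why the thesis finds that dropping the regional breakdown underestimates ultimate resources.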

O modelo Weibull Modificado Exponenciado de Longa Duração aplicado à sobrevida do câncer de mama / Exponentiated Modified Weibull model for Long-Term survivors applied on breast cancer survival

Hayala Cristina Cavenague de Souza 04 May 2015 (has links)
O câncer de mama é a neoplasia mundialmente mais incidente em mulheres, representando a causa mais frequente de morte feminina por câncer, excetuando-se os tumores de pele não melanoma. O conhecimento da dinâmica de óbitos ao longo do tempo em pacientes com tal neoplasia é de grande importância para auxílio na definição de tratamentos e de políticas de prevenção. Modelos de risco que contemplem parâmetros com referência a situações de longa duração e diferentes funções de risco podem ser úteis nesse contexto. O objetivo desta dissertação é investigar as propriedades de um particular modelo, o modelo Weibull Modificado Exponenciado de Longa Duração (WMELD), para aplicação na avaliação de risco e sobrevida de mulheres diagnosticadas com câncer de mama atendidas no Hospital das Clínicas de Ribeirão Preto, São Paulo. As propriedades avaliadas neste estudo consideraram métodos de estimação pontual de Máxima Verossimilhança e estimação intervalar via teoria assintótica, reamostragem bootstrap e verossimilhança perfilada. Critérios de seleção de modelos foram considerados: Teste de Razão de Verossimilhanças (TRV), critério de Akaike (AIC) e critério de Informação de Bayes (BIC), bem como métodos gráficos para avaliar a qualidade do ajuste do modelo: gráfico TTT na presença de censuras com intervalo de confiança bootstrap paramétrico. Foram realizados estudos de simulação de Monte Carlo em diferentes cenários do modelo WMELD, considerando Vício, Erro Quadrático Médio (EQM) e Custo dos estimadores pontuais, Probabilidade de Cobertura e Amplitude Média dos intervalos de confiança. Em relação ao estudo das propriedades do modelo, as estimativas pontuais de máxima verossimilhança apresentaram vício e EQM baixos e mais próximos de zero quanto maior o tamanho amostral e menor a proporção de pacientes imunes.
Os intervalos construídos com base em reamostragem bootstrap mostraram-se mais adequados em relação à probabilidade de cobertura e amplitude média, com vantagem para o bootstrap paramétrico. AIC e TRV alcançaram poder discriminativo superior ao BIC, porém os três métodos apresentam-se defasados para pequenos tamanhos amostrais e valores dos parâmetros próximos do valor de nulidade. Os métodos de inferência com melhor desempenho nesse estudo foram considerados para avaliar os fatores associados ao risco e sobrevida de pacientes com câncer de mama atendidas no HCFMRP. Com o ajuste do modelo WMELD, mostraram-se associados à sobrevida os fatores: Estadiamento, Faixa Etária e Quantidade de tratamentos. A sobrevida em oito anos ou mais foi maior quanto menor o estadiamento e os óbitos ocorreram de forma mais acelerada ao longo do tempo em estadiamentos avançados. Pacientes com menos de 35 anos de idade nos estadiamentos II e III e com mais de 75 anos no estadiamento III têm menor sobrevida do que as pacientes com 35 a 75 anos. Pacientes que realizaram menos tratamentos nos estadiamentos III ou IV vão a óbito mais rapidamente do que pacientes que zeram mais tratamentos, porém a sobrevida após oito ou mais anos é igual nos dois grupos. Adicionalmente, e fundamental no contexto da clínica médica, o modelo WMELD apresenta interpretações relevantes em relação a seus parâmetros na dinâmica do processo de ocorrência de óbitos ao longo do tempo. Verificamos que os parâmetros , e p levam informações sobre o tempo de vida, já os parâmetros, e descrevem o comportamento do risco de óbito. / Breast cancer is the world\'s most common cancer in women, representing the most frequent cause of female death from cancer, except for non-melanoma skin tumors. Knowledge of the death dynamics over time in patients with such cancer is very important to support definition of treatments and prevention policies. 
Hazard models that include parameters referring to long-term (cure) situations and different hazard functions can be useful in this context. This dissertation investigates the properties of a particular model, the Exponentiated Modified Weibull model for long-term survivors (EMWLT), for use in assessing the risk of death and survival of women diagnosed with breast cancer and treated at the Hospital das Clínicas de Ribeirão Preto (HCFMRP), São Paulo. The properties evaluated in this study considered maximum likelihood point estimation and interval estimation via asymptotic theory, bootstrap resampling, and profile likelihood. Model selection criteria were considered: the Likelihood Ratio Test (LRT), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC), as well as graphical methods to assess the quality of the model fit, namely the TTT plot in the presence of censoring with a parametric bootstrap confidence interval. Monte Carlo simulation studies were performed in different scenarios of the model, considering the bias, mean square error (MSE), and cost of the point estimators, and the coverage probability and average width of the confidence intervals. Regarding the model's properties, the maximum likelihood point estimates showed bias and MSE that were low, and closer to zero the larger the sample size and the lower the proportion of immune patients. The intervals based on bootstrap resampling proved more adequate in terms of coverage probability and average width, with an advantage for the parametric bootstrap. AIC and the LRT reached higher discriminative power than BIC; however, all three methods performed poorly for small sample sizes and parameter values close to the null value. The inference methods with the best performance in this study were then used to evaluate the factors associated with risk of death and survival in patients with breast cancer treated at the HCFMRP.
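The parametric bootstrap intervals the study favors can be sketched in a few lines. The snippet below deliberately uses an exponential lifetime model rather than the EMWLT itself, so that the maximum likelihood estimate has a closed form; the function name and the percentile-interval construction are illustrative assumptions, not the thesis's actual implementation.

```python
import random
import statistics

def parametric_bootstrap_ci(data, n_boot=2000, alpha=0.05, seed=42):
    """Percentile parametric-bootstrap CI for an exponential rate.

    A minimal sketch of the parametric bootstrap idea evaluated in the
    thesis, using an exponential model so the MLE is closed-form
    (rate_hat = 1 / mean)."""
    rng = random.Random(seed)
    rate_hat = 1.0 / statistics.fmean(data)           # closed-form MLE
    boots = []
    for _ in range(n_boot):
        # resample from the *fitted* model (the "parametric" step)
        sample = [rng.expovariate(rate_hat) for _ in data]
        boots.append(1.0 / statistics.fmean(sample))  # re-estimate per replicate
    boots.sort()
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return rate_hat, (lo, hi)
```

The same loop applies to the EMWLT model, except that the closed-form estimate is replaced by a numerical maximization of the likelihood on each replicate, which is what makes the "cost" of the estimators a relevant simulation criterion.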
By fitting the EMWLT model, the following factors were found to be associated with survival: staging, age group, and number of treatments. Survival at eight years or more was higher the lower the staging, and deaths occurred more rapidly over time in advanced stagings. Patients under 35 years old in stages II and III, and older than 75 years in stage III, had lower survival than patients aged 35 to 75 years. Patients who underwent fewer treatments in stage III or IV died earlier than patients who underwent more treatments, but survival after eight years or more was equal in the two groups. In addition, and fundamentally in the context of clinical medicine, the EMWLT model offers relevant interpretations of its parameters in the dynamics of the occurrence of deaths over time: some of the parameters, together with the cure fraction p, carry information about the lifetime, while the remaining parameters describe the behavior of the risk of death.
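The long-term (cure fraction) structure underlying the EMWLT model can be made concrete. Assuming one common parameterization of the Exponentiated Modified Weibull CDF (the thesis's exact parameterization may differ), the population survival function starts at 1 and plateaus at the immune proportion p rather than decaying to zero:

```python
import math

def emw_cdf(t, alpha, gamma, lam, beta):
    # Exponentiated Modified Weibull CDF, in one common parameterization
    # (an assumption here): F(t) = [1 - exp(-alpha * t^gamma * e^(lam*t))]^beta
    return (1.0 - math.exp(-alpha * t**gamma * math.exp(lam * t))) ** beta

def long_term_survival(t, p, alpha, gamma, lam, beta):
    # Improper population survival: a fraction p of patients is "cured"
    # (immune), so S_pop(t) = p + (1 - p) * (1 - F(t)) -> p as t grows.
    return p + (1.0 - p) * (1.0 - emw_cdf(t, alpha, gamma, lam, beta))
```

Because the plateau equals p, the proportion of immune patients is estimated directly as a model parameter, which is what distinguishes this long-term model from a proper survival model.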
390

Longitudinal Changes in Strength and Explosive Performance Characteristics in NCAA Division I Women’s Volleyball Athletes

Kavanaugh, Ashley A. 01 May 2014 (has links)
The purpose of this dissertation was to determine whether a periodized strength and conditioning program resulted in long-term adaptations in NCAA Division I women's volleyball athletes, and whether these changes related to the team's competitive performance. Specifically, this dissertation serves to: 1.) describe the changes in body composition and performance variables of two female volleyball athletes over a 4-year collegiate career, 2.) determine the degree and magnitude of change in performance variables after about 1, 2, and 3 years of periodized resistance training, and 3.) infer whether volleyball performance characteristics are related to a team's competitive success. The following are the major findings of this dissertation. 1.) Positive changes in vertical jump height, strength, and explosiveness may be possible throughout 4 years of collegiate volleyball training even with increased body mass and percent body fat. Moreover, impaired ability to perform heavy lower-body resistance training exercises due to chronic injury negatively impacts long-term physical performance adaptations over 4 years. 2.) A combination of traditional resistance training exercises and weightlifting variations at various loads, in addition to volleyball practice, appears to be effective at increasing maximal strength by 44% and vertical jump height by 20%-30% in NCAA Division I women's volleyball athletes after about two and a half years of training. Furthermore, these characteristics can be improved in the absence of additional plyometric training outside of normal volleyball-specific practice. 3.) A rating percentage index (RPI) ranking ratio and an unweighted match score ratio appear to be better predictors of overall team competitive season success than a weighted match score ratio. In contrast, a weighted match score ratio may be better for determining an association between team match performance and volleyball-specific fitness. 
A considerable amount of research is still needed to develop a volleyball-specific performance index that best quantifies team performance and to determine whether a measurable association exists between improved fitness characteristics and increased overall team competitive success. The findings of this dissertation provide evidence that analyzing and monitoring volleyball-related performance variables over time can assist the sport performance group in making training-based decisions as well as promote the successful development of an athlete.
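The distinction between an unweighted and a weighted match score ratio can be illustrated with a small sketch. The dissertation's exact formulas are not reproduced in this abstract, so both functions below, and the idea of weighting each match by the opponent's RPI, are hypothetical reconstructions for illustration only.

```python
def unweighted_match_score_ratio(results):
    """Sets won / total sets played across a season.

    'results' is a list of (sets_won, sets_lost) pairs, one per match.
    This pooled ratio is a hypothetical reconstruction, not the
    dissertation's formula."""
    won = sum(w for w, _ in results)
    lost = sum(l for _, l in results)
    return won / (won + lost)

def weighted_match_score_ratio(results, opponent_rpi):
    """Per-match set ratios averaged with opponent-RPI weights
    (an assumed weighting scheme): beating strong opponents counts more."""
    num = sum(r * (w / (w + l)) for (w, l), r in zip(results, opponent_rpi))
    return num / sum(opponent_rpi)
```

Note that with equal weights the weighted ratio reduces to the mean of per-match set ratios, which already differs from the pooled unweighted ratio whenever matches have different lengths; this is one reason the two indices can rank the same season differently.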
