611 |
Countering the culture of silence: promoting medical apology as a route to an ethic of care. Wilford, Dempsey. 29 August 2019
This thesis investigates the impact of apology hesitance on medical relationships after an error occurs. The literature suggests that medical personnel are reluctant to apologize because an apology implies legal liability, violates the drive to provide perfect care that is expected of medical personnel and reinforced during medical education, and violates the certainty over bodies and maladies expected of medical personnel. I suggest that a culture of silence, a pattern of conduct embedded in medical culture, encourages apprehensiveness towards apology and responsibility in the face of error. The fear of litigation, however, is addressed by 'Apology Act' legislation, which shields apologizers from having their apology used against them in court; and the literature suggests that apologizing after an error benefits doctors by restoring conscience and confidence, assists the healing of patients and families and restores their trust in their health care provider, and refines the practice of medicine by addressing how the error occurred.
I present two arguments in this thesis. First, I argue that a culture of silence has serious negative impacts on medical relationships and on the safe provision of medical care as a whole by obstructing responsibility and apology and by preventing the discussion and correction of the conduct that led to the error. Medical personnel who refuse to apologize, or who offer an apology that is conditional, instrumental or otherwise of poor quality, leave their relationships with patients and families in jeopardy. Further, by not apologizing, medical personnel obstruct their own ethical and moral development and obscure the origin and conditions surrounding the error, potentially jeopardizing the safety of future patients.
Second, I argue that the medical culture of silence should be replaced by a culture that embraces apology. Doing so would permit medical culture to draw from care ethics, the principles of which are appropriate to responding to, maintaining, and repairing relationships that have experienced damage. The emphasis that care ethics places on maintaining and repairing relationships is especially coherent with apologies that seek to morally engage with the victim, promise non-repetition, and establish a proper record of events. Further, care ethics offers normative recommendations for conduct to respond to and repair relationships, provides inroads to refining notions of human security and safety, and is particularly attuned to interrogating dynamics of power within relationships, dynamics that can limit the potential for and impact of apology.
This thesis offers the Tainted Blood Scandal of the 1980s and 90s as a case study. The provision of contaminated blood and blood products resulted in thousands of Canadians becoming infected with Human Immunodeficiency Virus and Hepatitis C. Through this case, I show that the actions of public health officials, the Red Cross, and healthcare providers reflected a culture of silence that sought to avoid and dispute attributions of responsibility by victims, blood activists, and the public. This is the culture that this thesis, in its advocacy of apology, seeks to challenge.
|
612 |
La mal llamada "Tentativa del Sujeto Inidóneo" como delito putativo [The misnamed "attempt by an unqualified subject" as a putative offence]. Olave Albertini, Alejandra. January 2018
Thesis (Magister en derecho con mención en ciencias del derecho). This work proposes a solution to cases of the so-called "attempt by an unqualified subject" (tentativa del sujeto inidóneo): cases in which a person erroneously assumes circumstances that, if true, would give them a certain status required by the norm for the consummation of one of the offences traditionally classified as "special offences" (delitos especiales). The question to be answered, then, is whether such cases amount to an attempted offence or to what is known as a putative offence. The problem is addressed in two parts. First, various conceptions of the criminal attempt are analyzed in order to develop a concept of attempt consistent with the theory-of-norms model within which this research is framed; this makes it possible to evaluate whether the cases analyzed could be regarded as cases of attempt. Second, the structure of special norms is analyzed in order to determine whether their possible particularities have repercussions for the proposed solutions to the cases under analysis.
|
613 |
Sustainable Investment Strategies: A Quantitative Evaluation of Sustainable Investment Strategies for Index Funds. Erikmats, John; Sjösten, Johan. January 2019
Modern society is faced with the complex and intractable challenge of global warming, along with other environmental issues that could potentially alter our way of life if not managed properly. Is it possible that financial markets and equity investors could have a huge part to play in the transformation towards a greener and more sustainable world? Previous studies of sustainability-oriented investment strategies have for the most part been centered either on possibly less objective ESG-scores or on carbon and GHG-emissions only, with little or no consideration for water usage and waste management. This thesis aims to extend the previous work on carbon-reducing strategies and ESG-investing with the addition of water usage and waste management, specifically using raw data for these measures instead of ESG-ratings.

Index replicating portfolios have become more and more popular as it proves harder and harder to beat the index, offering good returns along with cheap and uncomplicated portfolio construction and management. In a trending market, the fear of missing out and the demand for market return can make an index replicating strategy a way for investors to have market exposure but still remain diversified and without confusion about which horses to bet on.

This thesis studies the relationship between tracking-error and the increase of sustainability in a portfolio through reduction of the intensity of carbon emissions, water usage and poor waste management. To enable a fair comparison, these measures are normalized by dividing each measure by the reported annual revenue. The three resulting intensities are then implemented individually, as well as all together, into index replicating portfolios in order to study the effect of decreasing them. First and foremost we study the effect on the tracking-error, but also the effects on returns and volatility. We also study the effect on liquidity and turnover in the portfolios to show that it is possible to implement extensive sustainability increasing methods in an index replicating equity portfolio. We follow the UCITS directive to avoid overweighting specific companies, and we only allow the portfolios to overweight a sector by a maximum of 2%, in order to avoid an unwanted exposure to sectors with naturally lower intensities.

The portfolios are obtained by using a multi-factor risk model to predict the expected statistical behaviour in relation to the chosen factors, and then applying Markowitz's modern portfolio theory through a convex optimization problem whose objective is to minimize tracking-error. All displayed portfolios had stable and convex optimizations and were compliant with the UCITS directive. We limited our study to North American stocks and chose the index "MSCI NA" to replicate. Only stocks that were part of the index were allowed in the portfolios, and we did not allow negative weights for any stock. The portfolios were constructed and backtested for the period 2014-12-01 until 2019-03-01, with quarterly rebalancing at the same points in time that the index is rebalanced by MSCI.

We found that it is possible to implement extensive sustainability considerations into the portfolios and still keep a high correlation with the index whilst keeping tracking-errors low. We believe that most index replicating investors should be able to implement reductions of the above-mentioned intensities of about 40-60% without compromising tracking-errors, returns and volatility too much.
We found evidence that during this time and in this market our low-intensity portfolios would have outperformed the index. We also found that returns increased and volatility decreased as we increased the reduction of each individual measure and of all three collectively. Reducing carbon intensity seemed to drive positive returns and lower volatility the most, but we also observed a positive effect from reducing all intensities. Our belief before conducting this study was that sustainability should have a negative effect on returns due to the limitation of the feasible area of investing. This motivated us to build portfolios with the intent to make up for these lesser returns and hopefully "beat the index". This failed in almost all cases, and the only way we were able to beat the index was through implementing sustainability in our portfolios.
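As a rough illustration of the optimization step described above, the following sketch minimizes tracking-error variance against an equal-weighted synthetic benchmark using the cvxpy library; the covariance matrix, carbon intensities, the 50% reduction target and all other numbers are illustrative assumptions, not the thesis's data or exact model.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, n_sectors = 50, 5
sectors = rng.integers(0, n_sectors, size=n)         # sector id per stock
Sigma = np.cov(rng.normal(size=(n, 252)))            # toy daily return covariance
w_bench = np.full(n, 1.0 / n)                        # equal-weighted toy benchmark
carbon = rng.lognormal(sigma=1.0, size=n)            # carbon intensity (emissions/revenue)

S = np.zeros((n_sectors, n))
S[sectors, np.arange(n)] = 1.0                       # sector aggregation matrix

w = cp.Variable(n)
diff = w - w_bench
te_var = cp.quad_form(diff, cp.psd_wrap(Sigma))      # tracking-error variance

constraints = [
    cp.sum(w) == 1,                                  # fully invested
    w >= 0,                                          # long-only, index members only
    S @ w <= S @ w_bench + 0.02,                     # max 2% sector overweight
    carbon @ w <= 0.5 * carbon @ w_bench,            # 50% carbon-intensity reduction
]
cp.Problem(cp.Minimize(te_var), constraints).solve()
print("annualized tracking error (approx):", np.sqrt(252 * te_var.value))
```

Tightening the intensity constraint (e.g. 0.5 toward 0.3) and re-solving traces out the sustainability-versus-tracking-error trade-off the thesis studies.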
|
614 |
Modelos de otimização para o erro de rastreamento em carteiras de investimento / Optimization models for tracking error in investment portfolios. Oliveira, Estela Mara de. 09 October 2014
This work presents tracking-error models: strategies used by investment portfolio managers to build portfolios that follow a benchmark index. In these cases, the tracking error is defined as the difference between the return of the portfolio being built and the return of the benchmark portfolio. A model is proposed that minimizes the variance of the tracking error for a fixed expected excess return, for investment portfolios restricted to highly liquid assets; the Markowitz mean-variance model [17] and Roll's tracking-error model [1, 19] are also presented. The work concludes with a graphical analysis of the models, in which the model with liquidity constraints on the assets is observed to give good results.
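For reference, a sketch of the Roll-type problem the abstract refers to, in standard mean-variance notation; the closed form below is the textbook unconstrained solution and does not include the liquidity constraints the thesis adds. With active weights d = w - w_b, covariance matrix Σ, expected returns μ and a fixed expected excess return G:

```latex
\begin{aligned}
\min_{d}\ & d^{\top}\Sigma d
  && \text{(tracking-error variance)}\\
\text{s.t.}\ & d^{\top}\mu = G, \qquad d^{\top}\mathbf{1} = 0
  && \text{(fixed excess return; weights still sum to 1)}\\[4pt]
d^{*} &= G\,\frac{a\,\Sigma^{-1}\mu - b\,\Sigma^{-1}\mathbf{1}}{ac - b^{2}},
  && a=\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1},\quad
     b=\mathbf{1}^{\top}\Sigma^{-1}\mu,\quad
     c=\mu^{\top}\Sigma^{-1}\mu,\\[4pt]
{d^{*}}^{\top}\Sigma d^{*} &= \frac{G^{2}a}{ac-b^{2}}
  && \text{(minimal tracking-error variance).}
\end{aligned}
```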
|
615 |
Cyclic probabilistic reasoning networks: some exactly solvable iterative error-control structures. Lee, Wai-shing. January 2001
Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 114). Abstracts in English and Chinese. Contents:
Chapter 1. Layout of the thesis
Chapter 2. Introduction: 2.1 What is the reasoning problem?; 2.2 Fundamental nature of Knowledge; 2.3 Fundamental methodology of Reasoning; 2.4 Our intended approach
Chapter 3. Probabilistic reasoning networks: 3.1 Overview; 3.2 Causality and influence diagrams; 3.3 Bayesian networks - influence diagrams endowed with a probability interpretation (a detour to the interpretations of probability; Bayesian networks; acyclicity and global probability); 3.4 Reasoning on probabilistic reasoning networks I - local updating formulae (rationale of the intended reasoning strategy; construction of the local updating formula); 3.5 Cluster graphs - another perspective on reasoning problems; 3.6 Semi-lattices - another representation of cluster graphs (construction of semi-lattices); 3.7 Bayesian networks and semi-lattices (Bayesian networks to acyclic semi-lattices); 3.8 Reasoning on (acyclic) probabilistic reasoning networks II - global updating schedules; 3.9 Conclusion
Chapter 4. Cyclic reasoning networks - a possibility?: 4.1 Overview; 4.2 A meaningful cyclic structure - derivation of the ideal gas law; 4.3 What's "wrong" to be in a cyclic world; 4.4 Communication - Dynamics - Complexity (communication as dynamics; dynamics to complexity); 4.5 Conclusion
Chapter 5. Cyclic reasoning networks - error-control application: 5.1 Overview; 5.2 Communication schemes on cyclic reasoning networks directed to error-control applications (Part I - local updating formulae; Part II - global updating schedules across the network); 5.3 Probabilistic reasoning based error-control schemes (local sub-universes and global universe underlying the error-control structure); 5.4 Error-control structure I (decoding algorithm - communication between local sub-universes in compliance with the global topology; decoding rationales; computational results); 5.5 Error-control structure II (structure of the code and the corresponding decoding algorithm; computational results); 5.6 Error-control structure III (computational results); 5.7 Error-control structure IV (computational results); 5.8 Conclusion
Chapter 6. Dynamics on cyclic probabilistic reasoning networks: 6.1 Overview; 6.2 Decoding rationales; 6.3 Error-control structure I - exact solutions (dynamical invariant - a key to tackle many dynamical problems; dynamical invariant for error-control structure I; iteration dynamics; structure preserving property and the maximum a posteriori solutions); 6.4 Error-control structures III & IV - exact solutions (dynamical invariants; iteration dynamics; structure preserving property and the maximum a posteriori solutions); 6.5 Error-control structure II - exact solutions (iteration dynamics; structure preserving property and the maximum a posteriori solutions); 6.6 A comparison of the four error-control structures; 6.7 Conclusion
Chapter 7. Conclusion: 7.1 Our thesis; 7.2 Hind-sights and foresights; 7.3 Concluding remark
Appendix A. An alternative derivation of the local updating formula
Bibliography
|
616 |
On density theorems, connectedness results and error bounds in vector optimization. Yung, Hon-wai. January 2001
Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 133-139). Abstracts in English and Chinese. Contents:
Chapter 0. Introduction
Chapter 1. Density Theorems in Vector Optimization: 1.1 Preliminary; 1.2 The Arrow-Barankin-Blackwell Theorem in Normed Spaces; 1.3 The Arrow-Barankin-Blackwell Theorem in Topological Vector Spaces; 1.4 Density Results in Dual Space Setting
Chapter 2. Density Theorem for Super Efficiency: 2.1 Definition and Criteria for Super Efficiency; 2.2 Henig Proper Efficiency; 2.3 Density Theorem for Super Efficiency
Chapter 3. Connectedness Results in Vector Optimization: 3.1 Set-valued Maps; 3.2 The Contractibility of the Efficient Point Sets; 3.3 Connectedness Results in Vector Optimization Problems
Chapter 4. Error Bounds in Normed Spaces: 4.1 Error Bounds of Lower Semicontinuous Functions in Normed Spaces; 4.2 Error Bounds of Lower Semicontinuous Convex Functions in Reflexive Banach Spaces; 4.3 Error Bounds with Fractional Exponents; 4.4 An Application to Quadratic Functions
Bibliography
|
617 |
The role of prediction error in probabilistic associative learning. Cevora, Jiri. January 2018
This thesis focuses on probabilistic associative learning. One of the classic effects in this field is the stimulus associability effect, for which I derive a statistically optimal inference model and a corresponding approximation that addresses a number of problems with the original account of Mackintosh. My proposed account of associability - a variable learning rate depending on the relative informativeness of stimuli - also accounts for the classic blocking effect (Kamin, 1969) without the need for prediction error (PE) computation. Given that blocking was the main impetus for placing PE at the centre of learning theories, I critically re-evaluate other evidence for PE in learning, particularly the recent neuroimaging evidence. I conclude that the brain data are not as clear cut as often presumed. The main shortcoming of the evidence implicating PE in learning is that probabilistic associative learning is mostly described as a transition from one state of belief to another, yet those beliefs are typically observed only after multiple learning episodes and in a very coarse manner. To address this problem, I develop an experimental paradigm and accompanying statistical methods that allow one to infer the beliefs at any given point in time. However, even with the rich data provided by this new paradigm, the blocking effect still cannot provide conclusive evidence for the role of PE in learning. I solve this problem by deriving a novel conceptualisation of learning as a flow in probability space. This allows me to derive two novel effects that can unambiguously distinguish learning that is driven by PE from learning that is not. I call these effects generalized blocking and false blocking, given their inspiration by the original paradigm of Kamin (1969). These two effects can be generalized to the entirety of probability space, rather than just the two specific points provided by the paradigms used by Mackintosh and Kamin, and therefore offer greater sensitivity to differences in learning mechanisms. In particular, I demonstrate that these effects are necessary consequences of PE-driven learning, but not of learning based on the relative informativeness of stimuli. Lastly, I develop an online experiment to acquire data on the new paradigm from a large number (approximately 2000) of participants recruited via social media. The results of model fitting, together with statistical tests of generalized blocking and false blocking, provide strong evidence against a PE-driven account of learning, instead favouring the relative-informativeness account derived at the start of the thesis.
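For context, the sketch below shows the textbook PE-driven account that such arguments engage with: a Rescorla-Wagner delta rule, in which weight changes are proportional to the prediction error, reproduces Kamin-style blocking because the pretrained cue leaves no error for the added cue to absorb. The learning rate and training schedule are illustrative assumptions, not the thesis's own models.

```python
import numpy as np

def rescorla_wagner(trials, lr=0.1, n_cues=2):
    """Delta-rule learning: dw = lr * prediction_error * stimulus vector."""
    w = np.zeros(n_cues)
    for stimulus, outcome in trials:
        x = np.asarray(stimulus, dtype=float)
        error = outcome - w @ x          # prediction error (PE)
        w += lr * error * x              # PE-driven update
    return w

# Stage 1: cue A alone is paired with the outcome (pretraining).
stage1 = [((1, 0), 1.0)] * 100
# Stage 2: the compound AB is paired with the same outcome.
stage2 = [((1, 1), 1.0)] * 100

w = rescorla_wagner(stage1 + stage2)
print(f"w_A = {w[0]:.2f}, w_B = {w[1]:.2f}")  # w_B stays near 0: B is 'blocked'
```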
|
618 |
High-Performance Decoder Architectures for Low-Density Parity-Check Codes. Zhang, Kai. 09 January 2012
Low-Density Parity-Check (LDPC) codes, invented by Gallager back in the 1960s, have attracted considerable attention recently. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. In this work we investigate tradeoffs between these four performance aspects and develop several decoder architectures that improve one or several aspects while maintaining acceptable values for the others.

Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed for the first time: a parallel layered decoding architecture (PLDA) and critical path splitting. The parallel layered decoding architecture enables parallel processing of all layers by establishing dedicated message passing paths among them, so the decoder avoids a crossbar-based large interconnect network. The critical path splitting technique is based on careful adjustment of the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process. The decoder achieves an input throughput of 1.1 Gbps, a 3 to 4 times improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2.

Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes which supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that our selected puncturing scheme introduces a BER performance degradation of less than 0.2 dB compared with the dedicated codes for the different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process. The decoder achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1.

Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). BP-based detection depends computationally only on the number of nonzero interferers and is thus well suited for sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, popular in LDPC decoding, is also adopted here; simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, in order to switch flexible connections on the factor graph in time-varying ISI channels.
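As an illustration of the min-sum algorithm mentioned above, here is a minimal sketch of the check-node update it is built on (sign product times minimum magnitude over the other edges). This is the generic textbook rule, not the dissertation's optimized check-node processing unit.

```python
import numpy as np

def min_sum_check_update(v2c):
    """Check-to-variable messages for one check node (min-sum rule).

    v2c: array of incoming variable-to-check LLRs on this check's edges.
    Each outgoing message uses the sign product and the minimum magnitude
    over all *other* edges, so only the two smallest magnitudes are needed.
    """
    mags = np.abs(v2c)
    signs = np.sign(v2c)
    total_sign = np.prod(signs)
    i_min = np.argmin(mags)
    min1 = mags[i_min]                         # smallest magnitude
    min2 = np.min(np.delete(mags, i_min))      # second smallest
    out_mag = np.where(np.arange(len(v2c)) == i_min, min2, min1)
    # total_sign * sign_j equals the sign product over all edges except j.
    return total_sign * signs * out_mag

llrs = np.array([+2.3, -0.7, +1.5, -4.0])
print(min_sum_check_update(llrs))              # [ 0.7 -1.5  0.7 -0.7]
```

Tracking only the two smallest magnitudes is exactly what makes min-sum check-node hardware cheap compared with the full sum-product update.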
|
619 |
Applying the "Split-ADC" Architecture to a 16 bit, 1 MS/s differential Successive Approximation Analog-to-Digital Converter. Chan, Ka Yan. 30 April 2008
Successive Approximation (SAR) analog-to-digital converters are used extensively in biomedical applications such as CAT scanners due to the high resolution they offer. Capacitor mismatch in the SAR converter is a limiting factor for its accuracy and resolution. Without some form of calibration, a SAR converter can only achieve 10-bit accuracy. In industry, the CAL-DAC approach is a popular approach for calibrating the SAR ADC, but it requires significant test time. This thesis applies the "Split-ADC" architecture, with a deterministic, digital, background self-calibration algorithm, to the SAR converter to minimize test time. In this approach, a single ADC is split into two independent halves. The two split ADCs convert the same input sample and produce two output codes. The ADC output is the average of these two output codes, while their difference is used as a calibration signal to estimate the errors of the calibration parameters via a modified Jacobi method. The estimates are then used to update the calibration parameters in a negative-feedback LMS procedure. The ADC is fully calibrated when the difference signal goes to zero on average. This thesis focuses on the specific implementation of the "Split-ADC" self-calibrating algorithm on a 16 bit, 1 MS/s differential SAR ADC. The ADC can be calibrated with 10^5 conversions, an improvement of 3 orders of magnitude over existing statistically-based calibration algorithms. Simulation results show that the linearity of the calibrated ADC improves to within ±1 LSB.
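As a rough illustration of the difference-driven calibration loop described above, the sketch below runs an LMS update on the estimated bit weights of two idealized SAR halves converting the same samples. The bit width, mismatch levels, step size and the simplified conversion model are illustrative assumptions; a real design calibrates more error sources and treats the residual common-gain ambiguity explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
NBITS = 12                                    # reduced from 16 bits to keep the toy fast
nominal = 0.5 ** np.arange(1, NBITS + 1)      # ideal binary bit weights

def sar_convert(x, weights):
    """Idealized SAR conversion: greedy binary search with the given
    (mismatched) analog bit weights; returns the bit decisions."""
    bits, acc = np.zeros(NBITS), 0.0
    for k in range(NBITS):
        if acc + weights[k] <= x:
            bits[k], acc = 1.0, acc + weights[k]
    return bits

# Two halves with independent capacitor mismatch (the error to calibrate out).
w_true_a = nominal * (1 + 0.01 * rng.standard_normal(NBITS))
w_true_b = nominal * (1 + 0.01 * rng.standard_normal(NBITS))
w_est_a, w_est_b = nominal.copy(), nominal.copy()   # digital weight estimates

mu = 0.05
for _ in range(100_000):
    x = rng.uniform(0, 1)
    ba = sar_convert(x, w_true_a)             # both halves see the same sample,
    bb = sar_convert(x, w_true_b)             # but make different bit decisions
    diff = ba @ w_est_a - bb @ w_est_b        # disagreement of the two halves
    w_est_a -= mu * diff * ba                 # LMS: drive the difference to zero
    w_est_b += mu * diff * bb

print("initial weight error:", np.max(np.abs(nominal - w_true_a)))
print("post-calibration error:", np.max(np.abs(w_est_a - w_true_a)))
# The residual should shrink well below the initial mismatch, up to a small
# common gain shift that difference-only calibration cannot observe.
```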
|
620 |
Analyzing and Modeling Low-Cost MEMS IMUs for use in an Inertial Navigation System. Barrett, Justin Michael. 30 April 2014
Inertial navigation is a relative navigation technique commonly used by autonomous vehicles to determine their linear velocity, position and orientation in three-dimensional space. The basic premise of inertial navigation is that measurements of acceleration and angular velocity from an inertial measurement unit (IMU) are integrated over time to produce estimates of linear velocity, position and orientation. However, this process is a particularly involved one. The raw inertial data must first be properly analyzed and modeled in order to ensure that any inertial navigation system (INS) that uses the inertial data will produce accurate results. This thesis describes the process of analyzing and modeling raw IMU data, as well as how to use the results of that analysis to design an INS. Two separate INS units are designed using two different micro-electro-mechanical system (MEMS) IMUs. To test the effectiveness of each INS, each IMU is rigidly mounted to an unmanned ground vehicle (UGV) and the vehicle is driven through a known test course. The linear velocity, position and orientation estimates produced by each INS are then compared to the true linear velocity, position and orientation of the UGV over time. Final results from these experiments include quantifications of how well each INS was able to estimate the true linear velocity, position and orientation of the UGV in several different navigation scenarios as well as a direct comparison of the performances of the two separate INS units.
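As a rough illustration of the integration process described above, here is a minimal dead-reckoning sketch: gyro rates update the orientation, the rotation carries accelerometer readings into the world frame, gravity is removed, and acceleration is integrated twice. The Euler integration, the scipy Rotation helper and the noise-free data are illustrative simplifications; a production INS would use quaternion integration with bias and noise models derived from the kind of IMU analysis the thesis describes.

```python
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])         # world-frame gravity (m/s^2)

def integrate_imu(accels, gyros, dt):
    """Strapdown dead reckoning from body-frame IMU samples.

    accels: (N, 3) specific force in the body frame (m/s^2)
    gyros:  (N, 3) angular rate in the body frame (rad/s)
    Returns the (position, velocity, quaternion) history.
    """
    pos, vel = np.zeros(3), np.zeros(3)
    att = Rotation.identity()                  # body-to-world rotation
    history = []
    for a_body, w_body in zip(accels, gyros):
        att = att * Rotation.from_rotvec(w_body * dt)  # orientation update
        a_world = att.apply(a_body) + GRAVITY          # remove gravity reaction
        vel = vel + a_world * dt                       # first integration
        pos = pos + vel * dt                           # second integration
        history.append((pos.copy(), vel.copy(), att.as_quat()))
    return history

# Stationary, level IMU: the accelerometer reads the +9.81 m/s^2 reaction to gravity.
n = 100
accels = np.tile([0.0, 0.0, 9.81], (n, 1))
gyros = np.zeros((n, 3))
final_pos, final_vel, _ = integrate_imu(accels, gyros, dt=0.01)[-1]
print(final_pos, final_vel)                    # stays near zero with error-free sensors
```

Feeding in real MEMS data instead of this ideal input makes the double integration drift quickly, which is why the sensor error modeling discussed above matters.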
|