11. Objective surgical skill evaluation / Anderson, Fraser (11 1900)
It is essential for surgeons to have their skill evaluated prior to entering the operating room. Most evaluation methods currently in use are subjective, relying on human judgment to assess trainees. Recently, sensors have been used to track the positions of instruments and the forces applied to them by surgeons, opening up the possibility of automated skill analysis. This thesis presents a newly developed recording system, and novel methods used to automatically analyze surgical skill within the context of laparoscopic procedures. The evaluation methods are tested using an empirical study involving a number of participants with a wide range of surgical skill.
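
As an illustration of the kind of automated analysis that instrument tracking enables, the sketch below computes two commonly used kinematic metrics (path length and mean speed) from a stream of tool-tip positions. The metric choice and the sample data are assumptions for illustration only; the thesis's own analysis methods are not reproduced here.

```python
import numpy as np

# Hedged sketch: simple kinematic metrics from tracked instrument positions.
# Path length and mean speed are common motion-economy measures in the
# surgical-skill literature; they serve here only as an illustrative stand-in.

def path_length(positions: np.ndarray) -> float:
    """Total distance travelled by the tool tip (positions: N x 3 array, metres)."""
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))

def mean_speed(positions: np.ndarray, dt: float) -> float:
    """Average tool-tip speed in m/s, given a fixed sampling interval dt."""
    return path_length(positions) / (dt * (len(positions) - 1))

# Hypothetical 100 Hz recording of a short instrument motion.
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(0, 0.001, size=(200, 3)), axis=0)
print("path length (m):", round(path_length(trajectory), 4))
print("mean speed (m/s):", round(mean_speed(trajectory, dt=0.01), 4))
```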

12. Developing Breeding Objectives for Targhee Sheep / Borg, Randy Charles (29 June 2004)
Breeding objectives were developed for Targhee sheep at different levels of prolificacy and triplet survival. Economic weights (EW) were derived for estimated breeding values (BV) from National Sheep Improvement Program genetic evaluations for 120 d weaning weight (WW), maternal milk (MM), yearling weight (YW), fleece weight (FW), fiber diameter (FD), staple length (SL), and prolificacy (PLC; lambs born/100 ewes lambing). A commercial flock was simulated, accounting for nonlinear relationships between performance and profit. Ewes were assumed mated to sires of specified BV, and profit was derived from lifetime performance of lambs and replacement females from that lamb crop. Economic weights were determined as the change in profit from use of sires with BV one additive standard deviation above the mean for each trait [1.98 kg for WW, 1.62 kg for MM, 2.90 kg for YW, 0.36 kg for FW, 0.99 microns for FD, 0.74 cm for SL, and 17.58 lambs/100 ewes for PLC], while holding all other BV at breed average. Separate breeding objectives were derived for different ways of meeting increased nutrient needs (P = purchase hay, R = rent pasture, and L = limited flock size) and for different market lamb values (D = discounting lamb value for heavy weights, ND = no discount for heavy lambs).

Based on replicated simulations, relative EW did not vary with prolificacy or triplet survival (P > 0.15) but were affected by feed costs and lamb market values (P < 0.01). Selection indexes were derived within and across simulated scenarios; correlations (r) among indexes of > 0.90 indicated that an index could be used across multiple scenarios with little loss of selection efficiency. Indexes derived within feed cost scenarios (P, R, and L) and lamb value scenarios (D, ND) were strongly intercorrelated (r > 0.97). Correlations among average indexes for feed cost scenarios (0.97 for R and P, 0.70 for R and L, 0.85 for P and L) indicated that two feed cost scenarios could be used, depending on whether winter forage was limited (L) or not (NL). The correlation between average indexes for these two scenarios was 0.78.

Indexes were presented for combinations of feed cost and lamb value scenarios. Two indexes were suggested, representing the scenarios that apply to a large portion of Targhee producers: discounting heavy lambs with limited winter forage (D-L: 1.0 WW + 0.14 MM - 0.76 YW + 1.22 FW - 0.36 FD - 0.09 SL + 0.25 PLC) and discounting heavy lambs with additional available forage (D-NL: 1.0 WW + 0.24 MM - 0.34 YW + 1.65 FW - 0.41 FD - 0.14 SL + 0.33 PLC). For a standardized selection differential of one for the index, the expected changes in mean index value were $2.17 and $1.92 per ewe per generation for D-L and D-NL, respectively. / Master of Science
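
As a concrete illustration of how such a linear selection index would be applied, the short sketch below scores candidate rams on the suggested D-L index from their estimated breeding values. The coefficients come directly from the abstract; the EBVs are hypothetical.

```python
# Hedged sketch: scoring rams on the suggested D-L selection index
# (discounted heavy lambs, limited winter forage). Coefficients are the
# index weights quoted in the abstract; the EBVs below are made up.

D_L_WEIGHTS = {
    "WW": 1.00,   # 120 d weaning weight, kg
    "MM": 0.14,   # maternal milk, kg
    "YW": -0.76,  # yearling weight, kg
    "FW": 1.22,   # fleece weight, kg
    "FD": -0.36,  # fiber diameter, microns
    "SL": -0.09,  # staple length, cm
    "PLC": 0.25,  # prolificacy, lambs born/100 ewes lambing
}

def index_value(ebv: dict[str, float], weights: dict[str, float]) -> float:
    """Linear selection index: sum of index weight times estimated breeding value."""
    return sum(weights[trait] * ebv[trait] for trait in weights)

# Hypothetical estimated breeding values for two candidate rams.
rams = {
    "ram_A": {"WW": 1.5, "MM": 0.4, "YW": 2.1, "FW": 0.2, "FD": -0.5, "SL": 0.3, "PLC": 6.0},
    "ram_B": {"WW": 0.8, "MM": 0.9, "YW": 0.5, "FW": 0.4, "FD": -1.0, "SL": 0.1, "PLC": 12.0},
}

for name, ebv in rams.items():
    print(name, round(index_value(ebv, D_L_WEIGHTS), 2))
# Rams with higher index values would be preferred under the D-L scenario.
```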

13. An Evolutionary Algorithm For Multiple Criteria Problems / Soylu, Banu (01 January 2007)
In this thesis, we develop an evolutionary algorithm for approximating the Pareto frontier of multi-objective continuous and combinatorial optimization problems. The algorithm tries to evolve the population of solutions towards the Pareto frontier and to distribute it over the frontier in order to maintain a well-spread representation. The fitness score of each solution is computed with a Tchebycheff distance function and a non-dominated sorting approach. Each solution chooses its own favorable weights according to the Tchebycheff distance function. Seed solutions in the initial population and a crowding measure also help to achieve satisfactory results.
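
The weighted Tchebycheff fitness described here can be sketched as follows. This is a generic illustration under stated assumptions (all objectives minimized, an externally supplied ideal point, a finite pool of candidate weight vectors), not the thesis's exact implementation.

```python
import numpy as np

def tchebycheff_distance(f: np.ndarray, ideal: np.ndarray, w: np.ndarray) -> float:
    """Weighted Tchebycheff distance of objective vector f to the ideal point."""
    return float(np.max(w * np.abs(f - ideal)))

def favorable_weights(f: np.ndarray, ideal: np.ndarray, candidate_weights: np.ndarray) -> np.ndarray:
    """Each solution picks the weight vector most favorable to it, i.e. the one
    minimizing its own Tchebycheff distance (an assumption mirroring the
    'favorable weights' idea in the abstract)."""
    distances = [tchebycheff_distance(f, ideal, w) for w in candidate_weights]
    return candidate_weights[int(np.argmin(distances))]

# Toy bi-objective example (both objectives minimized).
ideal = np.array([0.0, 0.0])
weights = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])   # candidate weight vectors
population = np.array([[1.0, 4.0], [2.5, 2.5], [4.0, 1.0]])  # objective vectors

for f in population:
    w = favorable_weights(f, ideal, weights)
    print(f, "-> weights", w, "fitness", round(tchebycheff_distance(f, ideal, w), 3))
```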
In order to test the performance of our evolutionary algorithm, we use some continuous and combinatorial problems. The continuous test problems taken from the literature have special difficulties that an evolutionary algorithm has to deal with. Experimental results of our algorithm on these problems are provided.
One of the combinatorial problems we address is the multi-objective knapsack problem. We carry out experiments on test data for this problem given in the literature.
We work on two bi-criteria p-hub location problems and propose an evolutionary algorithm to approximate the Pareto frontiers of these problems. We test the performance of our algorithm on the Turkish Postal System (PTT) data set (TPDS) and on the AP (Australian Post) and CAB (US Civil Aeronautics Board) data sets.
The main contribution of this thesis is the development of a multi-objective evolutionary algorithm and its application to a number of multi-objective continuous and combinatorial optimization problems.

14. A Study on Aggregation of Objective Functions in MaOPs Based on Evaluation Criteria / Furuhashi, Takeshi; Yoshikawa, Tomohiro; Otake, Shun (January 2010)
Session ID: TH-E1-4 / SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan

15. A bi-objective home care scheduling problem: Analyzing the trade-off between costs and client inconvenience / Braekers, Kris; Hartl, Richard F.; Parragh, Sophie; Tricoire, Fabien (January 2016)
Organizations providing home care services are inclined to optimize their activities in order to meet the constantly increasing demand for home care. In this context, home care providers are confronted with multiple, often conflicting, objectives such as minimizing their operating costs
while maximizing the service level offered to their clients by taking into account their preferences. This paper is the first to shed some light on the trade-off relationship between these two objectives by modeling the home care routing and scheduling problem as a bi-objective problem. The proposed model accounts for qualifications, working regulations and overtime costs of the nurses, travel costs depending on the mode of transportation, hard time windows, and client preferences on visit times and nurses. A distinguishing characteristic of the problem is that the scheduling problem for a single route is a bi-objective problem in itself, thereby complicating the problem considerably. A metaheuristic algorithm, embedding a large neighborhood search heuristic in a multi-directional local search framework, is proposed to solve the problem.
Computational experiments on a set of benchmark instances based on real-life data are presented. A comparison with exact solutions on small instances shows that the algorithm performs well. An analysis of the results reveals that service providers face a considerable trade-off between costs and client convenience. However, starting from a minimum-cost solution, the average service level offered to the clients may already be improved drastically with limited additional costs. (authors' abstract)
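
The bi-objective character of the problem can be illustrated with a small Pareto filter over candidate schedules evaluated on the two objectives discussed above. The numbers are made up, and the real model also handles time windows, qualifications, overtime and transport modes; this is only a sketch of the trade-off idea.

```python
# Hedged sketch: keeping the non-dominated (Pareto) set of candidate schedules
# evaluated on (routing cost, client inconvenience). Both objectives minimized.

def dominates(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """a dominates b if it is no worse on both objectives and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_filter(solutions: dict[str, tuple[float, float]]) -> dict[str, tuple[float, float]]:
    return {
        name: obj
        for name, obj in solutions.items()
        if not any(dominates(other, obj) for other in solutions.values())
    }

# (routing cost, client inconvenience score) per candidate schedule -- invented values.
candidates = {
    "min_cost":      (1020.0, 85.0),
    "balanced":      (1100.0, 30.0),
    "min_inconv":    (1450.0, 12.0),
    "dominated_one": (1500.0, 40.0),  # worse than 'balanced' on both objectives
}

print(pareto_filter(candidates))  # 'dominated_one' is filtered out
```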

16. 3D multiple description coding for error resilience over wireless networks / Umar, Abubakar Sadiq (January 2011)
Mobile communications have attracted growing interest from both customers and service providers over the last one to two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting, and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has attracted significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience.

When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one technique widely used in international video coding standards is error resilience. The motivation behind this research work is that existing schemes for 2D colour video compression such as MPEG, JPEG and H.263 cannot be applied directly to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given a maximum bit-rate budget for representing the 3D scene, the bit-rate allocation between texture and depth information should be chosen so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceived 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users.

This research work aims at enhancing the error resilience capability of compressed 3D video transmitted over mobile channels by using Multiple Description Coding (MDC), in order to improve the end user's quality of experience (QoE). Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing, rating viewers' perception of 3D video under error-free and error-prone conditions through a carefully designed bespoke questionnaire.
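
A minimal sketch of the general MDC idea follows, using temporal splitting of a frame sequence into two independently decodable descriptions with simple frame-repetition concealment when one description is lost. This is a generic, textbook-style illustration under my own assumptions, not the 3D colour-plus-depth scheme developed in the thesis.

```python
import numpy as np

# Temporal MDC sketch: even-indexed frames form description 0, odd-indexed
# frames form description 1. Losing one description degrades but does not
# destroy the reconstructed sequence.

def encode_two_descriptions(frames):
    """Split the frame sequence into two descriptions."""
    return frames[0::2], frames[1::2]

def decode(d0, d1, total_frames, d1_lost=False):
    """Interleave the two descriptions; if description 1 is lost, conceal each
    missing odd frame by repeating the preceding even frame from d0."""
    out = []
    for i in range(total_frames):
        if i % 2 == 0:
            out.append(d0[i // 2])
        elif not d1_lost:
            out.append(d1[i // 2])
        else:
            out.append(d0[i // 2])  # frame i-1 repeated as concealment
    return out

# Toy 'video': eight 2x2 grayscale frames with increasing brightness.
frames = [np.full((2, 2), 10 * i, dtype=np.uint8) for i in range(8)]
d0, d1 = encode_two_descriptions(frames)

perfect = decode(d0, d1, len(frames))
degraded = decode(d0, d1, len(frames), d1_lost=True)  # one description lost
print([int(f[0, 0]) for f in perfect])   # [0, 10, 20, ..., 70]
print([int(f[0, 0]) for f in degraded])  # odd frames concealed: [0, 0, 20, 20, ...]
```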

17. Optimal distributed generation planning based on NSGA-II and MATPOWER / Zamani, Iman (January 2015)
The UK and the world are moving away from central energy resources towards distributed generation (DG) in order to lower carbon emissions. Renewable energy resources make up a large share of DG, and their optimal integration into the grid is the main aim of planning and development projects within the electricity network. Feasibility and thorough conceptual design studies are required in the planning and development process, as most electricity networks were designed decades ago without considering the challenges imposed by DG. As an example, voltage rise under steady-state conditions becomes problematic when a large amount of dispersed generation is connected to a distribution network. Transferring power out of or into the network is currently not an efficient solution because of the phase-angle differences between the networks supplied by DG. Optimisation algorithms have therefore been developed over the last decade to carry out this planning optimally and to alleviate the unwanted effects of DG. However, the robustness of the algorithms proposed in the literature has only been partially addressed, owing to the challenges of power system problems such as their multi-objective nature. The contribution of this work is a novel platform for the optimal integration of distributed generation into the power grid in terms of site and size. The work provides a modified non-dominated sorting genetic algorithm (NSGA-II) based on MATPOWER (for power flow calculation) in order to find a fast and reliable solution to the optimal planning problem. The proposed multi-objective planning tool presents a fast-converging method for the case studies, incorporating the economic and technical aspects of DG planning from the planner's perspective. The proposed method is novel in terms of its handling of power flow constraints and can be applied to other energy planning problems.
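
The sketch below shows how a DG siting-and-sizing candidate might be encoded and evaluated inside an NSGA-II loop. The run_power_flow function is a hypothetical stand-in for the MATPOWER calculation used in the thesis, and the cost figures and the choice of objectives (investment cost versus voltage deviation) are assumptions for illustration only.

```python
import random

# Hedged sketch of candidate evaluation for multi-objective DG planning.
# run_power_flow is a placeholder, not MATPOWER; all numbers are invented.

BUSES = list(range(1, 34))          # e.g. a 33-bus distribution feeder (assumed)
SIZES_MW = [0.5, 1.0, 1.5, 2.0]     # candidate DG sizes
COST_PER_MW = 1.2e6                 # assumed investment cost, currency units/MW

def run_power_flow(dg_bus: int, dg_mw: float) -> list[float]:
    """Placeholder: return per-bus voltage magnitudes (p.u.) after adding the DG.
    In the thesis this would come from a MATPOWER power flow."""
    return [1.0 + random.uniform(-0.05, 0.05) for _ in BUSES]

def evaluate(candidate: tuple[int, float]) -> tuple[float, float]:
    bus, mw = candidate
    voltages = run_power_flow(bus, mw)
    cost = COST_PER_MW * mw                             # objective 1: minimise cost
    voltage_dev = sum(abs(v - 1.0) for v in voltages)   # objective 2: minimise |V - 1 p.u.|
    return cost, voltage_dev

population = [(random.choice(BUSES), random.choice(SIZES_MW)) for _ in range(6)]
for cand in population:
    print(cand, "->", evaluate(cand))
# NSGA-II would then apply non-dominated sorting and crowding distance to these
# (cost, voltage_dev) pairs to select parents for the next generation.
```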

18. Retrospective Approximation Algorithms for Multi-Objective Simulation Optimization on Integer Lattices / Cooper, Kyle (10 June 2019)
We consider multi-objective simulation optimization (MOSO) problems, that is, nonlinear optimization problems in which multiple simultaneous objective functions can only be observed with stochastic error, e.g., as output from a Monte Carlo simulation model. In this context, the solution to a MOSO problem is the efficient set, which is the set of all feasible decision points for which no other feasible decision point is at least as good on all objectives and strictly better on at least one objective. We are concerned primarily with MOSO problems on integer lattices, that is, MOSO problems where the feasible set is a subset of an integer lattice.

In the first study, we propose the Retrospective Partitioned Epsilon-constraint with Relaxed Local Enumeration (R-PεRLE) algorithm to solve the bi-objective simulation optimization problem on integer lattices. R-PεRLE is designed for sampling efficiency. It uses a retrospective approximation (RA) framework to repeatedly call the PεRLE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The PεRLE sample-path solver is designed to solve the sample-path problem only to within a tolerance commensurate with the sampling error. It comprises a call to each of the Pε and RLE algorithms, in sequence. First, Pε searches for new points to add to the sample-path local efficient set by solving multiple constrained single-objective optimization problems. Pε places constraints to locate new sample-path local efficient points that are a function of the standard error away, in the objective space, from those already obtained. Then, the set of sample-path local efficient points found by Pε is sent to RLE, which is a local crawling algorithm that ensures the set is a sample-path approximate local efficient set. As the number of RA iterations increases, R-PεRLE provably converges to a local efficient set with probability one under appropriate regularity conditions. We also propose a naive, provably-convergent benchmark algorithm for problems with two or more objectives, called R-MinRLE. R-MinRLE is identical to R-PεRLE except that it replaces the Pε algorithm with an algorithm that updates one local minimum on each objective before invoking RLE. R-PεRLE performs favorably relative to R-MinRLE and the current state of the art, MO-COMPASS, in our numerical experiments. Our work points to a family of RA algorithms for MOSO on integer lattices that employ RLE for certification of a sample-path approximate local efficient set, and for which the convergence guarantees are provided in this study.

In the second study, we present the PyMOSO software package for solving multi-objective simulation optimization problems on integer lattices, and for implementing and testing new simulation optimization (SO) algorithms. First, for solving MOSO problems on integer lattices, PyMOSO implements R-PεRLE and R-MinRLE, which are developed in the first study. Both algorithms employ pseudo-gradients, are designed for sampling efficiency, and return solutions that, under appropriate regularity conditions, provably converge to a local efficient set with probability one as the simulation budget increases. PyMOSO can interface with existing simulation software and can obtain simulation replications in parallel. Second, for implementing and testing new SO algorithms, PyMOSO includes pseudo-random number stream management, implements algorithm testing with independent pseudo-random number streams run in parallel, and computes the performance of algorithms with user-defined metrics. For convenience, we also include an implementation of R-SPLINE for problems with one objective. The PyMOSO source code is available under a permissive open source license.
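
To make the retrospective approximation skeleton described above concrete, the sketch below shows a generic RA outer loop: increasing sample sizes, a sample-path solver called at each iteration, and warm starts from the previous iteration's solution. The solve_sample_path routine and the sample-size schedule are placeholders of my own, not the PεRLE solver or PyMOSO's actual API.

```python
import random

# Generic retrospective approximation (RA) outer loop, sketched for an
# assumed bi-objective problem on an integer lattice. solve_sample_path is a
# hypothetical stand-in for a sample-path solver such as PεRLE.

def noisy_objectives(x: tuple, n_reps: int) -> tuple:
    """Estimate both objectives by averaging n_reps noisy simulation replications."""
    f1 = sum((x[0] - 3) ** 2 + random.gauss(0, 1) for _ in range(n_reps)) / n_reps
    f2 = sum((x[1] - 5) ** 2 + random.gauss(0, 1) for _ in range(n_reps)) / n_reps
    return f1, f2

def solve_sample_path(start: set, n_reps: int) -> set:
    """Placeholder solver: crawl integer neighbors of the warm-start set and
    keep the points that are non-dominated in the estimated objectives."""
    candidates = {(x[0] + dx, x[1] + dy) for x in start
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    scored = {x: noisy_objectives(x, n_reps) for x in candidates}
    return {
        x for x, f in scored.items()
        if not any(g[0] <= f[0] and g[1] <= f[1] and g != f for g in scored.values())
    }

# RA loop: increasing sample sizes, each iteration warm-started from the last.
estimate = {(0, 0)}
for sample_size in (10, 20, 40, 80):   # assumed geometric sample-size schedule
    estimate = solve_sample_path(estimate, sample_size)
print(sorted(estimate))
```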

19. Stage acoustics for symphony orchestras in concert halls / Dammerud, Jens Jørgen (January 2009)
No description available.

20. Multi-objective optimization approaches to efficiency assessment and target setting for bank branches / Xu, Cong (January 2018)
This thesis focuses on combining data envelopment analysis (DEA) and multi-objective linear programming (MOLP) methods to set targets by referencing peers' performances and decision-makers' (DMs) preferences. Many past studies have demonstrated the importance of a company having a target; however, obtaining a feasible yet challenging target has always been difficult for companies. Since DEA was proposed in 1978, it has become one of the most popular performance assessment tools. The production possibility set and efficient frontier established by DEA provide solid and scientific reference information for managers to evaluate an individual unit's efficiency. Based on the successful experience of DEA in performance assessment, many scholars have noted that DEA can be used to set appropriate targets as well; however, traditional DEA models do not include DMs' preference information, which is crucial to the target-setting process. Therefore, several MOLP methods have been introduced to include DMs' preferences in the target-setting process based on the DEA efficient frontier and production possibility set. The trade-off-based method is one of the most popular interactive methods to have been incorporated with DEA. However, there are several gaps in the current research: (1) the trade-off-based method can require so many interactions that DMs are unable to finish the interactive process; (2) DMs might find it very difficult to provide the preference information required by MOLP models; and (3) DMs cannot obtain an intuitive view of the efficient frontier.

To address these gaps, this thesis proposes three new trade-off-based interactive target-setting models based on the DEA production possibility set and efficient frontier to improve DMs' experience when setting targets. The three models can work independently or can be combined during the decision-making process.

The piecewise linear model uses a piecewise linear assumption to approximate DMs' real utility function. It gradually narrows down the region that could contain the DMs' most-preferred solution (MPS) until it reaches an acceptable range. This model can help DMs who have limited time for interaction but want a global view of the entire efficient frontier, and it has proven very helpful when DMs are not sensitive to close efficient solutions. The prioritized trade-off model provides a new way for a DM to learn about the efficient frontier, allowing the DM to explore it along a preferred direction with a series of trade-off tables and trade-off figures as visual aids. The stepwise trade-off model focuses on situations where the number of objectives (outputs/inputs for the DEA model) is large and DMs cannot provide all indifference trade-offs between all the objectives simultaneously. To ease the DMs' burden, the stepwise model starts from two objectives and gradually includes new objectives in the decision-making process, assuming that the indifference trade-offs between previously considered objectives are fixed, until all objectives are included.

All three models have been validated through numerical examples and case studies of a Chinese state-owned bank, helping DMs to explore their MPS in the DEA production possibility set.
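
For readers unfamiliar with DEA, the sketch below solves the classic input-oriented CCR efficiency model for each unit as a linear program with scipy.optimize.linprog. The branch data are invented, and this plain efficiency model is only the starting point on which the preference-based target-setting models described above would build.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR (constant returns to scale) DEA efficiency, solved as an
# LP per decision-making unit. Branch data (inputs: staff, operating cost;
# outputs: loans, deposits) are made up for illustration.

X = np.array([[20, 15, 30, 25],       # input 1 per branch
              [300, 200, 450, 400]])  # input 2 per branch
Y = np.array([[500, 350, 600, 550],   # output 1 per branch
              [800, 600, 900, 700]])  # output 2 per branch

def ccr_efficiency(o: int) -> float:
    """Efficiency theta of branch o: min theta s.t. a lambda-weighted composite
    of peers uses at most theta * inputs_o and produces at least outputs_o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta; lambdas have zero cost
    A_inputs = np.hstack([-X[:, [o]], X])          # sum_j lam_j x_ij - theta * x_io <= 0
    A_outputs = np.hstack([np.zeros((s, 1)), -Y])  # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n      # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return float(res.x[0])

for o in range(X.shape[1]):
    print(f"branch {o}: efficiency = {ccr_efficiency(o):.3f}")
```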