About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Imaging Pain And Brain Plasticity: A Longitudinal Structural Imaging Study

Bishop, James Hart 01 January 2017 (has links)
Chronic musculoskeletal pain is a leading cause of disability worldwide, yet the mechanisms of chronification and the neural responses to effective treatment remain elusive. Non-invasive imaging techniques are useful for investigating brain alterations associated with health and disease. Thus, the overall goal of this dissertation was to investigate white matter (WM) and grey matter (GM) structural differences in patients with musculoskeletal pain before and after a psychotherapeutic intervention, cognitive behavioral therapy (CBT). To aid in the interpretation of clinical findings, we used a novel porcine model of low back pain-like pathophysiology and developed a post-mortem, in situ, neuroimaging approach to facilitate translational investigation. The first objective of this dissertation (Chapter 2) was to identify structural brain alterations in chronic pain patients compared to healthy controls. To achieve this, we examined GM volume and diffusivity as well as WM metrics of complexity, density, and connectivity. Consistent with the literature, we observed robust differences in GM volume across a number of brain regions in chronic pain patients; however, findings of increased GM volume in several regions contrast with previous reports. We also identified WM changes, with pain patients exhibiting reduced WM density in tracts that project to descending pain modulatory regions, increased connectivity to default mode network structures, and bidirectional alterations in complexity. These findings may reflect network-level dysfunction in patients with chronic pain. The second aim (Chapter 3) was to investigate the reversibility, or neuroplasticity, of structural alterations in the chronic pain brain following CBT compared to an active control group. Longitudinal evaluation was carried out at baseline, after the 11-week intervention, and at a four-month follow-up. Again, we conducted structural brain assessments including GM morphometry and WM complexity and connectivity. We did not observe GM volumetric or WM connectivity changes, but we did discover differences in WM complexity after therapy and at follow-up visits. To facilitate mechanistic investigation of pain-related brain changes, we used a novel porcine model of low back pain-like pathophysiology (Chapter 6). This model replicates hallmarks of chronic pain, such as soft tissue injury and movement alteration. We also developed a novel protocol to perform translational post-mortem, in situ, neuroimaging in our porcine model to reproduce the WM and GM findings observed in humans, followed by a unique perfusion and immersion fixation protocol to enable histological assessment (Chapter 4). In conclusion, our clinical data suggest robust structural brain alterations in patients with chronic pain compared to healthy individuals, and in response to therapeutic intervention. However, the mechanism of these brain changes remains unknown. We therefore propose to use a porcine model of musculoskeletal pain with a novel neuroimaging protocol to promote mechanistic investigation and expand our interpretation of clinical findings.
162

Experimental Analysis of the Effects of Manipulations in Weighted Voting Games

Lasisi, Ramoni Olaoluwa 01 August 2013 (has links)
Weighted voting games are classic cooperative games which provide a compact representation for coalition formation models in human societies and multiagent systems. As useful as weighted voting games are in modeling cooperation among players, they are not immune to manipulation (i.e., dishonest behavior) by strategic players that may be present in the games. With the possibility of manipulation, it becomes difficult to establish or maintain trust and, more importantly, to assure fairness in such games. For these reasons, we conduct careful experimental investigations and analyses of the effects of manipulations in weighted voting games, including manipulation by splitting, merging, and annexation. These manipulations involve an agent or some agents misrepresenting their identities in anticipation of gaining more power or obtaining a higher portion of a coalition's profits at the expense of other agents in a game. We also investigate criteria for evaluating a game's robustness to manipulation, defined on the basis of theoretical and experimental analysis. For manipulation by splitting, we provide empirical evidence showing that the three prominent indices for measuring agents' power, Shapley-Shubik, Banzhaf, and Deegan-Packel, are all susceptible to manipulation when an agent splits into several false identities. We extend a previous result on manipulation by splitting in exact unanimity weighted voting games to the Deegan-Packel index, and present new results for excess unanimity weighted voting games. We partially resolve an important open problem concerning the bounds on the extent of power that a manipulator may gain when it splits into several false identities in non-unanimity weighted voting games. Specifically, we provide the first three non-trivial bounds for this problem using the Shapley-Shubik and Banzhaf indices. One of the bounds is also shown to be asymptotically tight. Furthermore, experiments on non-unanimity weighted voting games show that the three indices are highly susceptible to manipulation via annexation, while they are less susceptible to manipulation via merging. Given that the problems of calculating the Shapley-Shubik and Banzhaf indices for weighted voting games are NP-complete, we show that, when the manipulators' coalition sizes are restricted to a small constant, manipulators need to do only a polynomial amount of work to find a much improved power gain for both merging and annexation, and we present two enumeration-based pseudo-polynomial algorithms that manipulators can use. Finally, we argue and provide empirical evidence to show that, although finding the optimal beneficial merge is an NP-hard problem for both the Shapley-Shubik and Banzhaf indices, finding a beneficial merge is relatively easy in practice. Also, while it appears that we may be powerless to stop manipulation by merging in a given game, we suggest a measure, termed the quota ratio, that the game designer may be able to control; we deduce that a high quota ratio decreases the number of beneficial merges.
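The splitting manipulation is easy to see on a toy game. The following sketch is illustrative only (not the author's experimental code; the game [5; 3, 2] and the split are made up): it computes the normalized Banzhaf index by brute-force swing counting, and the exponential enumeration is consistent with the hardness results mentioned in the abstract.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index via brute-force swing counting.
    Exponential in the number of players."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = sum(weights[j] for j in coalition)
                if s < quota <= s + weights[i]:   # i is critical here
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

# Hypothetical game [5; 3, 2]: both players hold Banzhaf power 1/2
print(banzhaf([3, 2], 5))          # [0.5, 0.5]

# Player 0 splits into false identities of weights 2 and 1; the game
# becomes [5; 2, 1, 2], and the two identities together hold 2/3
power = banzhaf([2, 1, 2], 5)
print(power[0] + power[1])         # ~0.667 > 0.5: splitting paid off
```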
163

Human Body Motions Optimization for Able-Bodied Individuals and Prosthesis Users During Activities of Daily Living Using a Personalized Robot-Human Model

Menychtas, Dimitrios 16 November 2018 (has links)
Current clinical practice regarding upper-body prosthesis prescription and training lacks a standardized, quantitative method to evaluate the impact of the prosthetic device. The amputee care team typically relies on prior experience to provide prescription and training customized for each individual. As a result, it is quite challenging to determine the right type and fit of a prosthesis, and to provide appropriate training to properly utilize it early in the process. It is also very difficult to anticipate expected and undesired compensatory motions due to the reduced degrees of freedom of a prosthesis user. In an effort to address this, a tool was developed to predict and visualize the expected upper limb movements from a prescribed prosthesis and its suitability to the needs of the amputee. It is expected to help clinicians make decisions such as choosing between a body-powered and a myoelectric prosthesis, and whether to include a wrist joint. To generate the motions, a robotics-based model of the upper limbs and torso was created and a weighted least-norm (WLN) inverse kinematics algorithm was used. The WLN assigns a penalty (i.e., the weight) to each joint to create a priority among redundant joints; as a result, certain joints contribute more to the total motion. Two main criteria were hypothesized to dictate human motion. The first was joint prioritization using a static weighting matrix: since different joints can be used to move the hand in the same direction, joint priority selects between equivalent joints. The second was to select a range of motion (ROM) for each joint specifically for a task, on the assumption that if the joints' ROM is limited, all unnatural postures that still satisfy the task are excluded from the available solutions. Three sets of static joint prioritization weights were investigated: a set of weights optimized specifically for each task, a general set of static weights optimized over all tasks, and a set of weights based on each joint's absolute average velocity. Additionally, task joint limits were applied both independently and in conjunction with the static weights to assess the simulated motions they can produce. Using a generalized weighted inverse control scheme to resolve the redundancy, a human-like posture for each specific individual was created. Motion capture (MoCap) data were utilized to generate the weighting matrices required to resolve the kinematic redundancy of the upper limbs. Fourteen able-bodied individuals and eight prosthesis users with a transradial amputation on the left side participated in MoCap sessions, performing ROM and activities of daily living (ADL) tasks. The methods proposed here incorporate a patient's anthropometrics, such as height, limb lengths, and degree of amputation, to create an upper-body kinematic model. The model has 23 degrees of freedom (DoFs) to reflect a human upper body and can be adjusted to reflect levels of amputation. The weighting factors resulting from this process show how joints are prioritized during each task; their physical meaning is to demonstrate which joints contribute more to the task. Since the motion is distributed differently between able-bodied individuals and prosthesis users, the weighting factors shift accordingly. This shift highlights the compensatory motions that exist in prosthesis users.
The results show that using a set of joint prioritization weights optimized for each specific task gave the lowest RMS error, compared to weights optimized across all tasks. The velocity-based weights had a slightly higher RMS error than the task-optimized weights, though the difference was not statistically significant. The biggest benefit of the velocity-based weights is their simplicity to implement compared to the optimized weights; they also explicitly show how mobile each joint is during a task and can be used alongside the ROM to identify compensatory motion. The inclusion of task joint limits gave lower RMS error when the joint movements were similar across subjects, so that the ROM of each joint for the task could be established more accurately. When the joint movements differed too much among participants, the inclusion of task limits was detrimental to the simulation. Therefore, the static set of task-specific optimized weights was found to be the most accurate and robust method, while the velocity-based weights method was simpler with similar accuracy. The methods presented here were integrated into a previously developed graphical user interface (GUI) to allow the clinician to input the data of the prospective prosthesis user. The simulated motions can be presented as an animation that performs the requested task. Ultimately, the final animation can be used as a proposed kinematic strategy that a prosthesis user and a clinician can refer to as a guideline during the rehabilitation process. This work has the potential to impact current prosthesis prescription and training by providing personalized proposed motions for a task.
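The weighted least-norm resolution described above is commonly written qdot = W^{-1} J^T (J W^{-1} J^T)^{-1} xdot, where the weight matrix W penalizes joint motion. A minimal sketch follows, using a made-up planar three-link arm in place of the 23-DoF upper-body model; the configuration, weights, and hand velocity are illustrative assumptions.

```python
import numpy as np

def wln_ik_step(J, xdot, w):
    """Weighted least-norm step: the joint velocity minimizing
    qdot^T W qdot subject to J qdot = xdot. A larger weight makes a
    joint 'expensive', so it contributes less to the motion."""
    W_inv = np.diag(1.0 / np.asarray(w, dtype=float))
    return W_inv @ J.T @ np.linalg.solve(J @ W_inv @ J.T, xdot)

def jacobian(q):
    """2x3 end-effector position Jacobian of a planar 3-link arm with
    unit link lengths (a stand-in for the thesis's full model)."""
    c = np.cumsum(q)
    Jx = [-np.sin(c[i:]).sum() for i in range(3)]
    Jy = [np.cos(c[i:]).sum() for i in range(3)]
    return np.array([Jx, Jy])

q = np.array([0.3, 0.4, 0.2])            # current joint angles (made up)
xdot = np.array([0.1, 0.0])              # desired hand velocity
print(wln_ik_step(jacobian(q), xdot, [1, 1, 1]))    # uniform priority
print(wln_ik_step(jacobian(q), xdot, [10, 1, 1]))   # penalize joint 1
```

With the higher weight on the first joint, the solver shifts the same hand motion onto the cheaper distal joints, which is how shifting weights exposes compensatory strategies.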
164

Construction of lattice rules for multiple integration based on a weighted discrepancy

Sinescu, Vasile January 2008 (has links)
High-dimensional integrals arise in a variety of areas, including quantum physics, the physics and chemistry of molecules, statistical mechanics, and, more recently, financial applications. To approximate multidimensional integrals, one may use Monte Carlo methods, in which the quadrature points are generated randomly, or quasi-Monte Carlo methods, in which the points are generated deterministically. One particular class of quasi-Monte Carlo methods for multivariate integration is represented by lattice rules. The lattice rules constructed throughout this thesis allow good approximations to integrals of functions belonging to certain weighted function spaces. These function spaces were proposed as an explanation of why integrals in many variables appear to be successfully approximated even though the standard theory indicates that the number of quadrature points required for reasonable accuracy would be astronomical because of the large number of variables. The purpose of this thesis is to contribute theoretical results regarding the construction of lattice rules for multiple integration. We consider both lattice rules for integrals over the unit cube and lattice rules suitable for integrals over Euclidean space. The research reported throughout the thesis is devoted to finding the generating vector required to produce lattice rules that have what is termed a low weighted discrepancy. In simple terms, the discrepancy is a measure of the uniformity of the distribution of the quadrature points or, in other settings, a worst-case error. One of the assumptions used in these weighted function spaces is that the variables are arranged in decreasing order of importance; the assignment of weights in this situation results in so-called product weights. In other applications it is rather the importance of groups of variables that matters; this situation is modelled by using function spaces in which the weights are general. In the weighted settings mentioned above, the quality of the lattice rules is assessed by the weighted discrepancy mentioned earlier. Under appropriate conditions on the weights, the lattice rules constructed here produce a convergence rate of the error that ranges from O(n^(-1/2)) to the (believed) optimal O(n^(-1+δ)) for any δ > 0, with the constant involved independent of the dimension.
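Concretely, a rank-1 lattice rule with n points and generating vector z approximates an integral over the unit cube by averaging the integrand over the points x_k = {kz/n}, k = 0, ..., n-1. The sketch below is illustrative only: its generating vector is arbitrary, whereas the thesis is precisely about constructing z so that the resulting rule has low weighted discrepancy.

```python
import numpy as np

def rank1_lattice_rule(f, z, n):
    """Approximate the integral of f over [0,1]^d with the rank-1
    lattice points x_k = frac(k * z / n), k = 0, ..., n-1."""
    k = np.arange(n).reshape(-1, 1)
    points = (k * np.asarray(z) % n) / n
    return f(points).mean()

# Illustrative only: d = 4, prime n, arbitrary generating vector z;
# the test integrand integrates to exactly 1 over the unit cube
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
print(rank1_lattice_rule(f, z=[1, 182, 449, 181], n=1009))  # close to 1
```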
165

Quality of Service in Ad Hoc Networks by Priority Queuing

Tronarp, Otto January 2003 (has links)
The increasing use of information technology in military affairs raises the need for robust, high-capacity radio networks. The network will be used to provide several different types of services, for example group calls and situation awareness services. All services have specific demands on packet delays and packet losses in order to be fully functional, and therefore there is a need for a Quality of Service (QoS) mechanism in the network.

In this master thesis we examine the possibility of providing a QoS mechanism in ad hoc networks by using priority queues. The study includes two different queuing schemes, namely fixed priority queuing and weighted fair queuing. The performance of the two queuing schemes is evaluated and compared with respect to the ability to provide differentiation in network delay, i.e., to provide high-priority traffic with lower delays than low-priority traffic. The study is mainly done by simulations, but for fixed priority queuing we also derive an analytical approximation of the network delay.

Our simulations show that fixed priority queuing provides a sharp delay differentiation between service classes, while weighted fair queuing gives the ability to control the delay differentiation. Either queuing scheme alone might not be the best solution for providing QoS; instead, we suggest that a combination of them be used.
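As a rough illustration of the two schemes compared in the thesis, here is a hypothetical sketch (class names and structure are my own, not from the thesis): strict fixed priority always serves the highest-priority backlogged class, while the weighted-fair variant orders packets by virtual finish time so that delay differentiation follows the configured weights. The virtual-time bookkeeping is deliberately simplified relative to true WFQ.

```python
import heapq
from collections import deque

class FixedPriorityScheduler:
    """Strict priority queuing: always serve the highest-priority
    non-empty queue, giving sharp delay differentiation."""
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]  # 0 = highest

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:
            if q:
                return q.popleft()
        return None

class WeightedFairScheduler:
    """Simplified weighted fair queuing: serve packets in order of
    virtual finish time, so a flow's delay scales with 1/weight."""
    def __init__(self, weights):
        self.weights = weights
        self.last_finish = [0.0] * len(weights)
        self.heap = []

    def enqueue(self, flow, size):
        now = self.heap[0][0] if self.heap else 0.0   # crude virtual time
        start = max(now, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, flow, size))

    def dequeue(self):
        return heapq.heappop(self.heap)[1] if self.heap else None
```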
166

Frame Allocation and Scheduling for Relay Networks in the LTE Advanced Standard

Roth, Stefan January 2010 (has links)
The use of relays is seen as a promising way to extend cell coverage and increase rates in LTE Advanced networks. Instead of increasing the number of base stations (BS), relays with lower cost could provide similar gains. A relay has a wireless link to the closest BS as its only connection to the core network and covers areas close to the cell edge or other areas with limited rates.

Performing transmissions in several hops (BS-relay and relay-user) requires more radio resources than direct transmission. This thesis studies how the available radio resources should be allocated between relays and users in order to maximize throughput and/or fairness. Time- and frequency-multiplexed backhaul are investigated under a full-buffer traffic assumption. It is shown that the system will be backhaul-limited and that the two ways of multiplexing perform equally when maximising throughput and/or fairness. The analysis results in a set of throughput/fairness suboptimal solutions, dependent on how many relays are used per cell. The results are verified by simulations, which also show the limiting effects on throughput caused by interference between relays.

It is also analysed how the resource allocation should be done given non-full-buffer traffic. A resource allocation that minimises packet delay given a certain number of relays per cell is presented. The analysis is based on queuing theory.

Finally, some different schedulers and their suitability for relay networks are discussed. Simulation results are shown comparing the throughput and fairness of Round Robin, Weighted Round Robin, Proportional Fairness, and Weighted Proportional Fairness schemes. It is shown that allocating resources among the relays according to the number of users served by each relay improves fairness.
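The backhaul limitation has a simple arithmetic core: relayed traffic traverses two hops, so with multiplexed backhaul the frame fraction α given to the BS-relay link must satisfy α·R_backhaul = (1 − α)·R_access for both hops to carry the same volume. A small sketch of this balance, with made-up link rates (not the thesis's model, just the standard two-hop argument):

```python
def two_hop_split(r_backhaul, r_access):
    """Frame fraction for the backhaul hop such that both hops carry
    the same traffic volume, and the resulting end-to-end rate."""
    alpha = r_access / (r_access + r_backhaul)
    return alpha, alpha * r_backhaul   # == (1 - alpha) * r_access

# Hypothetical rates in Mbit/s: strong BS-relay link, weaker access link
alpha, rate = two_hop_split(r_backhaul=50.0, r_access=10.0)
print(alpha, rate)   # ~0.17 of the frame on backhaul, ~8.3 Mbit/s end-to-end
```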
167

Kappa — A Critical Review

Xier, Li January 2010 (has links)
The Kappa coefficient is widely used in assessing categorical agreement between two raters or two methods; it can also be extended to more than two raters (methods). When using Kappa, the shortcomings of this coefficient should not be neglected. Bias and prevalence effects lead to paradoxes of Kappa. These problems can be avoided by using certain other indexes alongside it, but such solutions to the Kappa problems are not fully satisfactory. This paper gives a critical survey of the Kappa coefficient together with a real-life example. A useful alternative statistical approach, the rank-invariant method, is also introduced and applied to analyze the disagreement between two raters.
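For orientation, Cohen's kappa is κ = (p_o − p_e)/(1 − p_e): observed agreement corrected for the chance agreement implied by the marginals. The sketch below uses two made-up 2×2 tables with identical observed agreement to reproduce the prevalence effect mentioned above.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table of two raters."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                          # observed agreement
    p_e = (table.sum(0) / n) @ (table.sum(1) / n)      # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Both hypothetical tables show 90% observed agreement, but the skewed
# prevalence in the second drives kappa down: the "prevalence paradox"
print(cohens_kappa([[45, 5], [5, 45]]))   # ~0.80
print(cohens_kappa([[85, 5], [5, 5]]))    # ~0.44
```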
168

Active Learning with Statistical Models

Cohn, David A., Ghahramani, Zoubin, Jordan, Michael I. 21 March 1995 (has links)
For many types of learners one can compute the statistically 'optimal' way to select data. We review how these techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
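The paper derives closed-form estimates of the learner's predictive variance and selects queries that minimize its expectation. The sketch below is a loose stand-in under stated assumptions: it pairs locally weighted linear regression with a bootstrap-variance heuristic rather than the paper's exact variance estimator, and the data, kernel width, and candidate grid are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def lwr_predict(X, y, x0, tau=0.3):
    """Locally weighted linear regression evaluated at x0
    (Gaussian kernel of width tau)."""
    w = np.exp(-(X - x0) ** 2 / (2 * tau ** 2))
    A = np.column_stack([np.ones_like(X), X])
    theta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return theta @ np.array([1.0, x0])

def next_query(X, y, candidates, n_boot=30):
    """Query where bootstrap replicates of the learner disagree most,
    a heuristic proxy for minimizing expected predictive variance."""
    preds = np.empty((n_boot, len(candidates)))
    for b in range(n_boot):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        preds[b] = [lwr_predict(X[idx], y[idx], c) for c in candidates]
    return candidates[np.argmax(preds.var(axis=0))]

X = rng.uniform(0, 1, 20)
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=20)
print(next_query(X, y, candidates=np.linspace(0, 1, 50)))
```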
169

Carleson-type inequalities in harmonically weighted Dirichlet spaces

Chacon Perez, Gerardo Roman 01 May 2010 (has links)
Carleson measures for harmonically weighted Dirichlet spaces are characterized, and a version of a maximal inequality for these spaces is shown. Interpolating sequences and closed-range composition operators are also studied in this context.
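For readers outside the area, the standard objects (in Richter's formulation; the thesis's precise normalizations may differ) are:

```latex
% Harmonically weighted Dirichlet space D(mu), mu a finite positive
% measure on the circle; P-mu denotes the Poisson integral of mu.
\[
  P\mu(z) = \int_{\partial\mathbb{D}} \frac{1-|z|^2}{|\zeta - z|^2}\,d\mu(\zeta),
  \qquad
  D_\mu(f) = \frac{1}{\pi}\int_{\mathbb{D}} |f'(z)|^2\, P\mu(z)\, dA(z),
\]
\[
  \|f\|_{D(\mu)}^2 = \|f\|_{H^2}^2 + D_\mu(f).
\]
% A positive measure nu is a Carleson measure for D(mu) when
\[
  \int |f|^2\, d\nu \;\le\; C\,\|f\|_{D(\mu)}^2
  \quad\text{for all } f \in D(\mu).
\]
```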
170

Development of a branch and price approach involving vertex cloning to solve the maximum weighted independent set problem

Sachdeva, Sandeep 12 April 2006 (has links)
We propose a novel branch-and-price (B&P) approach to solve the maximum weighted independent set problem (MWISP). Our approach uses clones of vertices to create edge-disjoint partitions from vertex-disjoint partitions. We solve the MWISP on sub-problems based on these edge-disjoint partitions using a B&P framework, which coordinates sub-problem solutions through an equivalence relationship between a vertex and each of its clones. We present test results for standard instances and randomly generated graphs for comparison. We show analytically and computationally that our approach gives tight bounds and that it solves both dense and sparse graphs quite quickly.
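For concreteness, the underlying problem can be stated, and on tiny instances solved, by brute force; the graph below is made up, and this exponential enumeration is exactly what the thesis's B&P framework is designed to avoid on larger dense and sparse graphs.

```python
from itertools import combinations

def mwis_brute_force(weights, edges):
    """Exact maximum weighted independent set by enumeration: find the
    heaviest vertex subset containing no edge of the graph."""
    n = len(weights)
    best, best_set = 0, ()
    for r in range(n + 1):
        for S in combinations(range(n), r):
            chosen = set(S)
            if any(u in chosen and v in chosen for u, v in edges):
                continue                       # not independent
            w = sum(weights[v] for v in S)
            if w > best:
                best, best_set = w, S
    return best, best_set

# Hypothetical 5-vertex cycle graph with vertex weights
weights = [4, 5, 3, 6, 2]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
print(mwis_brute_force(weights, edges))   # (11, (1, 3)): weight 5 + 6
```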
