41 |
Family building in adoption / Prynn, Barbara. January 2000 (has links)
No description available.
|
42 |
Movement artefact rejection in impedance pneumography / Khambete, Niranjan D. January 2000 (has links)
Impedance pneumography is a non-invasive and very convenient technique for monitoring breathing. A major drawback, however, is that breathing often cannot be monitored reliably because of the large artefacts introduced by body movements. The aim of this project was to develop a technique for reducing these 'movement artefacts'. In the first stage of the project, experimental and theoretical studies were carried out to identify an 'optimum' electrode placement that would maximise the 'sensitivity' of measured thoracic impedance to lung resistivity changes. This maximum sensitivity was obtained when the drive and the receive electrode pairs were placed in two different horizontal planes. The sensitivity was also found to increase with increasing electrode spacing. In the second stage, the optimum electrode placement was used to record thoracic impedance during movements. Movement artefacts occurred only when the electrodes moved with the skin away from their initial locations. Based on these observations, a strategy was devised for placing four electrodes in one plane so that movement artefacts could be reduced by combining the two independent measurements. Further studies showed that movement artefacts could be reduced using a strategic 6-electrode placement in three dimensions. It was also possible to detect obstructive apnoea, as the amplitude of the breathing signal was higher than that produced during obstructive apnoea, and this difference was statistically significant. In these studies, the main cause of movement artefacts was identified as the movement of electrodes with the skin. A significant reduction in movement artefacts was obtained using the 6-electrode placement. This advantage of the 6-electrode placement proposed in this project can be of great use in clinical applications such as apnoea monitoring in neonates. Further studies can be carried out to determine an optimum frequency of injected current to achieve reduction in residual movement artefacts.
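The idea of cancelling artefacts by combining two independent measurements can be sketched numerically. This is a hypothetical illustration, not the thesis's actual signal processing: it assumes the two electrode pairs see the same breathing-related impedance change but an opposite-signed movement artefact, so a simple average cancels the artefact.

```python
import numpy as np

def combine_channels(z1, z2):
    """Recover breathing from two simultaneous impedance channels,
    assuming (hypothetically) that both see the same breathing-related
    impedance change but opposite-signed movement artefact."""
    return 0.5 * (np.asarray(z1) + np.asarray(z2))

# Synthetic demonstration
t = np.linspace(0.0, 10.0, 1000)
breathing = np.sin(2 * np.pi * 0.25 * t)      # ~15 breaths per minute
artefact = 3.0 * np.sin(2 * np.pi * 1.3 * t)  # large movement artefact
z1 = breathing + artefact
z2 = breathing - artefact
recovered = combine_channels(z1, z2)
print(float(np.max(np.abs(recovered - breathing))))  # ~0: artefact cancelled
```

In practice the two channels would not share the artefact so cleanly, which is why the thesis moves on to a 6-electrode placement in three dimensions.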
|
43 |
PAYMENT OF ADVANCED PLACEMENT EXAM FEES BY VIRGINIA PUBLIC SCHOOL DIVISIONS AND ITS IMPACT ON ADVANCED PLACEMENT ENROLLMENT AND SCORES / Cirillo, Mary Grupe. 26 February 2010 (has links)
The purpose of this study was to determine the impact of Virginia school divisions' policy of paying the fee for students to take Advanced Placement exams on Advanced Placement course enrollment, the number of Advanced Placement exams taken by students, the average scores earned, and the percent of students earning qualifying scores of 3, 4, or 5 on Advanced Placement exams. The hierarchical regression models utilized Advanced Placement scores and school demographic data provided by the Virginia Department of Education, combined with survey data on Advanced Placement policies and the number of years that exam fees had been paid, collected from school principals, directors of counseling, and division officials. School-level demographics considered in the analyses included school size, socioeconomic status, ethnicity, and school achievement. Advanced Placement enrollment and the number of exams taken increased significantly over the period of study, while the average scores and number of qualifying scores earned by Virginia students remained unchanged. The payment of exam fees by Virginia school divisions had no impact on the change in Advanced Placement participation. Average scores and the percent of qualifying scores earned on Advanced Placement science exams fell over the study period, though participation grew in line with overall Advanced Placement participation. No significant differences in the change in Advanced Placement participation or scores were observed based on the underserved minority enrollment of schools; however, enrollment, average scores, and qualifying scores on Advanced Placement exams all fell significantly as the percent of students that qualified for free and reduced-lunch programs at a school increased.
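The hierarchical regression design described above enters demographic controls first and the fee-payment variable second, then examines the increment in explained variance. A minimal sketch of that procedure, using synthetic stand-in data (not the Virginia dataset; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
size = rng.normal(size=n)     # school size (standardized stand-in)
ses = rng.normal(size=n)      # socioeconomic status (stand-in)
fee_years = rng.integers(0, 6, size=n).astype(float)  # years fees were paid
# Outcome is driven, by construction, only by the demographic controls
y = 0.8 * size - 0.5 * ses + rng.normal(scale=0.5, size=n)

def r_squared(cols, y):
    """R-squared of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

r2_step1 = r_squared([size, ses], y)             # step 1: demographics only
r2_step2 = r_squared([size, ses, fee_years], y)  # step 2: add fee payment
print(round(r2_step2 - r2_step1, 4))  # small increment: fee payment adds little
```

The near-zero increment in R-squared mirrors the study's finding that fee payment had no impact once demographics were controlled.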
|
44 |
The relationship among personality, perception, and job preference / Berger, Edward H. January 1966 (has links)
Thesis (Ph.D.)--Boston University / PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. / This study investigated the relationship between personality, perceived need satisfaction, and job area preference. Since the early 1950's, theoretical interest in vocational psychology has focussed, in part, on the role of personality determinants in occupational choice. A number of major lines of research developed, primarily using early family environment, elements of psychoanalytic theory, and need-strength patterns as the personality dimensions. Although most studies have found significant differences among the personality characteristics of persons in different occupations, there has been a paucity of integrative theorizing to synthesize these results. Moreover, the relationship between personality and occupational choice is usually weak, and many "external" demographic factors seem to be more important.
It was felt that some of these objections could be overcome by decreasing the "conceptual distance" between the dependent and independent variables, so that more powerful and intrinsic relationships could be found. To this end, job area preference, rather than occupational choice, was used as the dependent variable. Need strength, as measured by the Edwards Personal Preference Schedule (EPPS), was the personality dimension. Perception of the area as need-satisfying was a second independent variable, measured by a Nursing Questionnaire designed for this study.
The following three hypotheses were tested:
1. Job area preference is related to personality need strength.
2. Perception of need satisfaction in the most preferred area is related to personality need strength.
3. Job area preference is related to the interaction of personality need strength and perceived need satisfaction in the most preferred area.
The major interest of this study was to demonstrate that the interaction between personality and perceived need satisfaction is a better predictor of job area preference than either personality or perceived need satisfaction alone.
The sample consisted of 137 female senior student nurses enrolled in hospital-affiliated diploma programs. Each nurse ranked twelve in-hospital nursing areas from most to least preferred as a place to work as a first-year staff nurse. These rankings were subjected to a factor analysis, from which six job area preference groups were obtained. The Nursing Questionnaire consisted of twelve scales, which 11 judges rated as to which needs they seemed to reflect. These needs were congruent with 12 of the EPPS need scales.
Hypothesis I was confirmed. Job area preference is related to personality need strength. On the EPPS, this sample of nurses, as a whole, gave a significantly different profile from that of college women norms. Personality differences were found among certain preference groups.
Hypothesis II was confirmed. Perception of need satisfaction in the most preferred area is related to personality need strength.
Hypothesis III was confirmed. On the basis of a Multiple Group Discriminant Analysis the interaction between personality (as measured by the EPPS) and perceived need satisfaction (as measured by the Nursing Questionnaire) is a better predictor of job area preference than either personality or perception alone.
It would seem that job area preference is a useful variable in investigating vocational decision making. Combining personality and perception enhances the strength of the relationship between independent and dependent variables. Implications for future research and vocational guidance were discussed, in terms of using this paradigm to study other occupational fields with functional subareas, and of helping counselors guide students to look at occupational careers in terms of need satisfaction.
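The study's central claim, that the personality-by-perception interaction predicts group membership better than either variable alone, can be illustrated with a toy classification sketch. The data are synthetic, and a simple sign rule stands in for the Multiple Group Discriminant Analysis actually used:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
need = rng.normal(size=n)        # EPPS-style need strength (synthetic)
percept = rng.normal(size=n)     # perceived need satisfaction (synthetic)
interaction = need * percept
# Group membership is constructed to depend only on the interaction
group = (interaction > 0).astype(int)

def accuracy(feature, group):
    # the simplest linear rule: classify by the sign of the feature
    pred = (feature > 0).astype(int)
    return float(np.mean(pred == group))

print(accuracy(need, group))         # near chance level
print(accuracy(interaction, group))  # perfect separation by construction
```

When membership hinges on how personality and perception combine, neither predictor alone can separate the groups, which is the pattern Hypothesis III asserts.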
|
45 |
Resilient controller placement problems in software defined wide-area networks / Tanha, Maryam. 31 January 2019 (has links)
Software Defined Networking (SDN) is an emerging paradigm for network design and management. By providing network programmability and the separation of control and data planes, SDN offers salient features such as simplified and centralized management, reduced complexity, and accelerated innovation. Using SDN, the control and management of network devices are performed by centralized software, called controllers. In particular, Software-Defined Wide Area Networks (SD-WANs) have made considerable headway in recent years. However, SDN can be a double-edged sword with regard to network resilience. The great reliance of SDN on the logically centralized control plane has heightened the concerns of research communities and industries about the resilience of the control plane. Although the controller provides flexible and fine-grained resilience management features that contribute to faster and more efficient failure detection and containment in the network, it is the Achilles' heel of SDN resilience. The resilience of the control plane has a great impact on the functioning of the whole system. The challenges associated with the resilience of the control plane should be addressed properly to benefit from SDN's unprecedented capabilities. This dissertation investigates the aforementioned issues by categorizing them into two groups. First, the resilient design of the control plane is studied. The resilience of the control plane is strongly linked to the Controller Placement Problem (CPP), which deals with the positioning and assignment of controllers to the forwarding devices. A resilient CPP needs to assign more than one controller to a switch while satisfying certain Quality of Service (QoS) requirements. We propose a solution for such a problem that, unlike most former studies, takes both the switch-controller/inter-controller latency requirements and the capacity of the controllers into account to meet the traffic loads of switches.
The proposed algorithms, one of which has polynomial-time complexity, adopt a clique-based approach from graph theory to find high-quality solutions heuristically. Second, due to the high dynamics of SD-WANs in terms of variations in the traffic loads of switches and the QoS requirements, which further affect the load incurred on the controllers, adjustments to the controller placement are inevitable over time. Therefore, resilient switch reassignment and incremental controller placement are proposed to reuse the existing geographically distributed controllers as much as possible or make only slight modifications to the controller placement. This assists service providers in decreasing their operational and maintenance costs. We model these problems as variants of the problem of scheduling on parallel machines while considering the capacity of controllers, reassignment cost, and resiliency (which have not been addressed in existing research) and propose approximation algorithms to solve them efficiently. To sum up, CPP has a great impact on the resilience of the SDN control plane and subsequently on the correct functioning of the whole network. Therefore, tailored mechanisms to enhance the resiliency of the control plane should be applied not only at the design stage of SD-WANs but also during their lifespan, to handle the dynamics and new requirements of such networks over time.
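A resilient controller placement of the kind described, with latency bounds, dual controller assignment, and controller capacities, can be sketched on a toy instance. The latencies, bounds, and capacity below are hypothetical, and an exhaustive search stands in for the dissertation's clique-based heuristics:

```python
import itertools

# Hypothetical symmetric propagation latencies (ms) between 5 SD-WAN sites
LAT = [
    [0, 2, 9, 4, 7],
    [2, 0, 3, 6, 8],
    [9, 3, 0, 5, 2],
    [4, 6, 5, 0, 3],
    [7, 8, 2, 3, 0],
]
N = len(LAT)
MAX_SC_LAT = 6   # switch-to-controller latency bound (assumed)
MAX_CC_LAT = 6   # inter-controller latency bound (assumed)
CAPACITY = 4     # switches each controller can serve (assumed)

def feasible(placement):
    """Resilient placement: every switch reaches two controllers within
    the latency bound, and no controller exceeds its capacity."""
    if any(LAT[a][b] > MAX_CC_LAT
           for a, b in itertools.combinations(placement, 2)):
        return False
    load = {c: 0 for c in placement}
    for s in range(N):
        reachable = sorted((c for c in placement if LAT[s][c] <= MAX_SC_LAT),
                           key=lambda c: LAT[s][c])
        if len(reachable) < 2:
            return False               # no backup controller for this switch
        for c in reachable[:2]:        # primary + backup assignment
            load[c] += 1
    return all(v <= CAPACITY for v in load.values())

def smallest_resilient_placement():
    # Exhaustive search on this toy instance; the dissertation uses
    # clique-based heuristics to scale to real topologies.
    for k in range(2, N + 1):
        for placement in itertools.combinations(range(N), k):
            if feasible(placement):
                return placement
    return None

print(smallest_resilient_placement())  # → (1, 2, 3)
```

Note that two controllers never suffice here: with dual assignment, each would carry all five switches and exceed its capacity, which is exactly the kind of interplay between resiliency and capacity the dissertation models.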
|
46 |
The role of the audience in product placement: development of an audience engrossment scale / Scott, Jane Margaret. Marketing, Australian School of Business, UNSW. January 2008 (has links)
Product placement is now a US$7.76 billion industry, flourishing as advertisers attempt to combat audience sophistication, zipping, zapping, muting of commercials, TiVo, media multi-tasking, the Internet, and digital television, all of which may signal the death knell of the interruptive commercial model. Yet whilst research on product placement is growing, it has not kept pace with the practice, and many findings do not converge across studies. This is likely because parameters remain undefined, there is no operational framework to describe how product placements are processed, and there is no agreement as to what effects are possible or how they should be examined. Most effects-based research has focussed on executional factors and on what the product placement does to the audience member, which assumes that the recipient is a passive participant. This thesis argues that the audience member is actually an active processor who should be the focus of research. This research distinguishes product placement from related activities and develops a new conceptual model of product placement processing. It puts a strong focus on the role of the audience member, stating that their level of familiarity with the placed brands and their level of engrossment with the entertainment story will affect their recognition of product placements in that story. Applying Rasch Measurement Theory, an Audience Engrossment scale is developed and refined over four stages of data collection, with 1360 respondents across seven films, to capture the quality of people's interaction with a film. The result is a scale comprising 19 feeling items, 10 arousal items, 6 appraisal items, and 7 cognitive effort items. The scale was then tested as part of the conceptual model, with 191 participants watching The Island and completing questionnaires after the film relating to their recognition of brands within the film and their level of engrossment.
Brand familiarity information was collected four weeks earlier. Onset prominence, high plot connection, dual modality and use by star were found to have the strongest direct effects on recognition, with brand familiarity and the four audience engrossment dimensions generally found to interact with the product placement characteristics as hypothesised.
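The scale above is built with Rasch Measurement Theory. As a point of reference, the dichotomous Rasch model relates a person's trait level to the probability of endorsing an item; a minimal sketch (generic, not the thesis's fitted scale or item parameters):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person with trait
    level `theta` (here, engrossment) endorses an item of difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose engrossment exactly matches the item difficulty endorses
# it with probability 0.5; higher engrossment raises the probability.
print(rasch_p(0.0, 0.0))                      # 0.5
print(rasch_p(2.0, 0.0) > rasch_p(0.0, 0.0))  # True
```

Rasch analysis fits such a model to responses so that items and persons share a single measurement continuum, which is what allows the four engrossment dimensions to be scaled and refined across the data-collection stages.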
|
47 |
Horizontal Well Placement Optimization in Gas Reservoirs Using Genetic Algorithms / Gibbs, Trevor Howard. May 2010 (has links)
Horizontal well placement determination within a reservoir is a significant and difficult step in the reservoir development process. Determining the optimal well location is a complex problem involving many factors, including geological considerations, reservoir and fluid properties, economic costs, lateral direction, and technical ability. The most thorough approach to this problem is that of an exhaustive search, in which a simulation is run for every conceivable well position in the reservoir. Although thorough and accurate, this approach is typically not used in real-world applications due to the time constraints from the excessive number of simulations.
This project suggests the use of a genetic algorithm applied to the horizontal well placement problem in a gas reservoir to reduce the required number of simulations. This research aims first to determine whether well placement optimization is even necessary in a gas reservoir, and if so, to determine the benefit of optimization. Performance of the genetic algorithm was analyzed through five different case scenarios, one involving a vertical well and four involving horizontal wells. The genetic algorithm approach is used to evaluate the effect of well placement in heterogeneous and anisotropic reservoirs on reservoir recovery. The wells are constrained by surface gas rate and bottom-hole pressure for each case.
This project's main new contribution is its application of genetic algorithms to study the effect of well placement optimization in gas reservoirs. Two fundamental questions have been answered in this research. First, does well placement in a gas reservoir affect reservoir performance? If so, what is an efficient method to find the optimal well location based on reservoir performance? The research provides evidence that well placement optimization is an important criterion during the reservoir development phase of a horizontal-well project in gas reservoirs, but it is less significant for vertical wells in a homogeneous reservoir. It is also shown that genetic algorithms are an extremely efficient and robust tool to find the optimal location.
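A genetic algorithm for well placement along these lines can be sketched as follows. The permeability field and the fitness function (a drainage-window permeability sum standing in for a full reservoir-simulation run) are hypothetical, as are all parameter values:

```python
import random

random.seed(42)
NX, NY = 20, 20
# Hypothetical heterogeneous permeability field with one high-quality region
perm = [[10.0 if 12 <= i <= 16 and 4 <= j <= 8 else 1.0
         for j in range(NY)] for i in range(NX)]

def fitness(well):
    """Stand-in for a reservoir-simulation run: recovery is proxied by
    permeability summed over a 5x5 drainage window around the well."""
    i0, j0 = well
    return sum(perm[i][j]
               for i in range(max(0, i0 - 2), min(NX, i0 + 3))
               for j in range(max(0, j0 - 2), min(NY, j0 + 3)))

def genetic_search(pop_size=30, generations=40):
    # each individual is a candidate well location (i, j)
    pop = [(random.randrange(NX), random.randrange(NY)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            (i1, _j1), (_i2, j2) = random.sample(elite, 2)
            child = (i1, j2)                  # crossover: mix parents' genes
            if random.random() < 0.2:         # mutation: nudge the i-gene
                child = (min(NX - 1, max(0, child[0] + random.choice((-1, 1)))),
                         child[1])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_search()
print(best, fitness(best))
```

Each generation evaluates only `pop_size` candidate locations rather than all `NX * NY` grid positions, which is the point of replacing exhaustive search with a genetic algorithm.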
|
48 |
An automatic optimization mechanism of circuit block partition for Fine-grain Multi-context Reconfigurable Process Unit / Chen, Jau-You. 26 July 2006
Due to the rapid development of today's multimedia communication systems, their complexity and scale increase day by day. For the real-time computing demands of these increasingly complicated systems, we can use not only VLSI chips but also, thanks to the growth of integrated-circuit manufacturing techniques, a Reconfigurable Process Unit to improve real-time computing. A Reconfigurable Process Unit is characterized by lower research and production costs and less time spent in research and development. At the same time, it offers more flexibility than VLSI chips and is better suited to exploiting real-time computing on an unspecified multimedia communication system. The Fine-grain Multi-context Reconfigurable Process Unit (FMRPU) has a multi-context mechanism and therefore takes less time when the system reconfigures. This thesis deals with a Computer-Aided Design environment for the FMRPU architecture, focusing on placement and routing based on a block partition method, and designs an automatic optimization mechanism that uses historical records to raise the rate of routable circuits.
Drawing on the spirit of various existing circuit algorithms, we add block-partition factors to form placement and routing tools based on block partition. Combining clustering with the limits imposed by the FMRPU hardware structure, we can obtain an accurate block partition on the FMRPU. As historical records accumulate, the estimated upper limit on the number of routable circuits approaches the actual figure. At the same time, after the Logic Block Partition, the probability that a circuit is routable is greatly improved, and the time consumed by repeated computation on unroutable circuits decreases. The experimental results reveal that the modified placement cost function obtains a large improvement over that in the master's thesis of Tzu-che Huang: not only does routability increase, but unnecessary computation is also greatly reduced. In routing, the negotiated congestion-delay algorithm, derived from a transformation of the maze routing algorithm, is well suited to the FMRPU, which has many optimization goals and limited routing resources. After redefining the cost function and the expenditure for routing, we can operate accurately, and the time spent on delayed circuits decreases.
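The negotiated-congestion idea, rerouting nets under a cost that grows with present sharing and with accumulated overuse history, can be sketched on a toy routing graph. The graph, capacities, and cost schedule below are illustrative, not the FMRPU's actual routing fabric:

```python
import heapq

# Tiny routing graph (hypothetical): two sources, one sink, capacity 1 per node
EDGES = {
    'S1': ['A', 'B'], 'S2': ['A', 'C'],
    'A': ['T'], 'B': ['T'], 'C': ['T'], 'T': [],
}
NETS = [('S1', 'T'), ('S2', 'T')]

def dijkstra(src, dst, cost):
    """Shortest path under a per-node cost function."""
    frontier = [(0.0, src, [src])]
    seen = set()
    while frontier:
        d, node, path = heapq.heappop(frontier)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in EDGES[node]:
            heapq.heappush(frontier, (d + cost(nxt), nxt, path + [nxt]))
    return None  # never reached in this toy graph

def negotiated_congestion(max_iters=10):
    hist = {n: 0.0 for n in EDGES}        # history cost, grows on overused nodes
    for it in range(max_iters):
        usage = {n: 0 for n in EDGES}
        routes = []
        penalty = float(it)               # present-sharing cost ramps up
        for src, dst in NETS:
            cost = lambda n: 1.0 + hist[n] + penalty * usage[n]
            path = dijkstra(src, dst, cost)
            for n in path[1:]:
                usage[n] += 1
            routes.append(path)
        over = [n for n in EDGES if n != 'T' and usage[n] > 1]
        if not over:                      # legal routing: no shared nodes
            return routes
        for n in over:
            hist[n] += 1.0                # negotiate: make overused nodes costly
        # all nets are ripped up and rerouted in the next iteration
    return None

print(negotiated_congestion())  # → [['S1', 'B', 'T'], ['S2', 'C', 'T']]
```

In the first iteration both nets grab node A; its history cost then rises, and the reroute spreads the nets over B and C, which is the negotiation mechanism in miniature.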
|
49 |
Optimal meter placement and transaction-based loss allocation in deregulated power system operation / Ding, Qifeng. 17 February 2005 (has links)
This dissertation investigates optimal meter placement and transaction-based loss allocation in deregulated power system operation.
Firstly, Chapter II introduces the basic idea of candidate measurement identification, which is the selection of candidate measurement sets, each of which will make the system observable under a given contingency (loss of measurements and network topology changes). A new method is then developed for optimal meter placement, which is the choice of the optimal combination out of the selected candidate measurement sets in order to ensure the entire system observability under any one of the contingencies.
A new method, which allows a natural separation of losses among individual transactions in a multiple-transaction setting, is proposed in Chapter III. The proposed method does not use any approximations such as a D.C. power flow, avoiding method-induced inaccuracies. The power network losses are expressed in terms of individual power transactions. A transaction-loss matrix, which shows the breakdown of losses into those introduced by each individual transaction and the interactions between any two transactions, is created. The network losses can then be allocated to each transaction based on the transaction-loss matrix entries.
The conventional power flow analysis is extended in Chapter IV to combine with the transaction loss allocation. A systematic solution procedure is formed in order to adjust generation while simultaneously allocating losses to the generators designated by individual transactions.
Furthermore, Chapter V presents an Optimal Power Flow (OPF) algorithm to optimize the loss compensation if some transactions elect to purchase the loss service from the Independent System Operator (ISO) and accordingly the incurred losses are fairly allocated back to individual transactions. IEEE test systems have been used to verify the effectiveness of the proposed method.
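The transaction-loss matrix idea can be sketched with a quadratic loss expression. The B-matrix and the two transactions below are illustrative numbers, not the dissertation's IEEE test-system data, and the exact allocation of the cross terms is one reasonable choice, not necessarily the one derived in Chapter III:

```python
import numpy as np

# Hypothetical 3-bus B-matrix of a quadratic loss formula (symmetric)
B = np.array([[0.020, 0.005, 0.000],
              [0.005, 0.030, 0.010],
              [0.000, 0.010, 0.025]])

# Per-unit injection vectors of two bilateral transactions (illustrative)
T1 = np.array([1.0, -1.0, 0.0])   # 1.0 pu from bus 1 to bus 2
T2 = np.array([0.0, 0.5, -0.5])   # 0.5 pu from bus 2 to bus 3
transactions = [T1, T2]

def transaction_loss_matrix(transactions, B):
    """Entry (i, j) is T_i^T B T_j: diagonal entries are each transaction's
    own loss, off-diagonal entries the interaction between two transactions."""
    k = len(transactions)
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            M[i, j] = transactions[i] @ B @ transactions[j]
    return M

M = transaction_loss_matrix(transactions, B)
total_loss = sum(transactions) @ B @ sum(transactions)
# Row sums give each transaction its own loss plus half of each interaction
# term (M is symmetric), so the allocations add up to the total loss exactly.
alloc = M.sum(axis=1)
print(M.sum(), total_loss)  # both values are the total network loss
```

Because the matrix entries sum exactly to the total loss, the separation among transactions is "natural" in the sense the abstract describes: no residual loss is left over and no D.C.-flow approximation is needed for the bookkeeping itself.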
|
50 |
Werbung als Unterhaltung : wie Branded Entertainment und Advertainment Werbung mit Unterhaltung verschmelzen [Advertising as entertainment: how branded entertainment and advertainment merge advertising with entertainment] / Schmalz, Jan Sebastian. January 2007 (has links) (PDF)
Universität, Mag.-Arb.--Münster.
|