51
Inventory valuation: Difficulties in manufacturing companies; what & why?
Friberg, Lina, Nilsson, Sofia, Wärnbring, Sofia, January 2006
Master Thesis, School of Management and Economics, Växjö University, Advanced Concepts in Logistics Management, FED370, Spring 2006
Authors: Lina Friberg, Sofia Nilsson and Sofia Wärnbring
Tutor: Petra Andersson
Examiner: Lars-Olof Rask
Title: Inventory Valuation - difficulties; what & why?
Background: It is important to value inventories accurately in order to meet shareholder needs and demands for financial information. For manufacturing companies, inventories usually represent approximately 20 to 60 percent of total assets; hence their valuation directly affects reported profits. How assets are valued is therefore essential, but careful valuation is wasted effort if inventory record accuracy is poor.
Research Questions: Why do companies experience problems when valuing inventories? In order to answer this question, a second question must also be answered: What problems can be identified?
Purpose: The purpose of this master thesis is to describe and explain difficulties that arise when valuing inventories.
Limitations: We consider only raw material inventory, not work-in-process and finished goods inventories. Nor do we examine the companies' internal calculation systems, as we believe these are not relevant for raw material.
Method: We chose a positivistic view since we were studying our problem from a process perspective. A case study approach was suitable, as the thesis was written in the form of a project, and we combined our empirical and theoretical data through a deductive approach.
Conclusions: The problems of inventory valuation do not lie in the pricing aspect; most are connected to quantity. In particular, the daily routines were found to be insufficient, creating discrepancies between the physical quantity in inventory and the quantity displayed in the system. Company B, the larger company, was found to have fewer problems than the smaller Company A.
Continued research: We believe an overall picture of valuation is needed, including work-in-process and finished goods inventories. Moreover, deficiencies are often found not only in the processes but also in the people involved and in how they are motivated to secure accuracy. Finally, an implementation of cycle counting could be interesting to investigate.
52
A Predictive Model for Multi-Band Optical Tracking System (MBOTS) Performance
Horii, M. Michael, 10 1900
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / In the wake of sequestration, Test and Evaluation (T&E) groups across the U.S. are quickly learning to make do with less. For Department of Defense ranges and test facility bases in particular, the timing of sequestration could not be worse. Aging optical tracking systems are in dire need of replacement. What's more, the increasingly challenging missions of today require advanced technology, flexibility, and agility to support an ever-widening spectrum of scenarios, including short-range (0 − 5 km) imaging of launch events, long-range (50 km+) imaging of debris fields, directed energy testing, high-speed tracking, and look-down coverage of ground test scenarios, to name just a few. There is a pressing need for optical tracking systems that can be operated on a limited budget with minimal resources, staff, and maintenance, while simultaneously increasing throughput and data quality. Here we present a mathematical error model to predict system performance. We compare model predictions to site-acceptance test results collected from a pair of multi-band optical tracking systems (MBOTS) fielded at White Sands Missile Range. A radar serves as a point of reference to gauge system results. The calibration data and the triangulation solutions obtained during testing provide a characterization of system performance. The results suggest that the optical tracking system error model adequately predicts system performance, thereby supporting pre-mission analysis and conserving scarce resources for innovation and development of robust solutions. Along the way, we illustrate some methods of time-space-position information (TSPI) data analysis, define metrics for assessing system accuracy, and enumerate error sources impacting measurements. We conclude by describing technical challenges ahead and identifying a path forward.
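For illustration, a minimal sketch (in Python, with function names and data layout assumed rather than taken from the paper) of one of the simplest TSPI accuracy metrics alluded to above: the RMS difference between the optical system's triangulated positions and a time-aligned radar reference track.

```python
import numpy as np

def rms_position_error(optical_xyz, radar_xyz):
    """RMS 3D position error of an optical TSPI track against a radar reference.

    Both inputs are assumed to be (N, 3) arrays of time-aligned positions in a
    common coordinate frame; time alignment and coordinate transformation are
    assumed to have been done beforehand.
    """
    diff = np.asarray(optical_xyz, float) - np.asarray(radar_xyz, float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Hypothetical usage with two short, already-aligned tracks
optical = [[0.0, 0.0, 1.0], [10.0, 0.1, 1.2], [20.0, -0.1, 1.1]]
radar = [[0.0, 0.0, 1.0], [10.0, 0.0, 1.0], [20.0, 0.0, 1.0]]
print(rms_position_error(optical, radar))
```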
53
Implementing a New Dense Symmetric Eigensolver on Multicore Systems
Sukkari, Dalal E., 07 1900
We present original advanced architecture implementations of the QDWHeig algorithm for solving dense symmetric eigenproblems. The algorithm (Y. Nakatsukasa and N. J. Higham, 2012) performs a spectral divide-and-conquer, which recursively divides the matrix into smaller submatrices by finding an invariant subspace for a subset of the spectrum. The main contribution of this thesis is to enhance the performance of the QDWHeig algorithm by relying on high-performance kernels from PLASMA [1] and LAPACK [2]. We demonstrate the quality of the eigenpairs computed with the QDWHeig algorithm for many matrix types with different eigenvalue clustering. We then implement QDWHeig using kernels from LAPACK and PLASMA, and compare its performance against other divide-and-conquer symmetric eigensolvers. The main part of QDWHeig is computing a polar decomposition. We introduce mixed precision to enhance the performance of the polar decomposition step. Our evaluation considers speed and accuracy of the computed eigenvalues. Some applications require finding only a subspectrum of the eigenvalues; therefore we modify the algorithm to find the eigenpairs in a given interval of interest. An experimental study shows significant performance improvement for our algorithm using mixed precision and PLASMA routines.
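For illustration, a minimal sketch of the spectral divide-and-conquer idea behind QDWHeig, written with NumPy/SciPy rather than the PLASMA/LAPACK kernels used in the thesis; the shift choice, the subspace extraction via pivoted QR, and the base-case solver are simplifying assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.linalg import polar, qr

def qdwh_eig_sketch(A, base_size=32):
    """Spectral divide-and-conquer for a symmetric matrix A (illustrative only)."""
    n = A.shape[0]
    if n <= base_size:                      # small problem: fall back to LAPACK's eigh
        return np.linalg.eigh(A)

    sigma = np.median(np.diag(A))           # crude splitting point for the spectrum
    U, _ = polar(A - sigma * np.eye(n))     # unitary polar factor of the shifted matrix
    P = 0.5 * (U + np.eye(n))               # projector onto eigenvalues > sigma
    k = int(round(np.trace(P)))             # dimension of that invariant subspace
    if k == 0 or k == n:                    # shift failed to split the spectrum
        return np.linalg.eigh(A)

    Q, _, _ = qr(P, pivoting=True)          # pivoted QR: first k columns span range(P)
    V1, V2 = Q[:, :k], Q[:, k:]

    w1, X1 = qdwh_eig_sketch(V1.T @ A @ V1, base_size)   # recurse on each block
    w2, X2 = qdwh_eig_sketch(V2.T @ A @ V2, base_size)

    w = np.concatenate([w1, w2])
    X = np.hstack([V1 @ X1, V2 @ X2])       # back-transform eigenvectors
    order = np.argsort(w)
    return w[order], X[:, order]

# Quick check against NumPy's symmetric eigensolver
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2
w, X = qdwh_eig_sketch(A)
print(np.max(np.abs(w - np.linalg.eigvalsh(A))))
```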
54
Interaction, Meaning-Making, and Accuracy in Synchronous CMC Discussion: The Experiences of a University-Level Intermediate French Class
Jurkowitz, Lisa Amy, January 2008
A primary goal of foreign language instruction today is to increase opportunities for authentic communication among students. One way to accomplish this is through synchronous computer-mediated classroom discussion (CMCD). While this electronic medium is highly interactive and beneficial for second language acquisition (SLA) on many levels, studies have noted that learner output in CMCD is often inaccurate. In order to heighten students' attention to features of the target language (TL), SLA research suggests integrating a focus on form (FonF) within meaning-based activities. In the CMCD literature, however, FonF has not been widely treated. The current study addresses this gap by documenting the linguistic and interactional features present in intermediate, university-level French students' synchronous discussions. Furthermore, students' perceptions of their general experience with CMCD are qualitatively examined. In this study, students participated in CMCD once a week for 16 weeks. Discussion prompts encouraged them to use their French meaningfully to communicate with each other while paying attention to accuracy. To make form salient, students set pre-chat language goals; their transcripts were graded on both content and accuracy; they received whole-class and personalized feedback on their transcripts; and they corrected a percentage of their errors. Results show that balancing the concurrent pressures of form and content was challenging for the students. Likely determined by their proficiency level as well as by the medium of CMCD itself, students produced mainly short and simple messages in the present tense, used an average range of vocabulary, and wrote with variable grammatical accuracy. As for being accountable for their language usage, students responded very well. Most importantly, focusing on form was not found to be incompatible with students' ability to engage in rich, meaningful, and enjoyable communication. Even while focusing on accuracy, students shared their opinions and aspects of their personal lives and remained in the TL. Moreover, they used French for a range of social, strategic and interactional functions. Students also reported the overall experience as highly motivating and rewarding. These findings point to CMCD as a valuable means of increasing authentic classroom communication and indicate that attention to form need not be sacrificed in the process.
55
Analysis of Functional MRI for Presurgical Mapping: Reproducibility, Automated Thresholds, and Diagnostic Accuracy
Stevens, Tynan, 27 August 2010
Examination of functional brain anatomy is a crucial step in the process of surgical removal of many brain tumors. Functional magnetic resonance imaging (fMRI) is a promising technology capable of mapping brain function non-invasively. To be successfully applied to presurgical mapping, there are questions of diagnostic accuracy that remain to be addressed.
One of the greatest difficulties in implementing fMRI is the need to define an activation threshold for producing functional maps. There is as yet no consensus on the best approach to this problem, and a priori statistical approaches are generally considered insufficient because they are not specific to individual patient data. Additionally, low signal-to-noise ratio and sensitivity to magnetic susceptibility effects combine to make the production of activation maps technically demanding. This contributes to a wide range of estimates of reproducibility and validity for fMRI, as the results are sensitive to changes in acquisition and processing strategies.
Test-retest fMRI at the individual level, together with receiver operating characteristic (ROC) analysis of the results, can address both of these concerns simultaneously. In this work, it is shown that the area under the ROC curve (AUC) can be used as an indicator of reproducibility, and that this depends on the image thresholds used. Production of AUC profiles can thus be used to optimize the selection of individual thresholds on the basis of detecting stable activation patterns, rather than a priori significance levels.
The ROC analysis framework developed provides a powerful tool for simultaneous control of protocol reproducibility and data driven threshold selection, at the individual level. This tool can be used to guide optimal acquisition and processing strategies, and as part of a quality assurance program for implementing presurgical fMRI.
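A minimal sketch (hypothetical NumPy code, not the thesis's actual pipeline) of how a test-retest pair of statistic maps can be turned into an AUC-versus-threshold profile: one run's suprathreshold map serves as the reference, and the other run's statistics are swept to form the ROC curve at each candidate threshold.

```python
import numpy as np

def auc_profile(stat_run1, stat_run2, thresholds):
    """Test-retest reproducibility profile (illustrative sketch).

    For each candidate threshold, run 1's suprathreshold voxels are taken as
    the reference activation map, and the ROC area is computed from run 2's
    continuous statistic map.  Ties in the statistics are ignored.
    """
    aucs = []
    for t in thresholds:
        truth = stat_run1.ravel() > t            # reference map from run 1
        score = stat_run2.ravel()                # continuous statistic from run 2
        if truth.all() or not truth.any():       # ROC undefined without both classes
            aucs.append(np.nan)
            continue
        # Rank-based AUC (equivalent to the Mann-Whitney U statistic)
        order = np.argsort(score)
        ranks = np.empty_like(order, dtype=float)
        ranks[order] = np.arange(1, len(score) + 1)
        n_pos, n_neg = truth.sum(), (~truth).sum()
        auc = (ranks[truth].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
        aucs.append(auc)
    return np.array(aucs)

# Hypothetical usage: pick the threshold whose activation pattern is most
# reproducible across the two runs (tmap_run1 and tmap_run2 assumed given).
# thresholds = np.linspace(1.0, 6.0, 26)
# aucs = auc_profile(tmap_run1, tmap_run2, thresholds)
# best_t = thresholds[np.nanargmax(aucs)]
```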
56
The effects of attentional focus on performance, neurophysiological activity and kinematics in a golf putting task
Pelleck, Valerie, 09 January 2015
Impaired performance while executing a motor task is attributed to a disruption of normal automatic processes when an internal focus of attention is used (Wulf, McNevin, & Shea, 2001). When an external focus of attention is adopted, automaticity is not constrained and improved performance is noted. What remains unclear is whether the specificity of internally focused task instructions may impact task performance. In the present study, behavioural, kinematic and neurophysiological outcome measures assessed the implications of changing attentional focus for novice and skilled performers in a golf putting task. Findings provided evidence that when novice participants used an internal focus of attention related to task execution, accuracy, kinematics of the putter, and variability of EMG activity in the upper extremity were all adversely affected as task difficulty increased. Instructions which were internal but anatomically distal to the primary movement during the task appeared to have an effect similar to an external focus of attention and did not adversely affect outcomes.
57
Drawing Accuracy, Quality and Expertise
Carson, Linda Christine, January 2012
Drawing from a still life is a complex visuomotor task. Nevertheless, experts depict three-dimensional subjects convincingly with two-dimensional images. Drawing research has previously been limited by its general dependence on qualitative assessment of drawings by human critics and on retrospective self-report of expertise by drawers. Accuracy measures cannot hope to encompass all the properties of “goodness” in a drawing, but this thesis will show that they are consistent with the expertise of the drawers and with the quality ratings of human critics, that they are robust enough to support analysis of ecologically valid drawing tasks from complex three-dimensional stimuli, and that they are sensitive enough to study global and local properties of drawings.
Drawing expertise may depend to some extent on more accurate internal models of 3D space. To explore this possibility, we had adults with a range of drawing experience draw a still life. We measured the angles at intersecting edges in the drawings to calculate each person's mean percentage magnitude error across angles in the still life. This gave a continuous, objective measure of drawing accuracy which correlated well with years of art experience. Participants also made perceptual judgements of still lifes, both from direct observation and from an imagined side view. A conventional mental rotation task failed to differentiate drawing expertise. However, those who drew angles more accurately were also significantly better judges of slant, i.e., the pitch of edges in the still life. Those with the most drawing experience were significantly better judges of spatial extent, i.e., which landmarks were leftmost, rightmost, nearest, farthest, etc. The ability to visualize in three dimensions the orientation and relationships of components of a still life is related to drawing accuracy and expertise.
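A minimal sketch of the angle-based accuracy score described above, assuming the drawn and reference angles have already been measured and paired at corresponding edge intersections (the pairing step is not shown).

```python
import numpy as np

def mean_percentage_angle_error(drawn_angles_deg, reference_angles_deg):
    """Mean percentage magnitude error across corresponding angles (sketch).

    Both inputs are arrays of angles in degrees at matched edge intersections;
    matching drawn angles to still-life angles is assumed to be done beforehand.
    """
    drawn = np.asarray(drawn_angles_deg, dtype=float)
    ref = np.asarray(reference_angles_deg, dtype=float)
    return float(np.mean(np.abs(drawn - ref) / ref) * 100.0)

# Example with three hypothetical matched angles
print(mean_percentage_angle_error([88.0, 42.0, 121.0], [90.0, 45.0, 115.0]))
```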
In our second study, we set out to extend our understanding of drawing accuracy and to develop measures that would support more complex research questions about both drawing and visual perception. We developed and applied novel objective geometric measures of accuracy to analyze a perspective drawing task. We measured the deformation of shapes in drawings relative to the ground truth of a reference photograph and validated these measures by showing that they discriminate appropriately between experts and novices. On all measures (orientation, proportionality, scale and position of shapes) experts outperformed novices. However, error is not uniform across the image. Participants were better at capturing the proportions and positions of objects (the “positive space”) than of the spaces between those objects (the “negative space”) and worse at orienting those objects than shapes in the negative space, but scale error did not differ significantly between positive and negative space. We have demonstrated that objective geometric measures of drawing accuracy are consistent with expertise and that they can be applied to new levels of analysis, not merely to support the conventional wisdom of art educators but to develop new, evidence-based means of training this fundamental skill.
Most or all prior research into drawing was based on human ratings of drawing quality, but we cannot take for granted that the “goodness” of a drawing is related to its accuracy. In order to determine whether our objective measures of accuracy are consistent with drawing quality, we invited more than one hundred participants to grade the quality of all of the drawings we had collected and measured. We showed participants photographs of the still lifes on which the drawings were based and asked them to grade the quality of each drawing on a scale from 1 (“Poor”) to 10 (“Excellent”). People's quality ratings were consistent with one another. People without drawing experience rated drawings slightly more highly than the drawing experts did, but the ratings of both groups correlated well. As we predicted, the more drawing experience the artist had, the more highly rated the drawing was, and the more accurate the drawing was, the more highly rated it was. Furthermore, scaling error (but not proportionality, orientation or position) also predicted drawing quality. In perspective drawing, accuracy—as measured by angle error or polygon error—is related to drawing quality.
If drawing practice strengthens an artist's model of 3D space, we would expect the three-dimensionality of drawings to be disrupted by damage to the dorsal stream or the connection between the dorsal and ventral streams. A former illustrator and animator, DM, who had suffered a right hemisphere stroke and presented with spatial neglect, performed modified versions of the angle judgement, spatial judgement and indirect drawing tasks of our second study. Despite his previous experience, he showed weaknesses in his mental model of 3D space, weaknesses that were not evident in his drawings before the stroke.
Taken together, these studies have developed and validated two objective measures of drawing accuracy that both capture expert/novice differences well and provide superior measures when contrasted with self-reported expertise. The performance of a single patient with neglect highlights the potential involvement of the dorsal stream in drawing. The novel quantitative measures developed here allow hypotheses concerning the cognitive and neural mechanisms that support the complex skill of drawing to be formulated and objectively tested.
58
Summation-by-Parts Operators for High Order Finite Difference Methods
Mattsson, Ken, January 2003
High order accurate finite difference methods for hyperbolic and parabolic initial boundary value problems (IBVPs) are considered. Particular focus is on time-dependent wave propagation problems in complex domains. Typical applications are acoustic and electromagnetic wave propagation and fluid dynamics. To solve such problems efficiently, a strictly stable, high order accurate method is required. Our recipe to obtain such schemes is to: i) approximate the (first and second) derivatives of the IBVPs with central finite difference operators that satisfy a summation-by-parts (SBP) formula; ii) use specific procedures for implementation of boundary conditions that preserve the SBP property; iii) add artificial dissipation; iv) employ a multi-block structure. Stable schemes for weakly nonlinear IBVPs require artificial dissipation to absorb the energy of the unresolved modes. This led to the construction of accurate and efficient artificial dissipation operators of SBP type that preserve the energy and error estimate of the original problem. To solve problems on complex geometries, the computational domain is broken up into a number of smooth and structured meshes in a multi-block fashion. A stable and high order accurate approximation is obtained by discretizing each subdomain using SBP operators and using the Simultaneous Approximation Term (SAT) procedure for both the (external) boundary and the (internal) interface conditions. Steady and transient aerodynamic calculations around an airfoil were performed, where the first-derivative SBP operators and the new artificial dissipation operators were combined to construct high order accurate upwind schemes. The computations showed that for time-dependent problems and fine structures, high order methods are necessary to compute the solution accurately on reasonably fine grids. The construction of high order accurate SBP operators for the second derivative is one of the considerations in this thesis. It was shown that the second-derivative operators could be closed with two orders less accuracy at the boundaries and still yield the design order of accuracy, if an energy estimate could be obtained.
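As an illustration of the SBP idea in step i), here is a sketch of the classical second-order SBP first-derivative operator D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1); the thesis constructs higher-order operators and the SAT boundary closures, which are not shown here.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order SBP first-derivative operator D = H^{-1} Q on n grid points.

    H is a diagonal norm (quadrature) matrix and Q satisfies Q + Q^T = B with
    B = diag(-1, 0, ..., 0, 1), so the discrete energy method mimics
    integration by parts.  This is the classical second-order operator only.
    """
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0

    Q = np.zeros((n, n))
    idx = np.arange(n - 1)
    Q[idx, idx + 1] = 0.5            # central difference stencil, skew-symmetric part
    Q[idx + 1, idx] = -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5   # boundary closure

    return np.linalg.inv(H) @ Q, H, Q

# Verify the SBP property Q + Q^T = diag(-1, 0, ..., 0, 1)
n, h = 11, 0.1
D, H, Q = sbp_first_derivative(n, h)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))

# Differentiate a smooth function on [0, 1]: second order inside, first order at boundaries
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))
```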
59
Accurate and robust algorithms for microarray data classification
Hu, Hong, January 2008
Microarray data classification is used primarily to predict unseen data using a model built on categorized existing microarray data. One of the major challenges is that microarray data contains a large number of genes with a small number of samples. This high dimensionality problem has prevented many existing classification methods from directly dealing with this type of data. Moreover, the small number of samples increases the overfitting problem of classification, leading to lower classification accuracy. Another major challenge is the uncertainty of microarray data quality. Microarray data contains various, and quite often high, levels of noise, and these noisy data, on top of the high dimensionality problem, lead to unreliable and low-accuracy analysis. Most current classification methods are not robust enough to handle this type of data properly. Our research focuses on accuracy and on noise resistance, or robustness; our approach is to design a robust classification method for microarray data classification. An algorithm called diversified multiple decision trees (DMDT) is proposed, which makes use of a set of unique trees in the decision committee. The DMDT method increases the diversity of ensemble committees and therefore enhances accuracy by avoiding overlapping genes among alternative trees. Some strategies to eliminate noisy data have also been examined. Our method ensures no overlapping genes among alternative trees in an ensemble committee, so a noisy gene included in the ensemble committee can affect one tree only; other trees in the committee are not affected at all. This design increases the robustness of microarray classification in terms of resistance to noisy data, and therefore reduces the instability caused by overlapping genes in current ensemble methods. The effectiveness of gene selection methods for improving the performance of microarray classification methods is also discussed. We conclude that the proposed DMDT method substantially outperforms other well-known ensemble methods, such as Bagging, Boosting and Random Forests, in terms of accuracy and robustness. DMDT is more tolerant to noise than Cascading-and-Sharing trees (CS4), particularly with increasing levels of noise in the data. The results also indicate that some classification methods are insensitive to gene selection, while others depend on particular gene selection methods to improve their classification performance.
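A minimal sketch of the committee-building idea described above (disjoint gene sets across trees), using scikit-learn decision trees as stand-ins; this illustrates the principle that a noisy gene can affect at most one committee member, but it is not the DMDT algorithm itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_disjoint_tree_ensemble(X, y, n_trees=5, random_state=0):
    """Ensemble of decision trees with non-overlapping features (illustrative).

    Each committee member is built only on genes not used by any earlier tree,
    so a noisy gene can influence at most one tree in the committee.
    """
    available = np.arange(X.shape[1])            # pool of gene indices still unused
    ensemble = []
    for _ in range(n_trees):
        if available.size == 0:
            break
        tree = DecisionTreeClassifier(random_state=random_state)
        tree.fit(X[:, available], y)
        ensemble.append((tree, available.copy()))
        used_local = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
        available = np.delete(available, used_local)   # remove used genes from the pool
    return ensemble

def predict_majority(ensemble, X):
    """Majority vote over the committee (assumes integer class labels)."""
    votes = np.array([tree.predict(X[:, cols]) for tree, cols in ensemble])
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

# Hypothetical usage on a small synthetic dataset
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))               # 60 samples, 500 "genes"
y = (X[:, 0] + X[:, 1] > 0).astype(int)
ens = fit_disjoint_tree_ensemble(X, y, n_trees=5)
print((predict_majority(ens, X) == y).mean())
```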
60
Analysis of the positional accuracy of linear features
Lawford, Geoffrey John, Unknown Date
Although the positional accuracy of spatial data has long been of fundamental importance in GIS, it is still largely unknown for linear features. This is compromising the ability of GIS practitioners to undertake accurate geographic analysis and hindering GIS in fulfilling its potential as a credible and reliable tool. As early as 1987 the US National Center for Geographic Information and Analysis identified accuracy as one of the key elements of successful GIS implementation. Yet two decades later, while there is a large body of geodetic literature addressing the positional accuracy of point features, there is little research addressing the positional accuracy of linear features, and still no accepted accuracy model for linear features. It has not helped that national map and data accuracy standards continue to define accuracy only in terms of “well-defined points”. This research aims to address these shortcomings by exploring the effect on linear feature positional accuracy of feature type, complexity, segment length, vertex proximity and e-scale, that is, the scale of the paper map from which the data were originally captured or to which they are customised for output. / The research begins with a review of the development of map and data accuracy standards, and a review of existing research into the positional accuracy of linear features. A geographically sensible error model for linear features using point matching is then developed and a case study undertaken. Features of five types, at five e-scales, are selected from commonly used, well-regarded Australian topographic datasets, and tailored for use in the case study. Wavelet techniques are used to classify the case study features into sections based on their complexity. Then, using the error model, half a million offsets and summary statistics are generated that shed light on the relationships between positional accuracy and e-scale, feature type, complexity, segment length, and vertex proximity. Finally, auto-regressive time series modelling and moving block bootstrap analysis are used to correct the summary statistics for correlation. / The main findings are as follows. First, metadata for the tested datasets significantly underestimates the positional accuracy of the data. Second, positional accuracy varies with e-scale but not, as might be expected, in a linear fashion. Third, positional accuracy varies with feature type, but not as the rules of generalisation suggest. Fourth, complex features lose accuracy faster than less complex features as e-scale is reduced. Fifth, the more complex a real-world feature, the worse its positional accuracy when mapped. Finally, accuracy mid-segment is greater than accuracy end-segment.
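A minimal sketch of a point-matching offset computation of the kind underlying such an error model: each vertex of a tested line feature is matched to the nearest point on the reference feature, and summary statistics are taken over the offsets. The matching rule and data layout here are simplifications assumed for illustration, not the thesis's geographically sensible model.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Shortest distance from 2D point p to the segment from a to b."""
    ab, ap = b - a, p - a
    denom = np.dot(ab, ab)
    t = np.clip(np.dot(ap, ab) / denom, 0.0, 1.0) if denom > 0 else 0.0
    return np.linalg.norm(ap - t * ab)

def line_offsets(test_vertices, reference_vertices):
    """Offset of each test vertex from the reference polyline (illustrative)."""
    test = np.asarray(test_vertices, dtype=float)
    ref = np.asarray(reference_vertices, dtype=float)
    offsets = []
    for p in test:
        d = min(point_to_segment_distance(p, ref[i], ref[i + 1])
                for i in range(len(ref) - 1))
        offsets.append(d)
    return np.array(offsets)

# Summary statistics over the offsets, e.g. mean and RMS positional error
offs = line_offsets([[0.0, 0.1], [1.0, -0.2], [2.0, 0.05]],
                    [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(offs.mean(), np.sqrt((offs ** 2).mean()))
```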