1. Expert vs. Novice: Problem Decomposition/Recomposition in Engineering Design. Ting, Song. 01 May 2014.
The purpose of this research was to investigate differences in the use of problem decomposition and problem recomposition among dyads of engineering experts, dyads of engineering seniors, and dyads of engineering freshmen. Fifty participants took part in this study: ten were engineering design experts, 20 were engineering seniors, and 20 were engineering freshmen. Participants worked in dyads to complete an engineering design challenge within an hour. The entire design process was video and audio recorded. After the design session, each dyad participated in a group interview.
This study used protocol analysis as its methodology. Video and audio data were transcribed, segmented, and coded. Two coding systems, the FBS (Function-Behavior-Structure) ontology and "levels of the problem", were used. A series of statistical techniques was used to analyze the data. Interview data and participants' design sketches served as supplemental data to help answer the research questions.
By analyzing the quantitative and qualitative data, it was found that students used less problem decomposition and problem recomposition than engineering experts in engineering design. This result implies that engineering education should place more importance on teaching problem decomposition and problem recomposition. Students were also found to spend less cognitive effort than engineering experts when considering the problem as a whole and the interactions between subsystems, and more cognitive effort when considering details of subsystems. These results show that students tended to use depth-first decomposition while experts tended to use breadth-first decomposition in engineering design. The use of Function (F), Behavior (B), and Structure (S) among engineering experts, engineering seniors, and engineering freshmen was compared on three levels: Level 1 represents considering the problem as an integral whole, Level 2 represents considering interactions between subsystems, and Level 3 represents considering details of subsystems. The results showed that students used more S on Levels 1 and 3 but less F on Level 1 than engineering experts. These results imply that the engineering curriculum should improve the teaching of problem definition in engineering design, because students need to understand the problem before solving it.
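As an illustration only, the hedged Python sketch below shows one way the coded protocol data described above could be tallied, computing the share of Function, Behavior, and Structure codes at each problem level; the segment format, code labels, and function name are hypothetical and not taken from the study.

from collections import Counter

def fbs_proportions_by_level(segments):
    """Tally the share of F/B/S codes at each problem level.

    `segments` is a list of (fbs_code, level) pairs, e.g. ("F", 1),
    produced by coding transcribed design-session utterances.
    Returns {level: {code: proportion}}.
    """
    counts = {}
    for code, level in segments:
        counts.setdefault(level, Counter())[code] += 1
    return {
        level: {code: n / sum(c.values()) for code, n in c.items()}
        for level, c in counts.items()
    }

# Hypothetical coded segments from one dyad's design session.
dyad_segments = [("F", 1), ("S", 1), ("B", 2), ("S", 3), ("S", 3), ("F", 2)]
print(fbs_proportions_by_level(dyad_segments))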
2. A Symbiotic Bid-Based Framework for Problem Decomposition using Genetic Programming. Lichodzijewski, Peter. 22 February 2011.
This thesis investigates the use of symbiosis as an evolutionary metaphor for problem decomposition using Genetic Programming. It begins by drawing a connection between lateral problem decomposition, in which peers with similar capabilities coordinate their actions, and vertical problem decomposition, whereby solution subcomponents are organized into increasingly complex units of organization. The two types of problem decomposition are associated with context learning and layered learning, respectively. The thesis then proposes the Symbiotic Bid-Based framework, modeled after a three-stage process of symbiosis abstracted from biological evolution. As such, it is argued, the approach has the capacity for both types of problem decomposition.
Three principles capture the essence of the proposed framework. First, a bid-based approach to context learning is used to separate the issues of 'what to do' and 'when to do it'. Whereas the former refers to problem-specific actions, e.g., class label predictions, the latter refers to a bidding behaviour that identifies a set of problem conditions. In this work, Genetic Programming is used to evolve the bids, casting the method in a non-traditional role, as programs no longer represent complete solutions. Second, the proposed framework relies on symbiosis as the primary mechanism of inheritance driving evolution, in contrast to the crossover operator often encountered in Evolutionary Computation. Under this evolutionary metaphor, a set of symbionts, each representing a solution subcomponent in terms of a bid-action pair, is compartmentalized inside a host. Communication between symbionts is realized through their collective bidding behaviour; thus, their cooperation is directly supported by the bid-based approach to context learning. Third, assuming that challenging tasks where problem decomposition is likely to play a key role will often involve large state spaces, the proposed framework includes a dynamic evaluation function that explicitly models the interaction between candidate solutions and training cases. As such, the computational overhead incurred during training under the proposed framework does not depend on the size of the problem state space.
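To make the bid-based mechanism concrete, the following hedged Python sketch shows a host whose output is the action of its highest-bidding symbiont; the class names, the winner-takes-all rule, and the toy bid functions are illustrative assumptions (in the thesis the bids are evolved Genetic Programming programs, not hand-written callables).

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Symbiont:
    """One solution subcomponent: a bid (context) paired with an action."""
    bid: Callable[[Sequence[float]], float]   # stands in for an evolved GP program
    action: object                            # e.g. a class label prediction

def host_predict(symbionts: list[Symbiont], features: Sequence[float]) -> object:
    """The host's output is the action of its highest-bidding symbiont.

    'What to do' lives in the action; 'when to do it' lives in the bid.
    """
    winner = max(symbionts, key=lambda s: s.bid(features))
    return winner.action

# Hypothetical two-symbiont host for a toy binary classification task.
host = [
    Symbiont(bid=lambda x: x[0], action=0),        # bids high when feature 0 is large
    Symbiont(bid=lambda x: 1.0 - x[0], action=1),  # bids high when feature 0 is small
]
print(host_predict(host, [0.2]))  # -> 1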
An approach to model building, the Symbiotic Bid-Based framework is first evaluated on a set of real-world classification problems which include problems with multi-class labels, unbalanced distributions, and large attribute counts. The evaluation includes a comparison against Support Vector Machines and AdaBoost. Under temporal sequence learning, the proposed framework is evaluated on the truck reversal and Rubik's Cube tasks, and in the former case, it is compared with the Neuroevolution of Augmenting Topologies algorithm. Under both problems, it is demonstrated that the increased capacity for problem decomposition under the proposed approach results in improved performance, with solutions employing vertical problem decomposition under temporal sequence learning proving to be especially effective.
3. Možnosti rozvoje algoritmického myšlení s využitím mobilních dotykových zařízení / Possibilities of developing algorithmic thinking with mobile devices. Kozub, Martin. January 2019.
This thesis explores the development of algorithmic thinking at the high school level. The introduction examines algorithmic thinking as well as current practices in teaching it. A closer look follows at programming languages, programming methods, and tools which can be utilised to develop algorithmic thinking (such as robotic kits, personal computers, and mobile devices with touch screens). After analyzing the general properties of these tools, one mobile device is selected and its programming possibilities are explored in the empirical part of this paper. As part of the selected proactive action research, a set of activities and an overall process approach are first designed and then practically verified in a high school. The final results are summarised as suggestions for improving further teaching.
4. Cooperative coevolutionary mixture of experts: a neuro ensemble approach for automatic decomposition of classification problems. Nguyen, Minh Ha. Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. January 2006.
Artificial neural networks have been widely used for machine learning and optimization. A neuro ensemble is a collection of neural networks that works cooperatively on a problem. It has been shown in the literature that by combining several neural networks, the generalization of the overall system can be enhanced beyond the separate generalization abilities of the individuals. Evolutionary computation can be used to search for a suitable architecture and weights for neural networks; when evolutionary computation is used to evolve a neuro ensemble, the result is usually known as an evolutionary neuro ensemble.
In most real-world problems, we either know little about the problem or the problem is too complex to have a clear vision of how to decompose it by hand. It is therefore usually desirable to have a method that automatically decomposes a complex problem into a set of overlapping or non-overlapping sub-problems and assigns one or more specialists (i.e. experts, learning machines) to each of these sub-problems. An important feature of neuro ensembles is automatic problem decomposition: some neuro ensemble methods are able to generate networks where each individual network is specialized on a unique sub-task, such as mapping a subspace of the feature space. In real-world problems this is usually an important feature for a number of reasons: (1) it provides an understanding of the decomposition nature of a problem; (2) if a problem changes, one can replace the network associated with the subspace where the change occurs without affecting the overall ensemble; (3) if one network fails, the rest of the ensemble can still function in their subspaces; and (4) if one learns the structure of one problem, it can potentially be transferred to other similar problems.
In this thesis, I focus on classification problems and present a systematic study of a novel evolutionary neuro ensemble approach which I call cooperative coevolutionary mixture of experts (CCME). Cooperative coevolution (CC) is a branch of evolutionary computation in which individuals in different populations cooperate to solve a problem and their fitness is calculated based on their reciprocal interaction. The mixture of experts model (ME) is a neuro ensemble approach which can generate networks that are specialized on different sub-spaces of the feature space. By combining CC and ME, I obtain a powerful framework that is able to automatically form the experts and train each of them. I show that the CCME method produces competitive results in terms of generalization ability without increasing the computational cost when compared to traditional training approaches. I also propose two different mechanisms for visualizing the resultant decomposition in high-dimensional feature spaces. The first is a simple one in which data are grouped based on the specialization of each expert and a color-map of the data records is visualized. The second relies on principal component analysis to project the feature space onto lower dimensions, whereby the decision boundaries generated by each expert are visualized through convex approximations. I also investigate the regularization effect of learning by forgetting on the proposed CCME, and show that learning by forgetting helps CCME to generate neuro ensembles of low structural complexity while maintaining their generalization abilities.
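For orientation, the hedged sketch below shows the standard mixture-of-experts combination rule with a softmax gate, illustrating how gating lets each expert specialize on a sub-region of the feature space; the shapes, the linear gate, and the random parameters are illustrative assumptions and do not reproduce the specific CCME architecture or its coevolutionary training.

import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def mixture_predict(x, experts, gate_weights):
    """Standard mixture-of-experts combination for one input vector x.

    Each expert maps x to class scores; the gate assigns each expert a
    responsibility for x, so experts can specialize on sub-regions of
    the feature space. The linear expert/gate forms are placeholders.
    """
    gate = softmax(gate_weights @ x)                       # one responsibility per expert
    outputs = np.stack([softmax(W @ x) for W in experts])  # per-expert class scores
    return gate @ outputs                                  # responsibility-weighted blend

# Hypothetical tiny setup: 2 experts, 3 features, 2 classes.
rng = np.random.default_rng(0)
experts = [rng.normal(size=(2, 3)) for _ in range(2)]
gate_weights = rng.normal(size=(2, 3))
print(mixture_predict(rng.normal(size=3), experts, gate_weights))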
Overall, the thesis presents an evolutionary neuro ensemble method whereby (1) the generated ensemble generalizes well; (2) it is able to automatically decompose the classification problem; and (3) it generates networks with small architectures.
5. Preliminary design of spacecraft trajectories for missions to outer planets and small bodies. Lantukh, Demyan Vasilyevich. 17 September 2015.
Multiple gravity assist (MGA) spacecraft trajectories can be difficult to find, and solving the search problem completely is intractable. However, these trajectories have enormous benefits for missions to challenging destinations such as outer planets and primitive bodies. Techniques are presented to aid in solving this problem with a global search tool, and an additional investigation into one particular proximity operations option is discussed.
Explore is a global grid-search MGA trajectory path-solving tool. An efficient sequential tree search eliminates v∞ discontinuities and prunes trajectories. Performance indices may be applied to further prune the search, with multiple objectives handled by allowing these indices to change between trajectory segments and by pruning with a Pareto-optimality ranking. The MGA search is extended to include deep space maneuvers (DSM), v∞ leveraging transfers (VILT), and low-thrust (LT) transfers. In addition, rendezvous or nπ sequences can patch the transfers together, enabling automatic augmentation of the MGA sequence. Details of VILT segments and nπ sequences are presented: a boundary-value problem (BVP) VILT formulation using a one-dimensional root-solve enables inclusion of an efficient class of maneuvers with runtime comparable to solving ballistic transfers. Importantly, the BVP VILT also allows the calculation of velocity-aligned apsidal maneuvers (VAM), including inter-body transfers and orbit insertion maneuvers. A method for automated inclusion of nπ transfers such as resonant returns and back-flip trajectories is introduced: a BVP is posed on the v∞ sphere and solved with one or more nπ transfers, which may additionally fulfill specified science objectives. The nπ sequence BVP is implemented within the broader search, combining nπ and other transfers in the same trajectory.
To aid proximity operations around small bodies, analytical methods are used to investigate stability regions in the presence of significant solar radiation pressure (SRP) and body oblateness perturbations. The interactions of these perturbations allow for heliotropic orbits, a stable family of low-altitude orbits investigated in detail. A novel constrained double-averaging technique analytically determines inclined heliotropic orbits. This type of knowledge is uniquely valuable for small body missions where SRP and irregular body shape are very important and where target selection is often a part of the mission design.
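The "one-dimensional root-solve" pattern behind the BVP VILT formulation can be illustrated with a much simpler two-body stand-in: the hedged sketch below drives a scalar boundary condition (reaching a two-year, 2:1 resonant period from a circular heliocentric orbit) to zero by root-solving on a single tangential maneuver magnitude. The constants and the boundary condition are illustrative assumptions; the actual VILT boundary-value problem involves the leveraging-transfer geometry rather than this toy case.

import math
from scipy.optimize import brentq

MU_SUN = 1.32712440018e11      # km^3/s^2
AU = 1.495978707e8             # km
YEAR = 365.25 * 86400.0        # s

def period_after_burn(dv, r=AU, mu=MU_SUN):
    """Orbital period after a tangential burn of size dv (km/s) applied
    to a circular orbit of radius r, from the vis-viva equation."""
    v = math.sqrt(mu / r) + dv
    inv_a = 2.0 / r - v * v / mu          # reciprocal semi-major axis
    a = 1.0 / inv_a
    return 2.0 * math.pi * math.sqrt(a**3 / mu)

# Boundary condition: hit a period of exactly two years (a 2:1 resonance),
# the kind of target an Earth leveraging loop aims for.
target = 2.0 * YEAR
dv_solution = brentq(lambda dv: period_after_burn(dv) - target, 0.0, 10.0)
print(f"required tangential dv: {dv_solution:.2f} km/s")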