681

Integrated Airline Operations: Schedule Design, Fleet Assignment, Aircraft Routing, and Crew Scheduling

Bae, Ki-Hwan 05 January 2011 (has links)
Air transportation offers both passenger and freight services that are essential for economic growth and development. In a highly competitive environment, airline companies have to control their operating costs by managing their flights, aircraft, and crews effectively. This motivates the extensive use of analytical techniques to solve complex problems related to airline operations planning, which includes schedule design, fleet assignment, aircraft routing, and crew scheduling. The initial problem addressed by airlines is that of schedule design, whereby a set of flights having specific origin and destination cities as well as departure and arrival times is determined. Then, a fleet assignment problem is solved to assign an aircraft type to each flight so as to maximize anticipated profits. This enables a decomposition of subsequent problems according to the different aircraft types belonging to a common family, for each of which an aircraft routing problem and a crew scheduling or pairing problem are solved. Here, in the aircraft routing problem, a flight sequence or route is built for each individual aircraft so as to cover each flight exactly once at a minimum cost while satisfying maintenance requirements. Finally, in the crew scheduling or pairing optimization problem, a minimum cost set of crew rotations or pairings is constructed such that every flight is assigned a qualified crew and that work rules and collective agreements are satisfied. In practice, most airline companies solve these problems in a sequential manner to plan their operations, although recently, an increasing effort is being made to develop novel approaches for integrating some of the airline operations planning problems while retaining tractability. This dissertation formulates and analyzes three different models, each of which examines a composition of certain pertinent airline operational planning problems. A comprehensive fourth model is also proposed, but is relegated for future research. In the first model, we integrate fleet assignment and schedule design by simultaneously considering optional flight legs to select along with the assignment of aircraft types to all scheduled legs. In addition, we consider itinerary-based demands pertaining to multiple fare-classes. A polyhedral analysis of the proposed mixed-integer programming model is used to derive several classes of valid inequalities for tightening its representation. Solution approaches are developed by applying Benders decomposition method to the resulting lifted model, and computational experiments are conducted using real data obtained from a major U.S. airline (United Airlines) to demonstrate the efficacy of the proposed procedures as well as the benefits of integration. A comparison of the experimental results obtained for the basic integrated model and for its different enhanced representations reveals that the best modeling strategy among those tested is the one that utilizes a variety of five types of valid inequalities for moderately sized problems, and further implements a Benders decomposition approach for relatively larger problems. In addition, when a heuristic sequential fixing step is incorporated within the algorithm for even larger sized problems, the computational results demonstrate a less than 2% deterioration in solution quality, while reducing the effort by about 21%. 
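As a rough illustration of the kind of fleet assignment core that underlies such an integrated model, the sketch below sets up a tiny mixed-integer program with optional legs in PuLP. All legs, fleet types, profit figures, and fleet counts are hypothetical, and real formulations enforce aircraft balance on a time-space network rather than the crude fleet-count surrogate used here.

```python
# Minimal sketch of an integrated schedule design / fleet assignment core,
# using hypothetical data; the dissertation's actual model also covers
# itinerary-based, multi-fare-class demand and stronger valid inequalities.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

legs = ["L1", "L2", "L3"]          # L3 is an optional leg
optional = {"L3"}
fleets = ["A320", "B737"]
profit = {("L1", "A320"): 9.0, ("L1", "B737"): 8.5,   # hypothetical values
          ("L2", "A320"): 6.0, ("L2", "B737"): 7.2,
          ("L3", "A320"): 3.1, ("L3", "B737"): 2.8}
available = {"A320": 2, "B737": 1}                    # fleet sizes

m = LpProblem("fleet_assignment", LpMaximize)
x = LpVariable.dicts("assign", (legs, fleets), cat=LpBinary)

# Maximize anticipated profit over all leg/fleet-type assignments
m += lpSum(profit[l, f] * x[l][f] for l in legs for f in fleets)

for l in legs:
    # mandatory legs get exactly one aircraft type; optional legs at most one
    s = lpSum(x[l][f] for f in fleets)
    m += (s <= 1) if l in optional else (s == 1)

# crude capacity surrogate; the real model uses flow-balance ("plane count")
# constraints on a time-space network instead
for f in fleets:
    m += lpSum(x[l][f] for l in legs) <= available[f]

m.solve()
for l in legs:
    for f in fleets:
        if x[l][f].value() == 1:
            print(l, "->", f)
```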
We also performed an experiment to assess the impact of integration by comparing the proposed integrated model with a sequential implementation in which the schedule design is implemented separately before the fleet assignment stage based on two alternative profit maximizing submodels. The results obtained demonstrate a clear advantage of utilizing the integrated model, yielding an 11.4% and 5.5% increase in profits in comparison with using the latter two sequential models, which translates to an increase in annual profits by about $28.3 million and $13.7 million, respectively. The second proposed model augments the first model with additional features such as flexible flight times (i.e., departure time-windows), schedule balance, and demand recapture considerations. Optional flight legs are incorporated to facilitate the construction of a profitable schedule by optimally selecting among such alternatives in concert with assigning the available aircraft fleet to all the scheduled legs. Moreover, network effects and realistic demand patterns are effectively represented by examining itinerary-based demands as well as multiple fare-classes. Allowing flexibility on the departure times of scheduled flight legs within the framework of an integrated model increases connection opportunities for passengers, hence yielding robust schedules while saving fleet assignment costs. A provision is also made for airlines to capture an adequate market share by balancing flight schedules throughout the day. Furthermore, demand recapture considerations are modeled to more realistically represent revenue realizations. For this proposed mixed-integer programming model, which integrates the schedule design and fleet assignment processes while considering flexible flight times, schedule balance, and recapture issues, along with optional legs, itinerary-based demands, and multiple fare-classes, we perform a polyhedral analysis and utilize the Reformulation-Linearization Technique in concert with suitable separation routines to generate valid inequalities for tightening the model representation. Effective solution approaches are designed by applying Benders decomposition method to the resulting tightened model, and computational results are presented to demonstrate the efficacy of the proposed procedures. Using real data obtained from United Airlines, when flight times were permitted to shift by up to 10 minutes, the estimated increase in profits was about $14.9M/year over the baseline case where only original flight legs were used. Also, the computational results indicated a 1.52% and 0.49% increase in profits, respectively, over the baseline case, while considering two levels of schedule balance restrictions, which can evidently also enhance market shares. In addition, we measured the effect of recaptured demand with respect to the parameter that penalizes switches in itineraries. Using values of the parameter that reflect 1, 50, 100, or 200 dollars per switched passenger, this yielded increases in recaptured demand that induced additional profits of 2.10%, 2.09%, 2.02%, and 1.92%, respectively, over the baseline case. Overall, the results obtained from the two schedule balance variants of the proposed integrated model that accommodate all the features of flight retiming, schedule balance, and demand recapture simultaneously, demonstrated a clear advantage by way of $35.1 and $31.8 million increases in annual profits, respectively, over the baseline case in which none of these additional features is considered. 
In the third model, we integrate the schedule design, fleet assignment, and aircraft maintenance routing decisions, while considering optional legs, itinerary-based demands, flexible flight retimings, recapture, and multiple fare-classes. Instead of utilizing the traditional time-space network (TSN), we formulate this model based on a flight network (FN) that provides greater flexibility in accommodating integrated operational considerations. In order to consider through-flights (i.e., a sequence of flight legs served by the same aircraft), we append a set of constraints that matches aircraft assignments on certain inbound legs into a station with those on appropriate outbound legs at the same station. Through-flights can generate greater revenue because passengers are willing to pay a premium for not having to change aircraft on connecting flights, thereby reducing the possibility of delays and missed baggage. In order to tighten the model representation and reduce its complexity, we apply the Reformulation-Linearization Technique (RLT) and also generate other classes of valid inequalities. In addition, since the model possesses many equivalent feasible solutions that can be obtained by simply reindexing the aircraft of the same type that depart from the same station, we introduce a set of suitable hierarchical symmetry-breaking constraints to enhance the model solvability by distinguishing among aircraft of the same type. For the resulting large-scale augmented model formulation, we design a Benders decomposition-based solution methodology and present extensive computational results to demonstrate the efficacy of the proposed approach. We explored four different algorithmic variants, among which the best performing procedure (Algorithm A1) adopted two sequential levels of the Benders partitioning method. We then applied Algorithm A1 to perform several experiments to study the effects of different modeling features and algorithmic strategies. A summary of the results obtained is as follows. First, the case that accommodated both mandatory and optional through-flight leg pairs in the model based on their relative effects on demands and enhanced revenues achieved the most profitable strategy, with an estimated increase in expected annual profits of $2.4 million over the baseline case. Second, utilizing symmetry-breaking constraints in concert with compatible objective perturbation terms greatly enhanced problem solvability and thus promoted the detection of improved solutions, resulting in a $5.8 million increase in estimated annual profits over the baseline case. Third, in the experiment that considers recapture of spilled demand from primary itineraries to other compatible itineraries, the different penalty parameter values (100, 50, and 1 dollars per re-routed passenger) induced average respective proportions of 3.2%, 3.4%, and 3.7% in recaptured demand, resulting in additional estimated annual profits of $3.7 million, $3.8 million, and $4.0 million over the baseline case. Finally, incorporating the proposed valid inequalities within the model to tighten its representation helped reduce the computational effort by 11% on average, while achieving better solutions that yielded on average an increase in estimated annual profits of $1.4 million. In closing, we propose a fourth more comprehensive model in which the crew scheduling problem is additionally integrated with fleet assignment and aircraft routing.
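The hierarchical symmetry-breaking idea can be written compactly. One common form (an assumption here, not necessarily the dissertation's exact constraints) is:

```latex
% Let u_k \in \{0,1\} indicate whether the k-th identical aircraft of a
% given type departing a station is used (a modeling assumption for this
% sketch). Requiring usage in index order removes permutation symmetry:
\[
  u_{k} \;\le\; u_{k-1}, \qquad k = 2, \dots, K .
\]
```

Any solution that merely permutes the indices of identical aircraft then maps to a single representative, which is what makes the branch-and-bound search more tractable.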
This integration is important for airlines because crew costs are the second largest component of airline operating expenses (after fuel costs), and the assignment and routing of aircraft plus the assignment of crews are two closely interacting components of the planning process. Since crews are typically qualified to serve a single aircraft family, comprising aircraft types that share a common cockpit configuration and crew rating, the aircraft fleeting and routing decisions significantly impact the ensuing assignment of cockpit crews to flights. Therefore, it is worthwhile to investigate new models and solution approaches for the integrated fleeting, aircraft routing, and crew scheduling problem, where all of these important inter-dependent processes are handled simultaneously, and where the model can directly accommodate various work rules such as imposing a specified minimum and maximum number of flying hours for crews on any given pairing, and a minimum number of departures at a given crew base for each fleet group. However, given that the crew scheduling problem itself is highly complex because of the restrictive work rules that must be heeded while constructing viable duties and pairings, the formulated integrated model would require further manipulation and enhancements along with the design of sophisticated algorithms to render it solvable. We therefore recommend this study for future research, and we hope that the modeling, analysis, and algorithmic development and implementation work performed in this dissertation will lend methodological insights into achieving further advances along these lines. / Ph. D.
682

Using DNA markers to trace pedigrees and population substructure and identify associations between major histocompatibility regions and disease resistance in rainbow trout (Oncorhynchus mykiss)

Johnson, Nathan Allen 28 August 2007 (has links)
Examination of variation at polymorphic microsatellite loci is a widely accepted method for determining parentage and examining genetic diversity within rainbow trout (Oncorhynchus mykiss) breeding programs. Genotyping costs are considerable; therefore, we developed a single-step method of co-amplifying twelve microsatellite loci in two hexaplex reactions. The protocol is explicitly described to ensure reproducible results. I applied the protocol to samples previously analyzed at the National Center for Cool and Coldwater Aquaculture (NCCCWA) with previously reported marker sets for a comparison of results. Each marker within the multiplex system was evaluated for duplication, null alleles, physical linkage, and probability of genotyping errors. Data from four of the 12 markers were excluded from parental analysis based on these criteria. Parental assignments were compared to those of a previous study that used five independently amplified microsatellites. Percentages of progeny assigned to parents were higher using the subset of eight markers from the multiplex system than with five markers used in the previous study (98% vs. 92%). Through multiplexing, use of additional markers improved parental allocation while also improving efficiency by reducing the number of PCR reactions and genotyping runs required. I evaluated the methods further through estimation of F-statistics, pairwise genetic distances, and cluster analysis among brood-years at the NCCCWA facility. These estimates were compared to those from nine independently amplified microsatellites used in a previous study. Fst metrics calculated between brood-years showed similar values of genetic differentiation using both marker sets. Estimates of individual pairwise genetic distances were used for constructing neighbor-joining trees. Both marker sets yielded trees that showed similar subpopulation structuring and agreed with results from a model-based cluster analysis and available pedigree information. These approaches for detecting population substructure and admixture proportions within individuals are particularly useful for new breeding programs where the founders' relatedness is unknown. The 2005 NCCCWA brood-year (75 full-sib families) was challenged with Flavobacterium psychrophilum, the causative agent of bacterial coldwater disease (BCWD). The overall mortality rate was 70%, with large variation among families. Resistance to the disease was assessed by monitoring post-challenge days-to-death. Phenotypic variation and additive genetic variation were estimated using mixed models of survival analysis. The microsatellite markers used were previously isolated from BAC clones that harbor genes of interest and mapped onto the rainbow trout genetic linkage map. A general relationship between UBA gene sequence types and MH-IA-linked microsatellite alleles indicated that microsatellites mapped near or within specific major histocompatibility (MH) loci reliably mark sequence variation at MH genes. The parents and grandparents of the 2005 brood-year families were genotyped with markers linked to the four MH genomic regions (MH-IA, MH-IB, TAP1, and MH-II) to assess linkage disequilibrium (LD) between those genomic regions and resistance to BCWD. Family analysis suggested that MH-IB and MH-II markers are linked to BCWD survivability. Tests for disease association at the population level substantiated the involvement of MH-IB with disease resistance.
The impact of MH sequence variation on selective breeding for disease resistance is discussed in the context of aquaculture production. / Master of Science
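To make the parentage-assignment step concrete, the following minimal sketch checks Mendelian compatibility of an offspring with a candidate parent pair at microsatellite loci. The locus names and allele sizes are invented, and production analyses (as in the study) additionally handle null alleles, genotyping errors, and likelihood-based assignment.

```python
# Minimal sketch of exclusion-based parentage checking at microsatellite
# loci (hypothetical genotypes; real pipelines also model null alleles
# and genotyping error rates).

def compatible(offspring, dam, sire):
    """True if at every locus the offspring could inherit one allele
    from the dam and the other from the sire."""
    for locus in offspring:
        o1, o2 = offspring[locus]
        d, s = dam[locus], sire[locus]
        if not ((o1 in d and o2 in s) or (o2 in d and o1 in s)):
            return False        # Mendelian exclusion at this locus
    return True

# Genotypes: locus -> (allele1, allele2), fragment sizes in bp (made up)
kid  = {"Omy1": (120, 124), "Omy2": (88, 92)}
dam  = {"Omy1": (120, 130), "Omy2": (92, 96)}
sire = {"Omy1": (124, 124), "Omy2": (84, 88)}

print(compatible(kid, dam, sire))   # True: consistent at both loci
```

Adding loci to the panel (eight instead of five here) shrinks the chance that an unrelated pair passes this check by coincidence, which is why the multiplexed marker set assigned more progeny correctly.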
683

”Ta fram trollspöt och fixa det” : En kvalitativ intervjustudie om hur skolkuratorer hanterar utmaningar med det psykosociala uppdraget / ”Take out the magic wand and fix it” : A qualitative interview study on how school counselors handle challenges with the psychosocial assignment

Arkebrant, Anna, Salameh Sonesson, Nancy January 2023 (has links)
The aim of this essay is to study how school counselors describe and cope with the challenges of carrying the psychosocial assignment within school and what they need in order to be able to carry out their assignment. The data was collected through semi-structured interviews with school counselors and analyzed using thematic analysis and coping theory. Based on the school counselors' perspectives, the results demonstrate that the psychosocial assignment needs to include a holistic view of the pupil and that the core of the assignment is to work preventatively. The study identified challenges that school counselors face, including high workload, lack of resources and a perspective which clashes with the pedagogical perspective, thus leaving the school counselors alone in carrying out their role. To deal with these challenges, school counselors use different coping strategies such as seeking support from colleagues and through external mentoring. To enable school counselors to carry out their assignment adequately, the study emphasizes the need for trust, legitimacy and discretion.
684

Scheduling Local and Remote Memory in Cluster Computers

Serrano Gómez, Mónica 02 September 2013 (has links)
Cluster computers represent a cost-effective alternative solution to supercomputers. In these systems, it is common to constrain the memory address space of a given processor to the local motherboard. Constraining the system in this way is much cheaper than using a full-fledged shared memory implementation among motherboards. However, memory usage among motherboards may be unfairly balanced depending on the memory requirements of the applications running on each motherboard. This situation can lead to disk-swapping, which severely degrades system performance, although there may be unused memory on other motherboards. A straightforward solution is to increase the amount of available memory in each motherboard, but the cost of this solution may become prohibitive. On the other hand, remote memory access (RMA) hardware provides fast interconnects among the motherboards of a cluster computer. In recent works, this characteristic has been used to extend the addressable memory space of selected motherboards. In this work, the baseline machine uses this capability as a fast mechanism to allow the local OS to access DRAM memory installed in a remote motherboard. In this context, efficient memory scheduling becomes a major concern since main memory latencies have a strong impact on the overall execution time of the applications, given that remote memory latencies may be several orders of magnitude higher than those of local accesses. Additionally, changing the memory distribution is a slow process which may involve several motherboards, hence the memory scheduler needs to make sure that the target distribution provides better performance than the current one. This dissertation aims to address the aforementioned issues by proposing several memory scheduling policies. First, an ideal algorithm and a heuristic strategy to assign main memory from the different memory regions are presented. Additionally, a Quality of Service control mechanism has been devised in order to prevent unacceptable performance degradation for the running applications. The ideal algorithm finds the optimal memory distribution but its computational cost is prohibitive for a high number of applications. This drawback is handled by the heuristic strategy, which approximates the best local and remote memory distribution among applications at an acceptable computational cost. The previous algorithms are based on profiling.
To deal with this potential shortcoming, we focus on analytical solutions. This dissertation proposes an analytical model that estimates the execution time of a given application for a given memory distribution. This technique is used as a performance predictor that provides the input to a memory scheduler. The estimates are used by the memory scheduler to dynamically choose the optimal target memory distribution for each application running in the system in order to achieve the best overall performance. Scheduling at a higher granularity allows simpler scheduler policies. This work studies the feasibility of scheduling at OS page granularity. A conventional hardware-based block interleaving and an OS-based page interleaving have been assumed as the baseline schemes. From the comparison of the two baseline schemes, we have concluded that only the performance of some applications is significantly affected by page-based interleaving. The reasons that cause this impact on performance have been studied and have provided the basis for the design of two OS-based memory allocation policies. The first one, namely on-demand (OD), is a simple strategy that works by placing new pages in local memory until this region is full, thus benefiting from the premise that the most accessed pages are requested and allocated earlier than the least accessed ones, which improves performance. Nevertheless, in the absence of this premise for some benchmarks, OD performs worse. The second policy, namely Most-accessed in-local (Mail), is proposed to avoid this problem. / Serrano Gómez, M. (2013). Scheduling Local and Remote Memory in Cluster Computers [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31639
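The on-demand (OD) policy described above is simple enough to sketch directly; the page counts below are hypothetical and the real policy operates inside the OS memory allocator.

```python
# Minimal sketch of the on-demand (OD) placement idea: new pages go to
# local memory until it fills, then to remote memory (capacities are
# hypothetical page counts, not real machine parameters).

class ODPlacement:
    def __init__(self, local_pages, remote_pages):
        self.free = {"local": local_pages, "remote": remote_pages}
        self.where = {}                       # page id -> region

    def place(self, page):
        region = "local" if self.free["local"] > 0 else "remote"
        if self.free[region] == 0:
            raise MemoryError("out of memory: would trigger disk swapping")
        self.free[region] -= 1
        self.where[page] = region
        return region

od = ODPlacement(local_pages=3, remote_pages=2)
print([od.place(p) for p in range(5)])
# ['local', 'local', 'local', 'remote', 'remote'] -- frequently accessed
# pages benefit only if they happen to be requested first, which is the
# premise that fails for some benchmarks and motivates the Mail policy.
```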
685

A basic probability assignment methodology for unsupervised wireless intrusion detection

Ghafir, Ibrahim, Kyriakopoulos, K.G., Aparicio-Navarro, F.J., Lambotharan, S., Assadhan, B., Binsalleeh, A.H. 24 January 2020 (has links)
The broadcast nature of wireless local area networks has made them prone to several types of wireless injection attacks, such as Man-in-the-Middle (MitM) at the physical layer, deauthentication, and rogue access point attacks. The implementation of novel intrusion detection systems (IDSs) is fundamental to provide stronger protection against these wireless injection attacks. Since most attacks manifest themselves through different metrics, current IDSs should leverage a cross-layer approach to help toward improving the detection accuracy. The data fusion technique based on the Dempster–Shafer (D-S) theory has been proven to be an efficient technique to implement the cross-layer metric approach. However, the dynamic generation of the basic probability assignment (BPA) values used by D-S is still an open research problem. In this paper, we propose a novel unsupervised methodology to dynamically generate the BPA values, based on both the Gaussian and exponential probability density functions, the categorical probability mass function, and the local reachability density. Then, D-S is used to fuse the BPA values to classify whether the Wi-Fi frame is normal (i.e., non-malicious) or malicious. The proposed methodology provides a 100% true positive rate (TPR) and a 4.23% false positive rate (FPR) for the MitM attack, and a 100% TPR and a 2.44% FPR for the deauthentication attack, which confirms the efficiency of the dynamic BPA generation methodology. / Supported in part by the Gulf Science, Innovation and Knowledge Economy Programme of the U.K. Government under UK-Gulf Institutional Link Grant IL 279339985 and in part by the Engineering and Physical Sciences Research Council (EPSRC), U.K., under Grant EP/R006385/1.
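For readers unfamiliar with D-S fusion, the following minimal sketch applies Dempster's rule of combination over the frame {normal, malicious}. The per-metric BPA values are made up; the paper's contribution is precisely to generate such values dynamically from the fitted density functions and the local reachability density.

```python
# Minimal sketch of Dempster's rule over the frame {N, M} (normal,
# malicious), with "NM" as the ignorance set. The BPA numbers below are
# hypothetical, standing in for dynamically generated per-metric BPAs.

def combine(m1, m2):
    hypotheses = ("N", "M", "NM")
    raw = {h: 0.0 for h in hypotheses}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = set(a) & set(b)
            if not inter:
                conflict += pa * pb            # mass on empty intersection
            else:
                key = "".join(h for h in "NM" if h in inter)
                raw[key] += pa * pb
    k = 1.0 - conflict                          # normalization constant
    return {h: v / k for h, v in raw.items()}

bpa_metric1 = {"N": 0.10, "M": 0.70, "NM": 0.20}  # per-metric BPAs (made up)
bpa_metric2 = {"N": 0.05, "M": 0.80, "NM": 0.15}
print(combine(bpa_metric1, bpa_metric2))  # fused belief, concentrated on "M"
```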
686

Formation Path Planning for Holonomic Quadruped Robots / Vägplanering för formationer av holonomiska fyrbenta robotar

Norén, Magnus January 2024 (has links)
Formation planning and control for multi-agent robotic systems enables tasks to be completed more efficiently and robustly compared to using a single agent. Applications are found in fields such as agriculture, mining, autonomous vehicle platooning, surveillance, space exploration, etc. In this paper, a complete framework for formation path planning for holonomic ground robots in an obstacle-rich environment is proposed. The method utilizes the Fast Marching Square (FM2) path planning algorithm, and a formation-keeping approach which falls within the Leader-Follower category. Contrary to most related works, the role of leader is dynamically assigned to avoid unnecessary rotation of the formation. Furthermore, the roles of the followers are also dynamically assigned to fit the current geometry of the formation. A flexible spring-damper system prevents inter-robot collisions and helps maintain the formation shape. An obstacle avoidance step at the end of the pipeline keeps the spring forces from driving robots into obstacles. The framework is tested on a formation consisting of three Unitree Go1 quadruped robots, both in the Gazebo simulation environment and in lab experiments. The results are promising and indicate that the method is feasible, although further work is needed to adjust the role assignment for larger formations, combine the framework with Simultaneous Localization and Mapping (SLAM) and provide a more robust handling of dynamic obstacles.
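A minimal sketch of the virtual spring-damper coupling between two robots is given below; the gains and rest length are hypothetical tuning values, not those used in the thesis.

```python
# Minimal sketch of a virtual spring-damper link between two robots:
# a spring pulls toward the desired separation, a damper resists the
# relative velocity along the link. Gains/rest length are made up.
import numpy as np

K_SPRING, K_DAMP, REST_LEN = 2.0, 0.8, 1.5   # hypothetical gains and rest length

def spring_damper_force(p_i, p_j, v_i, v_j):
    """Force applied to robot i by its virtual link to robot j."""
    d = p_j - p_i
    dist = np.linalg.norm(d)
    u = d / dist                              # unit vector i -> j
    stretch = dist - REST_LEN                 # >0 too far, <0 too close
    rel_vel = np.dot(v_j - v_i, u)            # separation rate along link
    return (K_SPRING * stretch + K_DAMP * rel_vel) * u

p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # 0.5 m closer than rest
v1, v2 = np.zeros(2), np.zeros(2)
print(spring_damper_force(p1, p2, v1, v2))   # [-1. 0.]: pushes robot 1 away
```

Because the force grows with compression, the same mechanism that holds the formation shape also acts as the inter-robot collision buffer, which is why a separate obstacle-avoidance step is still needed at the end of the pipeline.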
687

The neurofunctional correlates of sentence processing: focus on difficulties of morphosyntactic processing and thematic role assignment in aphasia

Beber, Sabrina 22 July 2024 (has links)
Left hemisphere damage is a frequent cause of aphasia. Analyses of deviant linguistic behaviors provide valuable information about the functional architecture of language. Correlating specific language difficulties with damage to the brain helps shed light on the relationships between language and the neural substrate. The aim of this Ph.D. thesis is to contribute to the understanding of the neural correlates of sentence comprehension, based on behavioral and neuroimaging evidence from aphasia. A substantial amount of research based on lesion-symptom mapping has been devoted to this issue, but several questions remain to be clarified. To consider just one example, lesion-symptom mapping studies have systematically linked the posterior regions of the left hemisphere to sentence comprehension. Surprisingly, however, the same studies failed to provide similarly strong evidence for prefrontal regions, contradicting the results of previous neuropsychological investigations that clearly supported the critical role of these regions in sentence processing. To date, enough issues in sentence processing remain controversial to warrant a reconsideration of the available evidence. The present project focused on the neural correlates of the mechanisms involved in thematic role assignment and in the processing of morphosyntactic features. This is because both sets of mechanisms are critical for sentence interpretation both in comprehension and in production. The first step of the project consisted of a systematic literature review and meta-analysis of lesion-symptom investigations of sentence processing (study 1 – Chapter 1). The literature search yielded 43 studies eligible for review, of which 27 were used in the meta-analysis. The main goal was to identify the correlates of thematic role assignment and of morphosyntactic processing. Thematic role assignment errors correlated mainly with damage in the left temporo-parietal regions, and morphosyntactic errors mainly with damage in the prefrontal regions. However, careful consideration of the reviewed and meta-analyzed studies shows that conclusions are biased under several aspects. Data on thematic deficits are based almost exclusively on sentence comprehension, and data on morphosyntactic deficits on sentence production. Furthermore, even the very few studies that evaluated both impairments did so in distinct linguistic contexts, or in different response modalities. In addition, studies that focused on one set of mechanisms did not consider the possibility that performance on their dimension of interest was influenced by damage to the other. For example, studies focusing on thematic comprehension administered thematic foils, but not morphosyntactic foils. Therefore, the neurofunctional correlates emerging from the meta-analysis and the review may offer a biased and/or partial view. As a first attempt at overcoming these limitations, a lesion study on native speakers of Italian with left post-stroke aphasia was conducted (study 2 – Chapter 2) to clarify the neural substrates of morphosyntactic and thematic processes in comprehension. Experimental stimuli consisted of simple declarative, semantically reversible sentences presented in the active or passive voice. In an auditory sentence comprehension task, participants were asked to match a sentence spoken by the computer to the corresponding picture, which had to be distinguished from a thematic, a morphosyntactic, or a lexical-semantic foil.
Thirty-three left brain-damaged individuals (out of an initial sample of 70) were selected because they fared normally on lexical-semantic foils, but poorly on morphosyntactic (n=15) and/or thematic (n=18) contrasts. Voxel-based Lesion Symptom Mapping (VLSM) analyses retrieved non-overlapping substrates. Morphosyntactic difficulties were uninfluenced by sentence voice and correlated with left inferior and middle frontal damage, whereas thematic role reversals were more frequent on passives and correlated with damage to the superior and middle temporal gyrus and to the superior occipitolateral cortex. Both correlations persisted after covarying for phonological short-term memory. When response accuracy to passive vs active sentences in the presence of thematic foils was considered, portions of the angular and supramarginal gyrus were retrieved. They could provide the neural substrate for thematic reanalysis, which is critical for comprehending sentences with noncanonical word order. However interesting and strong, these results were obtained by considering just one sentence type (declaratives) and by relying on basic neuroimaging data. To go beyond these limitations, the final step of the project relied on more comprehensive behavioral analyses and more advanced neuroimaging techniques (study 3 – Chapter 3). The SCOPRO (Sentence Comprehension and PROduction) language battery was developed, which focuses on thematic and morphosyntactic processes and allows these processes to be assessed in a variety of reversible sentences in both comprehension and production. SCOPRO was administered to 50 neurotypical subjects (to assess applicability and establish cutoff levels) and 27 aphasic participants (native Italian speakers with left post-stroke aphasia). Of the latter, 21 were included in an MRI-based lesion-symptom mapping study. Results obtained in comprehension tasks were correlated with neuroimaging data (structural T1 and DWI). Lesion maps, disconnectome maps, tract disconnection probability and personalized deterministic tractography data demonstrated the involvement of grey and white matter. Thematic role reversals correlated to cortical damage in the left angular gyrus. They also correlated to cortical damage in the left supramarginal gyrus when controlling for single-word processing in a voxel-based disconnectome-symptom mapping analysis. Thematic errors were also associated with underlying white matter damage. Correlating tract disconnection probability and personalized deterministic tractography with thematic role performance implicated the left arcuate fasciculus. The posterior segment was associated with thematic role reversals, even after controlling for morphosyntactic and single-word processing. The anterior segment was linked to accuracy on thematic roles when single-word processing was used as a covariate. The long segment also correlated with the level of thematic role performance, but the correlation was no longer present when morphosyntactic performance was used as a covariate. SCOPRO can be used not only to assess language processes in a broad sense (e.g., morphosyntactic vs thematic), but also to look into more detailed issues. Contrasting accuracy on declarative and comparative sentences is an interesting case in point. Both sentence types express reversible relations, but only declaratives require thematic role mapping. Hence, contrasting results between the two could help distinguish the correlates of role mapping from those of reversibility per se.
The supramarginal gyrus was damaged in participants who fared poorly in both declaratives and comparatives but, interestingly, the aphasics with selective thematic difficulties had suffered damage to the posterior division of the middle temporal gyrus and to the angular gyrus, whereas those with selective difficulties on comparatives presented with lesions in the parietal and central opercular cortex. Clearly, these results are preliminary and require further investigation. It is unanimously accepted that sentence processing involves a large-scale network including frontal, temporal and parietal cortices and the underlying white matter pathways. The main contribution of the present project is that it allows articulating more detailed hypotheses on the role played by some components of the network during sentence comprehension. Results tie left frontal regions to morphosyntactic processing, posterior temporal regions to the retrieval of verb argument structure, and a posterior-superior parietal area to thematic reanalysis. Preliminary observations also suggest that different neural substrates could be involved in processing reversibility as such and when more specifically implemented in thematic roles. Further studies exploiting detailed behavioral tools like the SCOPRO battery and sophisticated neuroimaging techniques in larger samples will lead to a better understanding of language functions and their processing in the brain.
688

Computational assessing model based on performance and dynamic assignment of curriculum contents

Mínguez Aroca, Francisco Dimas 14 March 2016 (has links)
The Bologna process encourages the transition of higher education from knowledge possession to understanding performances, and from a teaching-centered to a student-centered approach via learning outcomes. A student-centered evaluation means that students actively analyze their own learning against concrete criteria on development levels, in an environment where they obtain immediate, frequent, and formative feedback. The rationale of this dissertation consists in introducing the execution of disparate sets of activities into the assessment process in order to enrich the whole procedure while keeping it close to the learning process. Continuous assessment seems to be the most accurate means of executing the assessment process, taking into account that competencies are achieved by executing activities. The evaluation process is implemented through a discrete number of measurement points called "moments of evaluation", which consist of a set of activities necessary for the development of the process. Based on the existing partial-order relationship among the contents of a curricular domain, we can draw a directed graph with several chains of topics representing a natural way of progressing toward the profile competences. We propose a new procedure in continuous assessment by introducing an active/retroactive model, based on the aforementioned chain(s) of topics, which aims to identify those competences that have, and those that have not, been adequately achieved. With this in mind, we suggest introducing a retroactive impact on the outcome assessment of the concerned competencies evaluated in the corresponding chain(s) of topics. These retroactive impacts might be amplified by the introduction of a grade impact amplifier as a continuous assessment procedure, based on the greater experience and knowledge of the students as the course advances. In general, any subject is composed of different topics, and each topic is developed through the execution, with different relevance, of a number of activities. Relationships between activities, topics, and competences can be arranged in a 3D matrix array which we call the ATC cuboid. The ATC cuboid uses a binary assessment as a check of an activity in each of the core competencies. In this way, we obtain a matrix structure of the performance of the student over a course, which is the basis for designing individualized curricular strategies with the goal of achieving the required level of development of each competence. We develop the aforementioned ATC cuboids for a sample of students and present a comparison between this method and a more traditional method used with Aerospace Engineering students in the Design Engineering School ETSID at Universitat Politècnica de València (Valencia, Spain). / Mínguez Aroca, FD. (2016). Computational assessing model based on performance and dynamic assignment of curriculum contents [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61781
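A minimal sketch of the ATC cuboid idea follows, using a boolean activities × topics × competences array and a per-competence achievement summary. The array sizes, random checks, and pass threshold are invented for illustration.

```python
# Minimal sketch of an ATC cuboid: a boolean array indexed by
# (activity, topic, competence), where True marks that the student's
# performance on that activity verified that competence within the topic.
# Sizes, the random fill, and the threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_act, n_top, n_comp = 6, 3, 4
atc = rng.random((n_act, n_top, n_comp)) > 0.4   # one student's binary checks

# Fraction of (activity, topic) checks passed, per competence
achieved = atc.mean(axis=(0, 1))
THRESHOLD = 0.6                                   # made-up required level
for c, level in enumerate(achieved):
    status = "ok" if level >= THRESHOLD else "reinforce"
    print(f"competence {c}: {level:.2f} -> {status}")
```

Summaries like this are what would drive the individualized curricular strategies: competences below the required level point back to the chains of topics where a retroactive impact should be applied.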
689

Разработка метода прогнозирования селевых потоков на основе технологии глубокого обучения : магистерская диссертация / Development of debris flow forecasting method based on deep learning technology

Ян, Х., Yang, H. January 2024 (has links)
To address the issues of low accuracy, poor adaptability, and weak interpretability in existing models for predicting debris flow hazards, a new prediction method is proposed. Using 159 disaster points in the Nujiang River Basin in China as a case study, 15 influencing factors are selected, and a tripartite combined weighting method is used to evaluate the risk levels of debris flow points. Subsequently, the CNN-BiGRU-Attention model is used to predict the hazard of debris flows. The improved KOA algorithm (IKOA) is employed for hyperparameter optimization. Finally, the SHAP framework is introduced to enhance the interpretability of the model's prediction results. The results show that, compared to the 13 currently most commonly used prediction models, the IKOA-CNN-BiGRU-Attention model exhibits the best predictive performance.
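A minimal sketch of the CNN-BiGRU-Attention architecture shape is given below. All layer sizes are hypothetical; the thesis additionally tunes such hyperparameters with IKOA and interprets predictions with SHAP.

```python
# Minimal sketch of a CNN-BiGRU-Attention classifier over a vector of
# influencing factors (all sizes hypothetical, not the thesis's tuned ones).
import torch
import torch.nn as nn

class CnnBiGruAttention(nn.Module):
    def __init__(self, n_features=15, hidden=32, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=3, padding=1)  # local patterns
        self.gru = nn.GRU(16, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)     # attention scores per position
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_features)
        h = torch.relu(self.conv(x.unsqueeze(1)))   # (B, 16, n_features)
        out, _ = self.gru(h.transpose(1, 2))        # (B, n_features, 2*hidden)
        w = torch.softmax(self.score(out), dim=1)   # attention over positions
        ctx = (w * out).sum(dim=1)                  # weighted context vector
        return self.head(ctx)

model = CnnBiGruAttention()
logits = model(torch.randn(4, 15))         # 4 samples, 15 influencing factors
print(logits.shape)                        # torch.Size([4, 2])
```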
690

Two-stage combinatorial optimization framework for air traffic flow management under constrained capacity

Kim, Bosung 08 June 2015 (has links)
Air traffic flow management is a critical component of air transport operations because at some point in time, often very frequently, one or more of the critical resources in the air transportation network has significantly reduced capacity, resulting in congestion and delay for airlines and other entities and individuals who use the network. Typically, these “bottlenecks” are noticed at a given airport or terminal area, but they also occur in en route airspace. The two-stage combinatorial optimization framework for air traffic flow management under constrained capacity that is presented in this thesis represents an important step towards the full consideration of the combinatorial nature of air traffic flow management decisions, which is often ignored or dealt with via priority-based schemes. It also illustrates the similarities between two traffic flow management problems that heretofore were considered to be quite distinct. The runway systems at major airports are highly constrained resources. From the perspective of arrivals, unnecessary delays and emissions may occur during peak periods when one or more runways at an airport are in great demand while other runways at the same airport are operating under their capacity. The primary cause of this imbalance in runway utilization is that the traffic flow into and out of the terminal areas is asymmetric (as a result of airline scheduling practices), and arrivals are typically assigned to the runway nearest the fix through which they enter the terminal areas. From the perspective of departures, delays and emissions occur because arrivals take precedence over departures with regard to the utilization of runways (despite the absence of binding safety constraints), and because departure trajectories often include level segments that ensure “procedural separation” from arriving traffic while planes are not allowed to climb unrestricted along the most direct path to their destination. Similar to the runway systems, the terminal radar approach control facilities (TRACON) boundary fixes are also constrained resources of the terminal airspace. Because some arrival traffic from different airports merges at an arrival fix, a queue for the terminal areas generally starts to form at the arrival fix, caused by delays due to heavy arriving traffic streams. The arrivals must then absorb these delays by path stretching and adjusting their speed, resulting in unplanned fuel consumption. However, these delays are often not distributed evenly. As a result, some arrival fixes experience severe delays while, similar to the runway systems, the other arrival fixes might experience no delays at all. The goal of this thesis is to develop a combined optimization approach for terminal airspace flow management that assigns a TRACON boundary fix and a runway to each flight while minimizing the required fuel burn and emissions. The approach lessens the severity of terminal capacity shortage caused by an imbalance of traffic demand by shunting flights from current positions to alternate runways. This is done by considering every possible path combination. To attempt to solve the congestion of the terminal airspace at both runways and arrival fixes, this research focuses on two sequential optimizations. The fix assignments are dealt with by considering, simultaneously, the capacity constraints of fixes and runways as well as the fuel consumption and emissions of each flight.
The research also develops runway assignments with runway scheduling such that the total emissions produced in the terminal area and on the airport surface are minimized. The two-stage sequential framework is also extended to en route airspace. When en route airspace loses its capacity for any reason, e.g. severe weather conditions, air traffic controllers and flight operators plan flight schedules together based on the given capacity limit, thereby maximizing en route throughput and minimizing flight operators' costs. However, the current methods have limitations due to the lack of consideration of the combinatorial nature of air traffic flow management decisions. One of the initial attempts to overcome these limitations is the Collaborative Trajectory Options Program (CTOP), which will be initiated soon by the Federal Aviation Administration (FAA). The developed two-stage combinatorial optimization framework fits this CTOP perfectly from the flight operator's perspective. The first stage is used to find an optimal slot allocation for flights while satisfying the ration-by-schedule (RBS) algorithm of the FAA. To solve the formulated first-stage problem efficiently, two different solution methodologies, a heuristic algorithm and a modified branch and bound algorithm, are presented. Then, flights are assigned to the resulting optimized slots in the second stage so as to minimize the flight operator's costs.
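As a concrete illustration of the ration-by-schedule principle that the first stage must respect, the sketch below greedily gives each flight, in original-schedule order, the earliest slot at or after its scheduled time. Flight IDs and times are hypothetical, and the thesis's second stage would then assign flights to these slots to minimize operator cost.

```python
# Minimal sketch of the ration-by-schedule (RBS) idea: process flights in
# original-schedule order and give each one the earliest capacity slot at
# or after its scheduled time (times in hypothetical minutes).

def rbs(flights, slots):
    """flights: {id: scheduled_time}; slots: list of capacity-slot times."""
    allocation, free = {}, sorted(slots)
    for fid, sched in sorted(flights.items(), key=lambda kv: kv[1]):
        for i, t in enumerate(free):
            if t >= sched:
                allocation[fid] = free.pop(i)   # earliest feasible slot
                break
        else:
            allocation[fid] = None              # spills beyond the program
    return allocation

flights = {"UA1": 0, "DL2": 5, "AA3": 5, "WN4": 20}
slots = [0, 15, 30, 45]                          # reduced-capacity slots
print(rbs(flights, slots))
# {'UA1': 0, 'DL2': 15, 'AA3': 30, 'WN4': 45} -> delays of 0/10/25/25 min
```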
