241

Multi-objective hybrid meta-heuristics for operations sequencing problems in a job-shop environment

Frutos, Mariano 05 November 2010
The planning, scheduling, and control of production are concerned with designing, coordinating, and managing all the operations involved in running productive systems. In recent decades, many multi-objective optimization problems have arisen in this field and have been treated successfully with techniques based on meta-heuristics in general, and on evolutionary algorithms in particular. These techniques, without excluding others, are powerful tools for handling such problems in the context of production and logistics operations. The complexity of these problems stems from the efficiency criteria imposed on the various productive systems. This study develops and analyzes a procedure, framed within the structure of an evolutionary algorithm, for solving the Job-Shop Scheduling Problem (JSSP). In addition, the neighborhood of the candidate solutions is explored during the evolution itself, which significantly improves the results. The proposed algorithm uses a sequence-assignment encoding that represents permutations with repetition; this very simple and compact representation distinguishes this study from others. After the procedure finishes, a simulation stage is added to examine the solutions obtained in the resolution stage. A comparative analysis against other algorithms verifies the efficiency of the procedure. Finally, the technique is applied in several companies, allowing the results to be contrasted with reality.
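
To make the permutation-with-repetition encoding concrete, here is a minimal sketch of how such a chromosome can be built and decoded for the JSSP. The three-job instance, the semi-active decoding rule, and all names below are assumptions invented for illustration, not the author's implementation.

```python
import random

# Hypothetical instance: each job is an ordered list of (machine, time) operations.
JOBS = [
    [(0, 3), (1, 2), (2, 2)],  # job 0
    [(0, 2), (2, 1), (1, 4)],  # job 1
    [(1, 4), (2, 3), (0, 1)],  # job 2
]
NUM_MACHINES = 3

def random_chromosome(jobs):
    """Permutation with repetitions: job j appears once per operation,
    and the k-th occurrence of j denotes its k-th operation."""
    genes = [j for j, ops in enumerate(jobs) for _ in ops]
    random.shuffle(genes)
    return genes

def makespan(chromosome, jobs, num_machines):
    """Decode the chromosome into a semi-active schedule; return its makespan."""
    next_op = [0] * len(jobs)        # index of each job's next operation
    job_ready = [0] * len(jobs)      # completion time of each job's last operation
    mach_ready = [0] * num_machines  # completion time of each machine's last operation
    for j in chromosome:
        machine, duration = jobs[j][next_op[j]]
        finish = max(job_ready[j], mach_ready[machine]) + duration
        job_ready[j] = mach_ready[machine] = finish
        next_op[j] += 1
    return max(job_ready)

chrom = random_chromosome(JOBS)
print(chrom, "-> makespan", makespan(chrom, JOBS, NUM_MACHINES))
```

The attraction of this representation is that every shuffle of the gene list decodes to a feasible schedule, so standard crossover and mutation operators never produce invalid offspring.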
242

Perceptual Agreement Between Multi-rater Feedback Sources in the Federal Bureau of Investigation

Corderman, David Sandt 04 May 2004
The use of multi-rater feedback as a way to analyze perceptions of job performance and leadership in the Federal Bureau of Investigation (FBI) was examined. Research in this domain is notable because this type of evaluation is now done with regularity in the private sector and is being adopted more extensively in the public sector, but is still used only to a limited extent in law enforcement. This research examined differences between self-assessments and the assessments of others (peers and subordinates) on dimensions of leadership, as measured by the same multi-rater instrument at two points in time. The research used a multi-rater survey instrument called the "Leadership Commitments and Credibility Inventory System" (LCCIS), designed by Keilty, Goldsmith, and Company, which is used in multiple industries and was expanded to capture characteristics considered important for FBI leaders. Results showed high ratings on a five-point Likert scale, as indicated by the mean ratings of self and others. Additionally, Z scores, t tests, and ANCOVA indicated that FBI supervisors did not overestimate their leadership, as shown by (1) an overall leadership measure at time two compared to time one, (2) greater perceptual agreement between others and self on the second multi-rater assessment than on the initial assessment, and (3) the statistical differences of means in all measured categories at time two versus time one. Various subcategories of the assessment showed a mixture of statistically non-significant results and indicated that subordinates and peers perceived leaders differently. Further, analysis of two unique dimensions of the LCCIS, "Manage Diversity" and "Build Public Trust", showed exceptionally high results. The implications of the present research are that leadership in the FBI, as measured across different dimensions, is strong; yet there is no evidence that leaders or others in this organization change their perceptions over time. These findings may point to the need for multi-rater instruments to be used in concert with personal development plans in order to improve the perception of leadership. / Ph. D.
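
As a rough illustration of the self-other comparisons described above (not the study's actual data, instrument, or full ANCOVA design), a paired t-test on hypothetical five-point Likert means might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical data: each row pairs one supervisor's mean self-rating with
# the mean rating given by others (peers and subordinates).
self_ratings  = np.array([4.2, 3.9, 4.5, 4.0, 4.3, 3.8, 4.1, 4.4])
other_ratings = np.array([4.0, 4.1, 4.3, 3.9, 4.2, 4.0, 4.0, 4.5])

# Paired t-test: is the mean self-other difference distinguishable from zero?
# A non-significant result would be consistent with "no overestimation".
t_stat, p_value = stats.ttest_rel(self_ratings, other_ratings)
print(f"mean self-other difference = {np.mean(self_ratings - other_ratings):+.3f}")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```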
243

Multi-dimensional Flow and Combustion Diagnostics

Li, Xuesong 10 June 2014
Turbulent flows and turbulent flames are inherently multi-dimensional in space and transient in time. Therefore, multi-dimensional diagnostics capable of resolving such spatial and temporal dynamics have long been desired, and the purpose of this dissertation is to investigate three such diagnostics, both for the fundamental study of flow and combustion processes and for the applied research of practical devices. These multi-dimensional optical diagnostics are a 2D (two-dimensional) two-photon laser-induced fluorescence (TPLIF) technique, a 3D hyperspectral tomography (HT) technique, and a 4D tomographic chemiluminescence (TC) technique. The TPLIF technique targets temporally resolved 2D distributions of fluorescent radicals, the HT technique targets high-speed measurement of temperature and chemical species concentration, and the TC technique targets turbulent flame properties. This dissertation describes the numerical and experimental evaluation of these techniques to demonstrate their capabilities and understand their limitations. The specific aspects investigated include spatial resolution, temporal resolution, and tomographic inversion algorithms. The results obtained in this dissertation are expected to lay the groundwork for the further development and expanded application of these techniques in the study of turbulent flow and combustion processes. / Ph. D.
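
For readers unfamiliar with tomographic inversion, the sketch below shows one classic scheme, the algebraic reconstruction technique (ART, i.e., Kaczmarz iteration), applied to a toy 2x2 field; the geometry and data are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

def art(A, b, sweeps=200, relax=1.0):
    """Kaczmarz/ART: repeatedly project the estimate onto the hyperplane
    defined by each line-of-sight measurement a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

# Toy 2x2 field (4 pixels) probed along 4 ray paths (rows of A).
field = np.array([1.0, 2.0, 3.0, 4.0])     # "true" pixel values
A = np.array([[1, 1, 0, 0],                # two horizontal rays
              [0, 0, 1, 1],
              [1, 0, 1, 0],                # two vertical rays
              [0, 1, 0, 1]], dtype=float)
b = A @ field                              # simulated projection data
print(art(A, b).round(3))                  # ~ [1. 2. 3. 4.]
```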
244

Sensorium: The Sum of Perception

Irizarry, Yoeldi B. 28 September 2017
We live in a world full of stimuli. We can see, smell, feel, taste, and hear because stimuli surround us. However, when we are conceived in our mothers' wombs we are formed with no senses; during that time we are totally isolated from our environment. Interestingly, the senses start to develop only after 8 weeks of fetal development, with touch the first to mature; smell, taste, hearing, and sight appear later on. Humans connect to their surroundings through the senses, and as these senses develop, the brain starts applying them to perceive the environment. Through our senses we interact with our environment, learn, pass on knowledge, and form, create, and treasure memories. It is because of our senses that we can enjoy the beautiful colors of autumn, the balmy breeze of late summer days, or the avian symphony of spring. Each sense is a link through which we connect our inner self with the outside world, allowing us to experience each setting uniquely. However, when one or more of the senses is missing, those links are broken and the outside world is perceived very differently from individual to individual. Experiencing the built environment is no different. Since buildings are usually designed with a fully sensory individual in mind, sensory-impaired populations typically find it difficult to navigate or make use of the spaces a building offers. The following pages of this thesis present a universal access system as a tool that allows those who lack one or more of the senses to fully enjoy and use spaces in the same way any fully sensorial person can. Another important aspect explored architectonically is that of the social inequalities which many disabled individuals face on a regular basis as users of a building. / Master of Architecture
245

Integrated multi-spectral imaging, analysis and treatment of an Egyptian tunic.

Haldane, E.A., Gillies, Sara, O'Connor, Sonia A., Batt, Catherine M., Stern, Ben January 2010
246

Improving broadcast performance in multi-radio multi-channel multi-rate wireless mesh networks.

Qadir, Junaid, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008
This thesis addresses the problem of efficient broadcast in a multi-radio multi-channel multi-rate wireless mesh network (MR²-MC WMN). In such a network, nodes are equipped with multiple radio network interface cards, each tuned to an orthogonal channel, that can dynamically adjust their transmission rate by choosing a modulation scheme appropriate for the channel conditions. We choose "broadcast latency", defined as the maximum delay between a packet's network-wide broadcast at the source and its eventual reception at all network nodes, as the efficiency metric of broadcast performance. The problem of constructing a broadcast forwarding structure with minimal broadcast latency is referred to as the "minimum-latency-broadcasting" (MLB) problem. While previous research on broadcast in single-radio single-rate wireless networks has highlighted the wireless medium's "wireless broadcast advantage" (WBA), little is known about how the new features of MR²-MC WMNs may be exploited. We study in this thesis how the availability of multiple radio interfaces (tuned to orthogonal channels) at WMN nodes, together with the WMN's multi-rate transmission capability and the WBA, might be exploited to improve broadcast latency. We show the MLB problem for MR²-MC WMNs to be NP-hard, and resort to heuristics for its solution. We divide the overall problem into two sub-problems, which we address in two separate parts of this thesis. In the first part, the MLB problem is defined for single-radio single-channel multi-rate WMNs, where nodes are equipped with a single radio tuned to a common channel. In the second part, the MLB problem is defined for MR²-MC WMNs, where nodes are equipped with multiple radios tuned to multiple orthogonal channels. We demonstrate that broadcasting in multi-rate WMNs is significantly different from broadcasting in single-rate WMNs, and that broadcast performance in multi-rate WMNs can be significantly improved by exploiting the multi-rate feature and the availability of multiple interfaces. We also present two alternative MLB broadcast frameworks and, for each framework, specific centralized and distributed algorithms that exploit the multiple interfaces at a WMN node, the multi-rate feature, and the WBA of the MR²-MC WMN to deliver improved broadcast latency.
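
The central tension the thesis exploits — under the WBA, one transmission reaches all of a forwarder's children, but it must use the rate that the slowest child can decode — can be sketched as follows. The tree, the link rates, and the assumption of interference-free, immediately scheduled transmissions are all illustrative, not the thesis's actual model.

```python
# Hypothetical broadcast tree: children[u] lists (child, achievable_rate_mbps).
PACKET_BITS = 12_000  # one 1500-byte packet

children = {
    "src": [("a", 54.0), ("b", 6.0)],
    "a":   [("c", 24.0), ("d", 12.0)],
    "b":   [], "c": [], "d": [],
}

def broadcast_latency(root):
    """Maximum reception delay over all nodes, in seconds. Each forwarder
    makes one transmission at the rate of its slowest child (the WBA)."""
    worst = 0.0
    stack = [(root, 0.0)]
    while stack:
        node, t = stack.pop()
        kids = children[node]
        if kids:
            rate_bps = min(r for _, r in kids) * 1e6  # slowest child governs
            t_recv = t + PACKET_BITS / rate_bps
            stack.extend((kid, t_recv) for kid, _ in kids)
            worst = max(worst, t_recv)
    return worst

print(f"broadcast latency = {broadcast_latency('src') * 1e3:.3f} ms")
```

Here the slow node "b" forces the source's single transmission down to 6 Mbps; a latency-aware structure might instead reach "b" through a relay so the source can transmit faster — exactly the kind of trade-off an MLB heuristic navigates.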
247

An Evolutionary Algorithm For Multiple Criteria Problems

Soylu, Banu 01 January 2007
In this thesis, we develop an evolutionary algorithm for approximating the Pareto frontier of multi-objective continuous and combinatorial optimization problems. The algorithm evolves the population of solutions towards the Pareto frontier and distributes it over the frontier in order to maintain a well-spread representation. The fitness score of each solution is computed with a Tchebycheff distance function and a non-dominated sorting approach. Each solution chooses its own favorable weights for the Tchebycheff distance function. Seed solutions in the initial population and a crowding measure also help to achieve satisfactory results. To test the performance of the algorithm, we use continuous and combinatorial problems. The continuous test problems, taken from the literature, have special difficulties that an evolutionary algorithm has to deal with; experimental results of our algorithm on these problems are provided. One of the combinatorial problems we address is the multi-objective knapsack problem, for which we carry out experiments on test data from the literature. We also work on two bi-criteria p-hub location problems and propose an evolutionary algorithm to approximate their Pareto frontiers, testing its performance on the Turkish Postal System (PTT) data set (TPDS) and the AP (Australian Post) and CAB (US Civil Aeronautics Board) data sets. The main contribution of this thesis is the development of a multi-objective evolutionary algorithm and its application to a number of multi-objective continuous and combinatorial optimization problems.
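
A minimal sketch of the weighted Tchebycheff scoring described above, in which each solution picks the weight vector most favorable to itself. The population values, the ideal-point estimate, and the grid search over weights are assumptions made for illustration.

```python
import numpy as np

def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff distance to the ideal point z*:
    max_i w_i * |f_i - z*_i| (smaller is better, for minimization)."""
    return np.max(weights * np.abs(objectives - ideal))

# Hypothetical bi-objective population (both objectives minimized).
pop = np.array([[2.0, 8.0], [4.0, 4.0], [7.0, 2.0]])
ideal = pop.min(axis=0)  # component-wise best seen so far

# Each solution chooses its own favorable weights: the vector minimizing
# its own score, searched here over a simple grid of normalized weights.
grid = [np.array([w, 1.0 - w]) for w in np.linspace(0.01, 0.99, 99)]
for x in pop:
    w_best = min(grid, key=lambda w: tchebycheff(x, w, ideal))
    score = tchebycheff(x, w_best, ideal)
    print(f"{x} -> weights {w_best.round(2)}, score {score:.3f}")
```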
249

Load Balancing of Multi-physics Simulation by Multi-criteria Graph Partitioning

Barat, Remi 18 December 2017
Multi-physics simulations couple several computation phases. When they are run in parallel on distributed-memory architectures, minimizing the simulation time requires in most cases balancing the workload across computation units for each computation phase. Moreover, the data distribution must minimize the communication it induces. This problem can be modeled as a multi-criteria graph partitioning problem. We associate with each vertex of the graph a vector of weights, whose components, called "criteria", model the workload of the vertex for each computation phase. The edges between vertices indicate data dependencies, and can be given a weight representing the communication volume transferred between the two vertices. The goal is to find a partition of the vertices that both balances the weights of each part for each criterion and minimizes the "edgecut", that is, the sum of the weights of the edges cut by the partition. The maximum allowed imbalance is prescribed by the user, and we search for a partition that minimizes the edgecut among all the partitions whose imbalance for each criterion is smaller than this threshold. Since this problem is NP-hard in the general case, this thesis aims at devising and implementing heuristics that compute such partitions efficiently; indeed, existing tools often return partitions whose imbalance exceeds the prescribed tolerance. Our study of the solution space, that is, the set of all partitions respecting the balance constraints, reveals that in practice this space is extremely large. Moreover, we prove in the mono-criterion case that a bound on the normalized vertex weights guarantees the existence of a solution and the connectivity of the solution space. Based on these theoretical results, we propose improvements to the multilevel algorithm. Existing tools implement many variations of this algorithm; by studying their source code, we highlight these variations and their consequences in light of our analysis of the solution space. Furthermore, we define and implement two initial-partitioning algorithms focused on returning a feasible solution: starting from a potentially imbalanced partition, they successively move vertices from one part to another. The first algorithm performs any move that reduces the imbalance, while the second performs at each step the move that reduces the imbalance the most. We present an original data structure that optimizes the choice of the vertex to move and leads to partitions whose imbalance is smaller on average than with existing methods. We describe the experimentation framework, named Crack, that we implemented in order to compare the various algorithms at stake. This comparison is performed by partitioning a set of instances including an industrial test case and several fictitious cases, and we define a method for generating realistic weight distributions corresponding to "Particle-in-Cell"-like simulations. Our results demonstrate the necessity of constraining the vertex weights during the coarsening phase of the multilevel algorithm. Moreover, we show the impact of the vertex ordering, which should depend on the graph topology, on the efficiency of the "Heavy-Edge" matching scheme. The various algorithms that we consider are implemented in an open-source graph partitioning software package called Scotch. In our experiments, Scotch and Crack returned a balanced partition for each execution, whereas MeTiS, currently the most widely used partitioning tool, fails regularly. Additionally, the edgecut of the solutions returned by Scotch and Crack is equivalent to or better than that of the solutions returned by MeTiS.
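
The two quantities being traded off — per-criterion load imbalance and edgecut — can be computed directly from a candidate partition, as the sketch below shows on an invented six-vertex, two-criterion instance (not one of the thesis's test cases).

```python
import numpy as np

# Hypothetical instance: one weight per criterion (computation phase) for
# each vertex, edges carrying communication volumes, and a 2-part partition.
vertex_weights = np.array([[2, 1], [1, 3], [2, 2], [3, 1], [1, 2], [2, 2]])
edges = [(0, 1, 4), (1, 2, 1), (2, 3, 3), (3, 4, 2), (4, 5, 5), (0, 5, 1)]
part = np.array([0, 0, 0, 1, 1, 1])  # part assigned to each vertex

def imbalances(weights, part, nparts=2):
    """Per-criterion imbalance: heaviest part load over average part load."""
    loads = np.array([weights[part == p].sum(axis=0) for p in range(nparts)])
    return loads.max(axis=0) / loads.mean(axis=0)

def edgecut(edges, part):
    """Total weight of edges whose endpoints lie in different parts."""
    return sum(w for u, v, w in edges if part[u] != part[v])

print("imbalance per criterion:", imbalances(vertex_weights, part).round(3))
print("edgecut:", edgecut(edges, part))
```

A partition is feasible when every per-criterion imbalance stays below the user's tolerance (e.g., 1.05); among feasible partitions, the one with the smallest edgecut is preferred.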
250

Take time to make time : What to consider when managing multi-channel sales systems with the objective to increase sales efficiency

ALM, RAGNAR, KYRÖNLAHTI, RUDY January 2016
Traditional sales systems have been disrupted by technological developments. In order to adapt, companies are changing the way they interact with their customers in business-to-business markets. In the last three decades, multi-channel strategies have spurred the proliferation of different sales channels and new ways of managing sales systems. The purpose of this research was to investigate what should be considered when managing multi-channel sales systems with the objective of increasing sales efficiency. The study investigated the current use of multi-channel sales systems in a business-to-business setting among industrial companies involved in the Swedish automotive industry. Multi-channel sales systems can be utilised to achieve many different objectives; this research, however, pays specific attention to how sales efficiency can be improved by utilising multi-channel sales systems in a business-to-business setting. The research employed an explorative case study, in which semi-structured and structured interviews were conducted at a case company and at companies that are first- or second-tier suppliers in the Swedish automotive industry. The qualitative data were analysed using thematic analysis. The empirical findings indicate that the most prevalent measure for increasing sales efficiency is to prioritise and allocate customers based on economic attractiveness. Furthermore, the key issues that impede sales efficiency in multi-channel sales systems are misaligned sales activities, deficient prioritisation procedures, insufficient promotion of customer value, and inadequate focus on customers. The findings highlight key areas to address and may provide guidelines for the design and management of multi-channel sales systems with the specific purpose of obtaining sales efficiency. The implications of this research are mainly practical and are aimed at supporting sales managers, or individuals in similar positions engaged in multi-channel sales system design and management, in obtaining sales efficiency. Managers should focus on aligning sales activities across the whole sales system, allocate customers according to prioritisation, and stay in line with market developments by understanding customer behaviours and perceptions.
