321

Semantic based cloud broker architecture optimizing users satisfaction / Une architecture de cloud broker basée sur la sémantique pour l'optimisation de la satisfaction des utilisateurs

Fakhfakh, Inès 07 May 2015 (has links)
Cloud Computing is a dynamic new technology with huge potential for enterprises and markets. The dynamicity and increasing complexity of Cloud architectures involve several management challenges. In this work, we are interested in Service Level Agreement (SLA) management. There is currently no standard for expressing Cloud SLAs, so providers describe their SLAs in different manners and languages, which leaves users puzzled about the choice of their Cloud provider. To overcome these problems, we introduce a Cloud Broker Architecture that manages SLAs between providers and consumers. It aims to assist users in establishing and negotiating SLA contracts and to help them find the provider that best satisfies their service-level expectations. The SLA contracts established by our broker are formalized as OWL ontologies, which hide the heterogeneity of the distributed Cloud environment and enable interoperability between Cloud actors. By combining our ontology with the proposed inference rules, we can detect violations of the SLA contract, thereby ensuring that user satisfaction is sustained over time. Based on the requirements specified in the SLA contract, our Cloud Broker assists users in selecting the right provider using a multi-attribute utility theory (MAUT) method. This method relies on utility functions representing the user's degree of satisfaction. To obtain accurate results, we model the utilities of both functional and non-functional attributes, with a personalized utility curve for each criterion under negotiation, so that the broker best satisfies consumer requirements from both functional and non-functional points of view.
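The multi-attribute selection step lends itself to a brief illustration. The sketch below aggregates per-criterion utility functions into a single super-utility score, in the spirit of the MAUT method described above; the criteria, weights, and utility curves are invented for illustration and are not the thesis's actual model.

```python
# Minimal sketch of multi-attribute utility scoring for provider selection.
# Criteria, weights, and utility curves are illustrative assumptions only.

def price_utility(price, budget=100.0):
    """Decreasing utility: cheaper offers score higher."""
    return max(0.0, 1.0 - price / budget)

def availability_utility(availability):
    """Increasing utility on [0, 1], rewarding high availability sharply."""
    return availability ** 4

def score(offer, weights):
    """Super-utility: weighted sum of the per-criterion utilities."""
    utilities = {
        "price": price_utility(offer["price"]),
        "availability": availability_utility(offer["availability"]),
    }
    return sum(weights[c] * utilities[c] for c in weights)

offers = {
    "provider_a": {"price": 40.0, "availability": 0.999},
    "provider_b": {"price": 25.0, "availability": 0.95},
}
weights = {"price": 0.4, "availability": 0.6}  # assumed user preferences

best = max(offers, key=lambda name: score(offers[name], weights))
print(best, {name: round(score(o, weights), 3) for name, o in offers.items()})
```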
322

Vidéoconférence basée sur les ressources internes de l'entreprise / Video conference based on enterprise desktop grid

Sorokin, Roman 24 February 2017 (has links)
There exist two classical and well-understood approaches to the video processing tasks involved in videoconferencing. The first uses a centralized Multipoint Control Unit (MCU). In the second approach, the video processing tasks are handled directly in the endpoints; performance is then restricted by device characteristics, especially in the case of mobile devices. In this thesis, we propose a third, alternative approach: a system that distributes real-time video processing tasks across the enterprise desktop grid. A dedicated Multi-Attribute Decision Making method is designed to take into account the variety of attributes impacting Quality of Experience, and a number of task distribution and redistribution algorithms are elaborated. We then test the proposed approach by means of simulation in order to study the impact of the main critical parameters. The proposed approach raises the question of the extent to which a PC can serve as a platform for a media server and how CPU load affects the quality of the provided video conference; we estimate the perceived quality of video streams in order to investigate the influence of CPU load. We also elaborate algorithms combining the Cloud/Fog approach with different types of media servers, yielding a conferencing solution optimized in terms of cost for both provider and consumer as well as in terms of end-user experience. Combining the elaborated algorithms and architecture with the experimental results, we conclude that the proposed solution can be used as a novel approach to the videoconferencing problem.
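The redistribution idea can be illustrated with a short sketch. The fragment below moves tasks off a desktop once its CPU load crosses a threshold, in the spirit of the redistribution algorithms mentioned above; the threshold, field names, and load model are assumptions, not the thesis's design.

```python
# Sketch of a redistribution pass: if a desktop's CPU load crosses a
# threshold, its video tasks are reassigned to the least-loaded peer.
# The threshold and the per-task cost model are illustrative assumptions.

OVERLOAD = 0.85  # assumed CPU load ceiling for acceptable video quality

def redistribute(hosts):
    """Move tasks off overloaded hosts; returns list of (task, from, to)."""
    moves = []
    for h in hosts:
        while h["load"] > OVERLOAD and h["tasks"]:
            task = h["tasks"].pop()
            h["load"] -= task["cost"]
            target = min((p for p in hosts if p is not h),
                         key=lambda p: p["load"])
            target["tasks"].append(task)
            target["load"] += task["cost"]
            moves.append((task["id"], h["name"], target["name"]))
    return moves

hosts = [
    {"name": "desk-01", "load": 0.95,
     "tasks": [{"id": "mix-42", "cost": 0.30}]},
    {"name": "desk-02", "load": 0.40, "tasks": []},
]
print(redistribute(hosts))  # [('mix-42', 'desk-01', 'desk-02')]
```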
323

Mentorship in Athletic Training: A Two-Phased Study

Stiltner, Sara 03 December 2019 (has links)
No description available.
324

”Tidigare sköt man i benet för att skrämmas, nu skjuter man i huvudet för att döda” : En kvalitativ innehållsanalys om hur gängkriminalitet framställs i massmedia / "Earlier they were shooting in the leg to scare, but now they are shooting in the head to kill" : A qualitative content analysis on how gang crime is presented in mass media.

Badan, Madeleine, Eriksson, Felicia January 2021 (has links)
Gang crime is a frequently covered topic in the mass media and in many cases generates sensational headlines. We have chosen to carry out a qualitative thematic content analysis to investigate how gang crime is presented in a limited selection of mass-media articles, since previous research shows weak research engagement with the media's messages about crime and social disorder. We also investigate whether and how women involved in gang crime are portrayed in the media, given that men are used as the norm within criminology. The study takes constructionism as its scientific point of departure: concepts and knowledge are socially constructed, and it is therefore not possible to reproduce the image of an object in an objective way. The results are analyzed using concepts derived from social constructionist theory and agenda-setting theory. Previous research shows how the mass media choose to produce publications according to interests, which has consequences for both individuals and society. To achieve our aim, we used 14 articles published in three of Sweden's largest online newspapers. The articles were coded through a systematic analysis scheme applying five different perspectives/themes, drawing on research into the concept of gang crime, norms, and the media's function and effects. The results show how the mass media construct the image of gang crime through attributes, which in turn can influence the public's perception of the problem. The media's portrayal of gang crime can lead to prejudice against both social groups and specific areas, which is linked to feelings of concern and insecurity among the public.
325

Investigating "Lithic Scatter" Variability: Space, Time, and Form

Manning, Kate M 07 May 2016 (has links)
Using flake dimensions and attributes commonly agreed to be associated with site use, occupation age, and occupation duration, it was argued that relative estimations of site function and occupation age could be determined using debitage. This is particularly beneficial for assemblages that have few or no diagnostics that could provide a general cultural period for one or more occupations at a site. The results of this study suggest that, although certain attributes are generally associated with lithic production stage, relative age, and duration indicators, not all of them were applicable within this study. The methods employed were relatively successful; however, reducing the number of classes, removing one dimension, and including more sites that meet the definition of lithic scatter are needed. Furthermore, testing occupation duration using the number of breaks on a flake is not possible unless the site is proven to be a single-occupation site.
326

Optimering av vägdataleveranser till den nationella vägdatabasen med hjälp av byggnadsinformationsmodellering / Optimization of road data deliveries to the national road database using building information modeling

Atabas, Burak, Yildirim, Siho January 2019 (has links)
The Swedish Transport Administration uses the national road database, NVDB, to collect and manage delivered road data. The national road database is therefore very important for the management and maintenance of roads. The Swedish Transport Administration runs projects in which new roads for vehicles, cyclists, and pedestrians are built and then put into use. Road data from these projects is delivered to NVDB, and with the help of this information the new roads can be maintained to retain a long lifespan. According to the Swedish Transport Administration, deliveries of road data from projects to the national road database can still be improved, as they are currently not sufficiently efficient. The Swedish Transport Administration has an interest in improving deliveries of road data to NVDB in order to achieve increased digitalization and simplified control over Sweden's road infrastructure. The potential for increased simplicity and automation of road data handling exists, which would allow the Swedish Transport Administration to achieve a more sustainable infrastructure and, in turn, reduce environmental impact. The purpose of this work is to help the Swedish Transport Administration achieve a more sustainable infrastructure by simplifying road maintenance. The work is divided into two objectives. The first questions whether the Swedish Transport Administration's general requirements for delivery of road data to NVDB from projects are sufficiently comprehensive and clear. The second investigates, through a case study, how deliveries of road data to NVDB in the project E4 Bypass Stockholm Lindvreten North can be optimized. The methods used were mainly qualitative and included a literature study, a case study, and interviews. Quantitative methods were also used, through data analysis and the development of a solution concept that was tested with the aim of simplifying the planning and maintenance processes. The results suggest that the Swedish Transport Administration's requirements for road data deliveries to NVDB from projects are inadequate. A number of measures are proposed: each road data delivery should be in a fixed file format; the directives in the requirements must be clarified and made more effective; road data should always contain X, Y, and Z coordinates and be converted to a specific coordinate system; and before each project, it must be stated at an early stage that the road geometry should be represented as a reference line in a specific file format, linked to the associated components of the road. The study also resulted in a number of proposed measures for optimizing road data deliveries to NVDB from the Lindvreten North project, sketched in the example below. First, a reference line file should be created as previously mentioned. Second, a template must be prepared and filled with the mandatory information. Finally, the information in the template is integrated with the geometry of the reference line to create a file ready for delivery to NVDB in an automated manner. It was concluded that the Swedish Transport Administration's requirements for delivery of road data to NVDB from projects are unclear and far too extensive. Using the solution concept developed, road data deliveries to NVDB from the Lindvreten North project can be optimized. In this way, the necessary information will be available to the maintenance section of the Swedish Transport Administration, and maintenance plans can be established that contribute to a more sustainable infrastructure.
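A minimal sketch of that final integration step follows, assuming CSV input for the reference line and a GeoJSON-like delivery format; the actual NVDB delivery format and field names are not specified in the abstract and are placeholders here.

```python
# Illustrative sketch of the delivery workflow the abstract outlines:
# merge template attributes with reference-line geometry into one
# delivery file. Field names and the CSV/GeoJSON formats are assumptions.
import csv
import json

def read_reference_line(path):
    """Read X, Y, Z vertices of the road's reference line from a CSV."""
    with open(path, newline="") as f:
        return [(float(r["x"]), float(r["y"]), float(r["z"]))
                for r in csv.DictReader(f)]

def build_delivery(geometry, template, out_path):
    """Attach road attributes from the template to the geometry and
    write a single, fixed-format file ready for delivery."""
    feature = {
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": geometry},
        "properties": template,  # e.g. road type, speed limit, surface
    }
    with open(out_path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": [feature]}, f)

geometry = read_reference_line("reference_line.csv")   # assumed input file
template = {"road_type": "cycle path", "speed_limit_kmh": 30}
build_delivery(geometry, template, "nvdb_delivery.geojson")
```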
327

Numeric Conversions Ameliorate the Framing Effect

Sinayev, Aleksandr 11 July 2013 (has links)
No description available.
328

Seismic and Well Log Attribute Analysis of the Jurassic Entrada/Curtis Interval Within the North Hill Creek 3D Seismic Survey, Uinta Basin, Utah, A Case History

ONeal, Ryan J. 18 July 2007 (has links) (PDF)
3D seismic attribute analysis of the Jurassic Entrada/Curtis interval within the North Hill Creek (NHC) survey has been useful in delineating reservoir-quality, eolian-influenced dune complexes. Amplitude, average reflection strength, and spectral decomposition appear to be most useful in locating reservoir-quality dune complexes, outlining their geometry, and possibly displaying lateral changes in thickness. Cross-sectional views displaying toplap features likely indicate an unconformity between Entrada clinoforms below and Curtis planar beds above; this relationship may aid the explorationist in discovering this important seismic interval. Seismic and well-log attribute values were cross plotted and have revealed associations between these data. The cross plots are accompanied by regression lines and R² values that support our interpretations. Although reservoir-quality dune complexes may be delineated, the Entrada/Curtis play appears to be mainly structural: the best producing wells in the survey are associated with structural or stratigraphic relief and the thickest Entrada/Curtis intervals, and structural and stratigraphic traps are not always associated with laterally extensive dune complexes. Time-structure maps as well as isochron maps have proven useful in delineating the thickest and/or gas-prone portions of the Entrada/Curtis interval as well as areas with structural and stratigraphic relief. We have observed that the zones of best production are associated with low gamma-ray values (40-60 API), and these low values are associated with zones of high amplitude. Thus, maximum peak amplitude as a seismic attribute may delineate areas of higher sand content (i.e., dune complexes), whereas zones of low amplitude may represent areas of lower sand content (i.e., muddier interdune or tidal-flat facies). A lack of significant average porosity does not seem to be related to a lack of production; in fact, the best producing wells have been drilled in Entrada/Curtis intervals where average porosity is near 4%. There are, however, zones within the upper portion of the Entrada/Curtis that are 40 ft (12.2 m) thick and have porosities between 14% and 20%. By combining derived attribute maps with the observed cross-plot relationships, it appears that the best producing intervals within the Entrada/Curtis are those associated with high amplitudes, API values from 40 to 60, and structural relief.
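The cross-plot methodology can be made concrete with a small sketch: fit a regression line between a seismic attribute and a well-log attribute and report R². The sample values below are fabricated placeholders chosen only to mirror the reported inverse amplitude/gamma-ray relationship, not survey data.

```python
# Sketch of the cross-plot analysis described above: regress a well-log
# attribute against a seismic attribute and report R^2.
import numpy as np

# Assumed example pairs: seismic peak amplitude vs. gamma ray (API).
amplitude = np.array([0.82, 0.75, 0.60, 0.48, 0.35, 0.30])
gamma_ray = np.array([42.0, 47.0, 55.0, 63.0, 74.0, 78.0])

slope, intercept = np.polyfit(amplitude, gamma_ray, 1)
predicted = slope * amplitude + intercept

ss_res = np.sum((gamma_ray - predicted) ** 2)   # residual sum of squares
ss_tot = np.sum((gamma_ray - gamma_ray.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"GR = {slope:.1f} * amplitude + {intercept:.1f}, R^2 = {r_squared:.3f}")
```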
329

Study of bitwise operations on non-scarce attribute based data structures in PostgreSQL

Eschmann, Marcel January 2018 (has links)
This report investigates the viability of bitwise operations on non-scarce attribute-based data structures in PostgreSQL. For applications where computation can't be avoided, it most probably can be optimized. In an attempt to bring the computation closer to the hardware and the underlying data, operations directly on the database system are explored, taking inspiration from the research field of comparative genomics. With the case study of an online job platform in mind, where possible matchings between candidate and job descriptions are calculated by a matching engine, a binary encoding is proposed and the computational components identified. The ultimate goal was to evaluate the scalability of the bitwise strategy with respect to the current matching engine. Through an iterative approach, this report conducts quantitative experiments on the presented components. Most notably, an implementation of the population count in the form of a C extension was introduced. It was found that, even for large sequence lengths, the operation is highly efficient. Among the chosen algorithms (Lookup Table, Hamming Weight, intrinsic functions, and unrolled inline assembly), the 64-bit intrinsic function displayed the best performance. Benchmarks determined that the proposed bitwise approach is an excellent strategy for the outlined use case. Despite the trade-off of additional complexity in the encoding and decoding of data, the speedup is so significant that the targeted user base of 100,000 can easily be managed, allowing for the deprecation of caching mechanisms.
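Two of the compared population-count variants are easy to sketch. The fragment below renders the Lookup Table and Hamming Weight algorithms in Python and applies them to attribute bitmasks; the report's actual implementation is a PostgreSQL C extension, so this is an illustration of the algorithms only, and the example bitmasks are invented.

```python
# Sketch of two population-count variants the report compares, rendered
# in Python for illustration; the real implementation is a C extension.

# Lookup-table variant: precompute popcounts for all byte values once.
BYTE_POPCOUNT = [bin(b).count("1") for b in range(256)]

def popcount_lookup(x: int) -> int:
    """Sum the table entries for each byte of x."""
    count = 0
    while x:
        count += BYTE_POPCOUNT[x & 0xFF]
        x >>= 8
    return count

def popcount_hamming64(x: int) -> int:
    """Hamming-weight (SWAR) variant for a 64-bit word."""
    x -= (x >> 1) & 0x5555555555555555                              # pairs
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)  # nibbles
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F                         # bytes
    return ((x * 0x0101010101010101) & 0xFFFFFFFFFFFFFFFF) >> 56

# Matching: count attributes shared by a job and a candidate bitmask.
job, candidate = 0b1011_0110, 0b1001_0111
shared = popcount_lookup(job & candidate)
assert shared == popcount_hamming64(job & candidate) == 4
```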
330

Simulated Annealing-based Multilink Selection Algorithm in SDN-enabled Avionic Networks

Luong, Doanh K., Ali, Muhammad, Li, Jian-Ping, Asif, Rameez, Abdo, K. 03 November 2021 (has links)
In this paper, a novel multilink selection framework is developed for applications with various quality-of-service (QoS) requirements in avionic systems, based on a multi-attribute decision-making model. Multilink configuration and multi-homing capabilities are generally required for aircraft operating in a heterogeneous wireless network environment, and two metaheuristic algorithms are proposed to solve this model while optimizing multilink selection performance. The first algorithm, called Analytic Hierarchy Process and Simulated Annealing (AHP-SA), uses a two-phase process: in phase one, an analytic hierarchy process (AHP) is used to choose the decision weight factors; in phase two, a simulated annealing process is applied to select suitable networks for the various service requests, based on the weights obtained in the first phase. Further, to improve customer satisfaction, a Simulated Annealing algorithm for Simultaneous Weights and Network Selection Optimisation (SA-SWNO) is developed, in which simulated annealing dynamically optimizes both the weight factors of the objective functions and the request-to-network assignment matrix. Simulation results demonstrate that both proposed algorithms outperform the commonly used price-based or QoS-based network selection schemes, with a much higher average satisfaction degree and lower computational complexity. / Funded by the Cockpit NetwOrk CoMmunications Environment Testing (COMET) Project under the European Commission's Clean Sky 2 programme, in partnership with the European aeronautical industry.
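The simulated-annealing core shared by AHP-SA and SA-SWNO can be sketched briefly. In the fragment below, the cost function, neighbour move, and cooling schedule are illustrative assumptions rather than the paper's exact design.

```python
# Minimal simulated-annealing sketch for request-to-network assignment.
# The cost function, neighbour move, and cooling schedule are assumed
# placeholders, not the paper's exact formulation.
import math
import random

N_REQUESTS, N_NETWORKS = 6, 3

def cost(assign):
    """Assumed energy: penalize squared load imbalance across networks."""
    loads = [assign.count(n) for n in range(N_NETWORKS)]
    mean = N_REQUESTS / N_NETWORKS
    return sum((load - mean) ** 2 for load in loads)

def neighbour(assign):
    """Move one randomly chosen request to a different network."""
    new = list(assign)
    i = random.randrange(N_REQUESTS)
    new[i] = random.choice([n for n in range(N_NETWORKS) if n != new[i]])
    return new

def anneal(t0=10.0, cooling=0.95, steps=500):
    current = [random.randrange(N_NETWORKS) for _ in range(N_REQUESTS)]
    temp = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the temperature cools.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        temp *= cooling
    return current, cost(current)

print(anneal())
```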
