  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A comparison of whole life cycle costs of robotic, semi-automated, and manual build airport baggage handling systems

Bradley, Alexandre 05 1900 (has links)
This thesis proposes that a baggage handling system (BHS) environment can be defined and coupled to a whole life cycle cost (WLCC NPV) model. The results from specific experiments using the model can serve as the basis for commercially comparing BHS flight build types of any capacity and any BHS geographical location. The model examined three flight build types: (i) fully automatic build; (ii) semi-automatic build; and (iii) manual build. The model can calculate a busy-hour bag flow rate and replicate the baggage flow characteristics observed in real BHS operations. Whole life cycle cost (WLCC NPV) results are produced, and these form the basis on which the comparison of BHS types is made. An overall WLCC NPV scatter diagram was produced, which is a summation of each of the test sensitivities. The assumptions and limitations of the analysis are provided. It is proposed that the results, conclusions and recommendations will be of value to airports, airlines, and design consultants.
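The thesis's WLCC NPV model itself is not reproduced in the abstract. As a minimal sketch of the underlying idea, a whole life cycle cost can be computed as the net present value of an up-front capital cost plus discounted annual operating costs. All figures and build-type parameters below are illustrative assumptions, not values from the thesis.

```python
def npv(rate: float, cashflows) -> float:
    """Net present value of a cost stream; cashflows[t] is the cost in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def wlcc(capex: float, annual_opex: float, years: int, rate: float) -> float:
    """Whole life cycle cost: up-front capital plus discounted operating costs."""
    return npv(rate, [capex] + [annual_opex] * years)

# Hypothetical build-type parameters (illustrative only, not the thesis's data):
# automation trades higher capital cost for lower annual operating cost.
builds = {
    "fully_automatic": wlcc(capex=12e6, annual_opex=0.8e6, years=20, rate=0.05),
    "semi_automatic":  wlcc(capex=8e6,  annual_opex=1.4e6, years=20, rate=0.05),
    "manual":          wlcc(capex=4e6,  annual_opex=2.2e6, years=20, rate=0.05),
}
```

Comparing the resulting totals shows how a higher-capex, lower-opex build can win or lose depending on the discount rate and the evaluation horizon, which is exactly the kind of sensitivity a WLCC NPV comparison explores.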
12

Desenvolvimento de um sistema semi-automático para coleta e fracionamento do plâncton, medição de variáveis físicas e químicas da água e determinação do espectro de tamanho e biomassa do zooplâncton / Development of semi-automatic system for sampling and fractioning of plankton, measurement of chemical and physical variables of water, and determination of the size spectra and biomass of plankton

João Durval Arantes Junior 22 December 2006 (has links)
One of the main problems with limnological studies performed manually in the laboratory is the large effort, analysis time, and specialized labor required. These factors limit the number of samples that can be analyzed in a given study, since resources, whether reagents, funding, or time, are limited. In the present work a semi-automated system for measuring physical and chemical variables of the water was used. The system consists of a multi-parameter probe (Horiba U-22) and a global positioning system (GPS) coupled to a microcomputer, which take georeferenced measurements at short time intervals, allowing horizontal tracking of water characteristics. A semi-automatic system was also developed for size-fractionated sampling of the plankton community, using a battery-operated suction pump and a collecting filter with plankton nets of different mesh sizes. The collected material was photographed with a digital image acquisition system (a Zeiss microscope equipped with an AxionCan camera). In this work, software (Planktonscan) was produced that, from analysis of the captured images, estimates the measurements and dimensions of the organisms, calculates biovolumes and, using conversion factors, estimates biomass values. The software provides an interface for identification, calculates organism densities, and produces a graphical report with information on individual organisms and on the community. The equipment and software were tested in limnological analyses and plankton sampling in the Monjolinho reservoir, São Carlos, SP, in December 2005. The results were compared with those available in the literature and demonstrated the applicability of the system.
/ A major problem associated with the study of planktonic communities lies in the difficulty of analyzing the collected material, a long, time-consuming procedure. Biomass determination is also a step requiring great effort and subject to large errors. In the present work a semi-automated system for measuring physical and chemical variables in the water was developed. The system is made up of a flow pump, a multi-parameter probe and a global positioning system coupled to a microcomputer that performs measurements at short time intervals, allowing horizontal tracking of water quality in much shorter times than traditional methods. Another semi-automated device was developed for collecting separate plankton size fractions. It uses a battery-operated suction pump coupled to a filter with nets of different mesh sizes. The collected materials are then submitted to computer image acquisition (Axion Vision Zeiss System). Additionally, in this study software (Planktonscan) was produced that, taking the measurements of individual dimensions (length, width and height), calculates biovolume and, using conversion factors, calculates the biomass of each zooplankton organism identified in the sample. Both systems were tested, regarding the measurement of limnological variables and plankton sampling, in the Monjolinho Reservoir, SP. The performance was good, resulting in a larger number of points sampled (60) in a shorter sampling time (1 hour) than usually required. The biomass results provided by the Planktonscan software were compared with data from the literature obtained by the traditional gravimetric method for dry weight determination, and also with data generated from available mathematical models (length-dry weight regressions). The results were expressed as population densities, biomasses and size spectra, demonstrating the applicability of the models developed here.
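The Planktonscan calculation chain described above (organism dimensions to biovolume to biomass via conversion factors) can be sketched as follows. The ellipsoid approximation and the regression coefficients ln_a and b are generic placeholders, not the values used in the thesis; real coefficients are taxon-specific.

```python
import math

def biovolume_ellipsoid(length_um: float, width_um: float, height_um: float) -> float:
    """Approximate the organism as an ellipsoid: V = (pi/6) * L * W * H, in um^3."""
    return math.pi / 6 * length_um * width_um * height_um

def dry_weight_ug(length_mm: float, ln_a: float = 1.5, b: float = 2.84) -> float:
    """Length-dry-weight regression ln(W) = ln(a) + b * ln(L).

    ln_a and b are placeholder coefficients for illustration only; published
    regressions differ by zooplankton taxon."""
    return math.exp(ln_a + b * math.log(length_mm))
```

Summing the per-organism weights over all individuals measured in a sample, and dividing by the filtered volume, would then yield the biomass density the thesis compares against gravimetric determinations.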
13

An analysis of the user perception of an interface enabling recognition and designation of waste on a line in dynamic images / En analys av användarperceptionen av ett gränssnitt som möjliggör igenkännande och klassificering av avfall på en lina i dynamiska bilder

Jansson, Christina January 2015 (has links)
Semi-automated processes are becoming more common in the field of waste management. They are often more efficient and improve the waste operators’ work environment in terms of security and health factors. The French company Veolia, engaged in waste management, has for these reasons developed the Tri Télé-Opéré (TTO), a system where the direct contact between operator and waste is removed and the waste is separated via a touch screen. The waste line is currently shown as a series of images in the system. This explorative study investigated how a representation of the line was perceived when using dynamic images on the screen. Diverse sequences with various parameters (viewpoint, direction and pace) were studied in order to observe how they supported the recognition and designation performed by the operators working with the system. Qualitative results from user tests performed with both novices and waste operators identified parameters that should be considered when using dynamic images of the waste line. The study shows that assembled images captured at an angle of 90°, shown in slideshow mode, are preferred. The viewpoint should furthermore be clear and detailed, and the information should be displayed in a direction that corresponds to the long side of the screen, with a size that makes use of the screen surface. Moreover, the results suggest that the pace of the waste line could be increased if parameters allowing good recognition are used simultaneously. This study provides a first insight into how users perceive the TTO with dynamic images and which parameters could facilitate the work. It is furthermore believed that further development of this representation of information could improve the future work situation for the waste operators, and thus also the efficiency of the sorting process. / Semi-automated processes are becoming increasingly common in waste management. They are often more efficient and improve the waste operators’ work environment with regard to safety and health aspects. The French company Veolia, which works with the collection and handling of waste, has for this reason developed Tri Télé-Opéré (TTO), a system where the direct contact between sorter and waste has been removed and the waste is instead sorted via a touch screen. The system currently displays the waste line as a series of still images. This explorative study investigated how the line was perceived with dynamic images on the screen. Various sequences with different parameters (viewpoint, direction and speed) were studied to see how they supported the waste operators’ recognition and classification during sorting at the screen. Qualitative results from user tests performed with both novices and waste operators identified parameters that should be considered when presenting the waste line with dynamic images. The study shows that a slideshow of images captured at a 90° angle was preferred. The viewpoint should be clear, and the information should be displayed in a direction corresponding to the long side of the screen, with a size that makes use of the screen surface. The speed of the waste line can be increased if parameters enabling good recognition are used simultaneously. This study gives a first insight into how users experience the TTO with dynamic images, and which parameters could facilitate the work. Further development of this presentation of information could improve the future work situation for waste operators, and thereby also the efficiency of the sorting process.
14

An Approach For Computing Intervisibility Using Graphical Processing Units

Tracy, Judd 01 January 2004 (has links)
In large-scale entity-level military force-on-force simulations it is essential to know when one entity can visibly see another. This visibility determination plays an important role in the simulation and can affect its outcome. When virtual Computer Generated Forces (CGF) are introduced into the simulation, these intervisibilities must be calculated by the virtual entities on the battlefield, but as the simulation size increases so does the complexity of calculating visibility between entities. This thesis presents an algorithm for performing these visibility calculations using Graphical Processing Units (GPU) instead of the Central Processing Units (CPU) traditionally used in CGF simulations. The algorithm can be distributed across multiple GPUs in a cluster, and its scalability exceeds that of CGF-based algorithms. The poor correlation between the two visibility algorithms is demonstrated, showing that the GPU algorithm provides a necessary condition for a "Fair Fight" when paired with visual simulations.
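The abstract does not detail the algorithm itself. As a minimal CPU sketch of the core intervisibility test, one can sample the terrain height along the sight line between two entities and check that no sample rises above the line. The heightmap lookup, eye height, and sample count below are illustrative assumptions; the thesis's contribution concerns mapping this kind of test onto GPUs at scale.

```python
from typing import List, Tuple

def terrain_height(heightmap: List[List[float]], x: float, y: float) -> float:
    """Nearest-cell lookup; a real implementation would interpolate."""
    return heightmap[int(round(y))][int(round(x))]

def intervisible(heightmap: List[List[float]],
                 a: Tuple[float, float], b: Tuple[float, float],
                 eye: float = 2.0, samples: int = 32) -> bool:
    """True if the sight line between the two (eye-elevated) entities clears
    the terrain at every interior sample point."""
    za = terrain_height(heightmap, *a) + eye
    zb = terrain_height(heightmap, *b) + eye
    for i in range(1, samples):
        t = i / samples
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        z_line = za + t * (zb - za)      # height of the sight line at t
        if terrain_height(heightmap, x, y) > z_line:
            return False                 # terrain blocks the ray
    return True
```

On a GPU, the per-sample comparisons for many entity pairs are independent, which is what makes the problem well suited to the massively parallel formulation the thesis proposes.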
15

Development of an ETL-Pipeline for Automatic Sustainability Data Analysis

Janmontree, Jettarat, Mehta, Aditya, Zadek, Hartmut 14 June 2023 (has links)
As the scientific community and organizations increase their investments in sustainable development, the term is increasingly being used deceptively. To be sustainable, one must examine all three aspects, namely environmental, social, and economic. The release of sustainability reports has generated a vast amount of data regarding company sustainability practices, and this data demands time and effort to evaluate and to extract meaningful information from. This research aims to create criteria, including a list of keywords, for analyzing sustainability reports. Using these criteria, an application based on the concepts of Extract, Transform, Load (ETL) was developed to automate the process of data analysis. The results generated by the ETL tool can be used to conduct qualitative and quantitative assessments of an organization’s sustainability practices, as well as to compare the transparency of sustainability reporting across different industries.
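The paper's actual keyword criteria are not listed in the abstract. A minimal ETL sketch under assumed keywords might look as follows: extract reads the raw report text, transform counts keyword hits per sustainability dimension, and load flattens the counts into rows for a database or CSV sink. The keyword lists are invented placeholders, not the authors' criteria.

```python
import re
from collections import Counter

# Hypothetical keyword criteria per sustainability dimension (illustrative only).
CRITERIA = {
    "environmental": ["emission", "recycling", "renewable", "biodiversity"],
    "social": ["diversity", "safety", "community", "human rights"],
    "economic": ["revenue", "supply chain", "governance", "investment"],
}

def extract(path: str) -> str:
    """Extract: read the raw report text (PDF parsing omitted for brevity)."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def transform(text: str) -> dict:
    """Transform: count keyword occurrences per sustainability dimension."""
    lowered = text.lower()
    return {dim: Counter({kw: len(re.findall(re.escape(kw), lowered)) for kw in kws})
            for dim, kws in CRITERIA.items()}

def load(scores: dict) -> list:
    """Load: flatten into (dimension, keyword, count) rows for a data sink."""
    return [(dim, kw, n) for dim, c in scores.items() for kw, n in c.items()]
```

The per-dimension counts support the kind of qualitative and quantitative comparison across reports that the paper describes.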
16

Conception d’architecture de système-de-systèmes à logiciel prépondérant dirigée par les missions / Mission-driven Software-intensive System-of-Systems Architecture Design

Ferreira silva, Eduardo 17 December 2018 (has links)
The formulation of missions is the starting point for the development of systems-of-systems, being used as a basis for the specification, verification and validation of system-of-systems architectures. Elaborating architectural models for systems-of-systems is a complex activity, this complexity resting especially on emergent behaviors, that is, behaviors arising from the interactions among the constituent parts of a system-of-systems that cannot be predicted even if all the behaviors of all the constituent systems are known. This thesis addresses the synergetic link between mission and architecture in the context of software-intensive systems-of-systems, paying particular attention to the emergent behaviors created to achieve the formulated missions. We thus propose an approach for the design of system-of-systems architectures driven by the mission model. In our approach, the mission model serves to derive and validate system-of-systems architectures. First, we generate the structure of the architecture using model transformations. Then, when the architect specifies the behavioral aspects, the resulting architecture description is validated using a joint process comprising both the verification of specified properties and the validation by simulation of emergent behaviors. Formalization in terms of temporal logic and statistical model checking are the formal foundations of the approach. A tool implementing the whole approach has also been developed and evaluated experimentally. / The formulation of missions is the starting point to the development of Systems-of-Systems (SoS), being used as a basis for the specification, verification and validation of SoS architectures.
Specifying, verifying and validating architectural models for SoS are complex tasks compared to usual systems, the inner complexity of SoS resting especially on emergent behaviors, i.e. features that emerge from the interactions among constituent parts of the SoS and cannot be predicted even if all the behaviors of all parts are completely known. This thesis addresses the synergetic relationship between missions and architectures of software-intensive SoS, giving special attention to emergent behaviors created to achieve formulated missions. We propose a design approach for the architectural modeling of SoS driven by mission models. In our proposal, the mission model is used to derive, verify and validate SoS architectures. As a first step, we define a formalized mission model; then we generate the structure of the SoS architecture by applying model transformations. Later, when the architect specifies the behavioral aspects of the SoS, we generate concrete SoS architectures that are verified and validated using simulation-based approaches, in particular regarding emergent behaviors. The verification uses statistical model checking to determine whether specified properties are satisfied within a degree of confidence. Formalization in terms of a temporal logic and statistical model checking are the formal foundations of the developed approach. A toolset implementing the whole approach was also developed and evaluated experimentally.
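Statistical model checking, the verification technique named above, estimates the probability that a property holds by checking it on many independently simulated traces rather than exhaustively exploring the state space. The toy constituent-system model and the bounded "eventually" property below are illustrative assumptions, not the thesis's models.

```python
import random

def smc_estimate(simulate, prop, n_runs=2000, seed=42):
    """Estimate P(property holds) by checking a bounded temporal property
    on n_runs independently simulated traces."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_runs) if prop(simulate(rng)))
    return hits / n_runs

def relay_trace(rng, steps=10):
    """Toy constituent-system model (an illustrative assumption): at each step
    a message is relayed successfully with probability 0.9."""
    return [rng.random() < 0.9 for _ in range(steps)]

def eventually_delivered(trace):
    """Bounded 'eventually': at least one successful relay within the trace."""
    return any(trace)

p = smc_estimate(relay_trace, eventually_delivered)
```

The number of runs controls the confidence interval around the estimate, which is the "degree of confidence" referred to in the abstract.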
17

A comparative theoretical and empirical analysis of three methods for workplace studies

Sellberg, Charlott January 2011 (has links)
Workplace studies in Human-Computer Interaction (HCI) is a research field that has expanded explosively in recent years. Today there is a wide range of theoretical approaches and methods to choose from, which makes methodological choices problematic both in research and in system design. While several studies have assessed the different approaches to workplace studies, there seems to be a lack of studies exploring the theoretical and methodological differences between the more structured methods within the field. In this thesis, a comparative theoretical and empirical analysis of three methods for workplace studies is conducted to address the following research problem: what level of theoretical depth and methodological structure is appropriate when applying methods for workplace studies to inform the design of complex socio-technical systems? Using the two criteria of descriptive power and application power to assess Contextual Design (CD), Determining Information Flow Breakdown (DIB), and Capturing Semi-Automated Decision-Making (CASADEMA), important lessons are learned about which methods are acceptable and useful when the purpose is to inform system design.
18

Automated Debugging and Bug Fixing Solutions : A Systematic Literature Review and Classification

Shafiq, Hafiz Adnan, Arshad, Zaki January 2013 (has links)
Context: Bug fixing is the process of ensuring correct source code and is done by the developer. Automated debugging and bug fixing solutions minimize human intervention and hence minimize the chance of introducing new bugs into the corrected program. Scope and objectives: In this study we performed a detailed systematic literature review. The scope of the work is to identify all solutions that correct software automatically or semi-automatically. Solutions for automatic correction of software need no human intervention, while semi-automatic solutions assist a developer in fixing a bug. We aim to gather all such solutions for fixing bugs in design, i.e., code, UML design, algorithms and software architecture. Automated detection, isolation and localization of bugs are not in our scope. Moreover, we are only concerned with software bugs, excluding the hardware and networking domains. Methods: A detailed systematic literature review (SLR) has been performed. A number of bibliographic sources were searched, including Inspec, IEEE Xplore, the ACM Digital Library, Scopus, SpringerLink and Google Scholar. Inclusion/exclusion, study quality assessment, data extraction and synthesis were performed in depth according to the guidelines for performing SLRs. Grounded theory was used to analyze the literature data, and Kappa analysis was used to check the level of agreement between the two researchers. Results: Through the SLR we identified 46 techniques, classified into automated and semi-automated debugging and bug fixing. The strengths and weaknesses of each are identified, along with the types of bugs each can fix and the languages in which each can be implemented. Finally, a classification is performed that generates a list of approaches, techniques, tools, frameworks, methods and systems. Alongside this classification and categorization, we separated bug fixing from debugging on the basis of the search algorithms used.
Conclusion: The result is a catalogue of the automated and semi-automated debugging and bug fixing solutions available in the literature. The strengths/benefits and weaknesses/limitations of these solutions are identified, together with the types of bugs that can be fixed using them and the programming languages in which they can be implemented. Finally, a detailed classification is performed.
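Cohen's kappa, the agreement measure behind the Kappa analysis mentioned above, corrects the observed agreement between two raters for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch over hypothetical include/exclude screening decisions:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical decisions.

    p_o is the observed proportion of agreement; p_e is the agreement
    expected by chance from each rater's marginal category frequencies."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

In an SLR, each sequence would hold one rater's include/exclude decision per candidate paper; kappa near 1 indicates strong agreement, while kappa near 0 indicates agreement no better than chance.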
19

Comparing Geomorphometric Pattern Recognition Methods for Semi-Automated Landform Mapping

Hassan, Wael January 2020 (has links)
No description available.
