561.
Improvement to Highway Safety through Network Level Friction Testing and Cost Effective Pavement Maintenance. Abd El Halim, Amir, Omar. January 2010 (has links)
Pavements represent a significant component of the total civil infrastructure investment. In Ontario, the Ministry of Transportation (MTO) is responsible for the maintenance and construction of approximately 39,000 lane-kilometres of highway. In 2004, the province estimated the value of the total highway system at $39 billion. Managing this asset is therefore important to ensure a high level of service to the traveling public. One of the most important indicators of level of service for a road network is safety. Each year, thousands of motorists across North America are involved in motor vehicle collisions, which result in property damage, congestion, delays, injuries and fatalities. The MTO estimated that in 2002, vehicle collisions in Ontario cost nearly $11 billion.
Despite the importance of highway safety, it is usually not considered explicitly in the pavement management framework or in maintenance analysis. A number of agencies across North America collect skid data to assess the level of safety at both the project and network level (Li et al., 2004). However, many transportation agencies still do not collect friction data as part of their regular pavement data collection programs, owing both to liability concerns and to a lack of knowledge about how these data can be used effectively to improve safety. The transportation industry generally relies on information such as collision rates, black-spot locations and radius of curvature to evaluate the level of safety of an alignment (Lamm et al., 1999). These are important factors, but using complementary skid data in an organized, proactive manner would also be beneficial.
In preparation for a Long Term Area Maintenance Contract under consideration, the MTO initiated a project to collect network level friction data across three regions in the Province of Ontario; this project represents the first time friction data was collected at the network level in Ontario. In 2006, approximately 1,800 km of the MTO highway network was surveyed as part of this study. This research used the network level skid data together with collision data to examine the relationships and model the impacts of skid resistance on the level of safety. Despite its value, many Canadian transportation agencies still do not collect network level skid data because of the costs and the potential liability associated with the collected data.
The safety of highway networks is usually assessed using various level of service indicators such as the wet-to-dry accident ratio (W/D), surface friction (SN), or the collision rate (CR). This research focused on developing a framework for assessing the level of safety of a highway network in terms of the risk of collision based on pavement surface friction. The developed safety framework can be used by transportation agencies (federal, state, provincial, municipal, etc.) or the private sector to evaluate the safety of their highway networks and to determine the risk, or probability, of a collision occurring given the level of friction along the pavement section of interest. As part of the analysis, a number of factors were investigated, including region, season of the year, environmental conditions, road surface condition, collision severity, visibility and roadway location. Statistical analysis and modeling were performed to develop relationships relating the total number of collisions, or the collision rate (CR), to the level of available pavement friction on a highway section. These models were developed using over 1,200 collisions and skid test results from two regions in the Province of Ontario. Another component of this study examined the wet-to-dry accident ratio and compared it to the skid number. A number of transportation agencies rely on the wet-to-dry accident ratio to identify potential locations with poor skid resistance. The results of the comparison further demonstrated the need for, and importance of, collecting network level skid data.
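A relationship of this kind can be illustrated with a short sketch. The data points and the exponential model form below are assumptions chosen for illustration only; they are not the thesis's dataset or its fitted model.

```python
import numpy as np

# Illustrative (synthetic) data: skid number (SN) vs. collision rate (CR)
# for hypothetical highway sections; not the thesis's actual dataset.
sn = np.array([25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0])
cr = np.array([2.1, 1.7, 1.4, 1.1, 0.9, 0.8, 0.7])  # collisions per unit exposure

# Fit an exponential decay CR = a * exp(b * SN) via a linear fit in log space.
b, log_a = np.polyfit(sn, np.log(cr), 1)
a = np.exp(log_a)

def predicted_cr(skid_number):
    """Predicted collision rate at a given skid number (illustrative model)."""
    return a * np.exp(b * skid_number)

# Higher available friction should predict a lower collision rate.
print(predicted_cr(30.0) > predicted_cr(50.0))  # -> True
```

A model of this shape captures the qualitative finding that collision rate falls as available friction rises; any real analysis would of course be fit to the observed collision and skid data.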
Another component of this study was to evaluate the effectiveness of various preservation treatments used within the Long Term Pavement Performance (LTPP) study. In addition, modeling was performed which examined the historical friction trends over time within various environment zones across North America to investigate skid resistance deterioration trends. The results of the analysis demonstrated that commonly used preservation treatments can increase skid resistance and improve safety.
The cost effectiveness of implementing preservation and maintenance to increase the level of safety of a highway using Life Cycle Cost Analysis (LCCA) was evaluated. A Decision Making Framework was developed which included the formulation of a Decision Matrix that can be used to assist in selecting a preservation treatment for a given condition. The results of this analysis demonstrate the savings generated by reducing the number of collisions as a result of increasing skid resistance.
The results of this research study have demonstrated the importance of network level friction testing and the impact of skid resistance on the level of safety of a highway. A review of the literature did not reveal any protocol or procedures for sampling or minimum test interval requirements for network level skid testing using a locked-wheel tester. Network level friction testing can be characterized as expensive and time-consuming due to the complexity of the test. As a result, any reduction in the required number of test points is a benefit to the transportation agency, private sector (consultants and contractors) and most importantly, the public. An analysis approach was developed and tested that can be used to minimize the number of required test locations along a highway segment using common statistical techniques.
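The idea of minimizing the number of required test locations can be sketched with a standard sample-size calculation. The normal-approximation formula and the numbers below are assumptions for illustration, not the statistical procedure actually developed in the thesis.

```python
import math

def min_test_points(sigma, margin, z=1.96):
    """Minimum number of friction test locations needed to estimate the mean
    skid number of a highway segment to within +/- margin at ~95% confidence,
    using the textbook normal-approximation formula n = ceil((z*sigma/margin)^2)."""
    return math.ceil((z * sigma / margin) ** 2)

# Example: SN standard deviation of 4 units, +/- 2 SN units desired precision.
print(min_test_points(sigma=4.0, margin=2.0))  # -> 16
# Halving the margin roughly quadruples the required number of test points.
print(min_test_points(sigma=4.0, margin=1.0))  # -> 62
```

The practical point is the quadratic trade-off: modest relaxations of the required precision sharply reduce the number of expensive locked-wheel tests.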
562.
Collision detection methods using parallel computing. Šiukščius, Martynas. 26 August 2013 (has links)
Collision detection is a well-studied and active research field whose central problem is to determine whether one or more objects collide with each other in 3D virtual space. Collision detection is an issue affecting many different fields of study, including computer animation, physics-based simulation, robotics, video games and haptic applications. There is a wide variety of collision detection algorithms, of which spatial subdivision, octree-based methods, and sort and sweep are three examples.
In this document we provide a short summary of collision detection algorithms, with the main focus on analyzing and improving their performance on the CPU (central processing unit) and the GPU (graphics processing unit) separately by making use of CUDA (Compute Unified Device Architecture) technology. CUDA, developed by Nvidia, enables the use of the graphics processor for general-purpose computation. The main goal of this research is pursued by analyzing implemented spatial subdivision, octree, and sort and sweep algorithms. The analysis covers general performance, parallelization performance, GPU memory consumption, and the various factors affecting running time. The experiments showed that offloading the computations to the GPU yields up to a 200-fold performance increase for the studied algorithms compared with running them on the CPU. At the end of the document, the advantages of parallel programming applied to this subject are discussed.
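As a sketch of one of the named broad-phase methods, a minimal sort-and-sweep over axis-aligned bounding boxes might look like the following (plain CPU Python; the thesis's CUDA implementations are not reproduced here):

```python
def sort_and_sweep(aabbs):
    """Broad-phase collision candidates via sort-and-sweep on the x axis.
    aabbs: list of (min_xyz, max_xyz) tuples; returns index pairs whose
    boxes overlap on all three axes."""
    # Sort box indices by their minimum x coordinate.
    order = sorted(range(len(aabbs)), key=lambda i: aabbs[i][0][0])
    pairs = []
    active = []  # boxes whose x-interval may still overlap upcoming boxes
    for i in order:
        min_i, max_i = aabbs[i]
        # Drop boxes that ended before this one starts on x.
        active = [j for j in active if aabbs[j][1][0] >= min_i[0]]
        for j in active:
            min_j, max_j = aabbs[j]
            # Full AABB overlap test on the remaining axes.
            if all(min_i[k] <= max_j[k] and min_j[k] <= max_i[k] for k in (1, 2)):
                pairs.append(tuple(sorted((i, j))))
        active.append(i)
    return pairs

boxes = [((0, 0, 0), (2, 2, 2)), ((1, 1, 1), (3, 3, 3)), ((5, 0, 0), (6, 1, 1))]
print(sort_and_sweep(boxes))  # -> [(0, 1)]
```

The sort along one axis is what makes the method attractive for GPUs: parallel sorting primitives are fast, and the subsequent sweep touches only boxes whose projections actually overlap.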
563.
A case study determining the relevance of motor body repairs focusing on niche markets outside the insurance industry, to establish a position of competitive advantage. Winter, Brett. January 2002 (has links)
When one thinks of motor vehicle accident damage repairs, one often thinks of unscrupulous operators and a scurrilous industry. While this is regrettably often the case, there is a counterpoint: the significant number of motor body repair firms that have invested heavily in establishing accredited and certified motor body repair outlets and that offer a premium service. The industry is regulated by the South African Motor Body Repair Association, a body that seeks to set a standard of repairs by tying membership eligibility to investment in equipment. Unfortunately, this stipulation does not adequately take into account the flow of work that may come from the motor vehicle insurance industry, and many repairers find themselves resorting to nefarious means to ensure that business comes their way. The author of this report is a co-owner of an advanced major structural motor body repairer. Rather than stooping to unethical practices, the owners undertook a position appraisal and gap analysis with the intention of uncovering the strategic alternatives available to their firm. The firm has implemented the strategic choices highlighted by this report to good effect and has enjoyed enhanced revenue streams and business competitiveness as a result. This report documents the highlights of that process. / Thesis (MBA)-University of Natal, Durban, 2002.
564.
Adaptive Bounding Volume Hierarchies for Efficient Collision Queries. Larsson, Thomas. January 2009 (has links)
The need for efficient interference detection frequently arises in computer graphics, robotics, virtual prototyping, surgery simulation, computer games, and visualization. To prevent bodies passing directly through each other, the simulation system must be able to track touching or intersecting geometric primitives. In interactive simulations, in which millions of geometric primitives may be involved, highly efficient collision detection algorithms are necessary. For these reasons, new adaptive collision detection algorithms for rigid and different types of deformable polygon meshes are proposed in this thesis. The solutions are based on adaptive bounding volume hierarchies. For deformable body simulation, different refit and reconstruction schemes to efficiently update the hierarchies as the models deform are presented. These methods permit the models to change their entire shape at every time step of the simulation. The types of deformable models considered are (i) polygon meshes that are deformed by arbitrary vertex repositioning, but with the mesh topology preserved, (ii) models deformed by linear morphing of a fixed number of reference meshes, and (iii) models undergoing completely unstructured relative motion among the geometric primitives. For rigid body simulation, a novel type of bounding volume, the slab cut ball, is introduced, which improves the culling efficiency of the data structure significantly at a low storage cost. Furthermore, a solution for even tighter fitting heterogeneous hierarchies is outlined, including novel intersection tests between spheres and boxes as well as ellipsoids and boxes. The results from the practical experiments indicate that significant speedups can be achieved by using these new methods for collision queries as well as for ray shooting in complex deforming scenes.
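A minimal sketch of the refit idea for deformable meshes follows, assuming a simple dict-based tree with axis-aligned bounding boxes. The thesis's actual refit and reconstruction schemes, and the slab cut ball volume, are considerably more elaborate than this.

```python
def refit_bvh(node):
    """Bottom-up refit of an AABB hierarchy after vertex repositioning.
    Leaves hold 'triangles' (lists of three (x, y, z) points); internal
    nodes hold 'children'. Stores and returns (min_corner, max_corner)."""
    if 'triangles' in node:
        # Leaf: gather the (possibly moved) vertices of its primitives.
        pts = [p for tri in node['triangles'] for p in tri]
    else:
        # Internal node: merge the refitted boxes of the children.
        pts = []
        for child in node['children']:
            lo, hi = refit_bvh(child)
            pts.extend([lo, hi])
    node['aabb'] = (tuple(min(p[k] for p in pts) for k in range(3)),
                    tuple(max(p[k] for p in pts) for k in range(3)))
    return node['aabb']

leaf_a = {'triangles': [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]]}
leaf_b = {'triangles': [[(2, 2, 2), (3, 2, 2), (2, 3, 2)]]}
root = {'children': [leaf_a, leaf_b]}
print(refit_bvh(root))  # -> ((0, 0, 0), (3, 3, 2))
```

Because refitting only tightens boxes around primitives that already belong to each node, it runs in linear time per frame, which is why refit (rather than full rebuild) is attractive when the mesh topology is preserved under deformation.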
565.
Geometric operators for motion planning. Himmelstein, Jesse. 19 September 2008 (has links) (PDF)
Motion planning is increasingly used in industrial settings. Whether for programming robots on the factory floor or for computing the assembly motion of a mechanical part, planning with probabilistic algorithms is particularly effective at solving problems that are complex and difficult for a human operator. This CIFRE thesis, carried out in collaboration between the LAAS-CNRS research laboratory and the start-up Kineo CAM, addresses motion planning problems in the digital factory. We identified three areas of interest to industrial partners and contribute to each of them: collision detection, swept volume, and motion in collision. Collision detection is a critical operator for analyzing digital mock-ups. Motion planning algorithms invoke this operator so often that it is a performance bottleneck, which is why a wide variety of algorithms exist, each specialized for particular geometry types. This diversity of solutions makes it difficult to integrate several geometry types into the same architecture. We propose an algorithmic structure that brings heterogeneous geometric types together to perform proximity tests between them. This architecture separates an algorithmic core, common to spatial-subdivision approaches, from tests specialized for a given pair of geometric primitives. New data types can thus be added easily without penalizing performance. Our approach is validated on a humanoid robot navigating an unknown environment using vision. The swept volume, in turn, is used to visualize the extent of a motion, whether the vibration of an engine or the gesture of a virtual manikin.
The most innovative approach in the literature relies on the power of graphics hardware to approximate the swept volume very quickly. It is, however, limited to a single input object, which must itself describe a closed volume. To adapt this algorithm to the digital design context, we modify its behavior to handle "polygon soups" as well as discontinuous trajectories, and we demonstrate its efficiency on disassembly motions for parts with large polygon counts. A polygon soup is harder to manipulate than a well-formed volume. Computing the swept volume introduces dilation and erosion operators on discretized objects. Erosion can serve other motion planning applications provided the object's topology is preserved during the transformation. To preserve it, we define a skeleton computation that maintains topological equivalence; keeping the skeleton, we use the erosion operator to search for the narrow passages of difficult motion planning problems. Finally, we address the problem of motion planning in collision. This apparent contradiction expresses the ability to allow a bounded amount of collision during path search, which makes it possible to solve some very hard assembly problems. For example, when computing disassembly sequences it can be useful to allow "obstacle parts" such as screws to move during planning. Moreover, by allowing collision we can solve forced-passage problems, which often arise in digital mock-ups where some parts are "flexible", or where the goal is to identify the "least bad" trajectory when no collision-free path exists.
This work makes several contributions applicable to digital design for industrial robotics. We try to combine a scientific approach with strict functional requirements so as to better serve users of digital design tools, and throughout the manuscript we aim to lay out the advantages and drawbacks of our approaches.
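As a rough illustration of the swept-volume idea, one can bound the volume swept by a moving point cloud by sampling poses along the trajectory. This coarse AABB bound is an assumption-laden stand-in for the GPU-based swept-volume approximation described in the abstract, not the thesis's algorithm.

```python
import numpy as np

def swept_aabb(points, poses):
    """Coarse swept-volume bound: the axis-aligned box enclosing a point
    cloud transformed through a sampled trajectory. poses is a list of
    (R, t) pairs with R a 3x3 rotation matrix and t a translation vector."""
    all_pts = np.vstack([points @ R.T + t for R, t in poses])
    return all_pts.min(axis=0), all_pts.max(axis=0)

# A unit cube translated 2 units along x sweeps (at this sampling) a 3x1x1 box.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([2.0, 0.0, 0.0]))]
lo, hi = swept_aabb(cube, poses)
print(lo.tolist(), hi.tolist())  # -> [0.0, 0.0, 0.0] [3.0, 1.0, 1.0]
```

A real swept-volume computation must of course capture the shape between samples, not just a bounding box of sampled poses; this sketch only shows how sampled trajectories enter the computation.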
566.
Stabilization and regulation of mobile robots operating in groups. El Kamel, Mohamed Anouar. 30 May 2012 (has links) (PDF)
For control systems of the form dx/dt = f(x, u), the literature has addressed stabilization in various senses: asymptotic, uniformly asymptotic, partial, finite-time, and so on. The methods used to obtain these results rely on techniques such as Lyapunov functions, LaSalle's invariance principle, Barbalat's lemma, and sliding surfaces. In this thesis we study another role for the control input, called repulsive stabilizing control. The results are generalized to systems both with and without drift. The proposed control approach guarantees the stability of the system around a desired position together with its repulsion from an undesirable set constructed in the navigation space. Our theoretical results apply to any application of this form, for instance ground and aerial navigation in a poorly known or unknown environment. The proposed control also preserves inter-agent communication links once they have been planned. As an application, we considered the model of a unicycle-type wheeled vehicle, without taking orientation into account (the nonholonomic case), in environments containing one or more obstacles. Unlike results in the literature, which rely on variable-structure control for obstacle avoidance, the repulsive-stabilizing control we obtain is continuous over the navigation space. The second part of this thesis addresses the stability of a formation of agents (a multi-vehicle system) evolving in a hostile environment while preserving communication between the agents. To achieve the formation, the decentralization of the control across agents is made robust through communication graphs, which follow from the formation's strategy and objectives.
Our stability results were rigorously implemented in a simulator built in Matlab.
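The flavor of a continuous control that both stabilizes to a goal and repels from an undesirable set can be sketched with a classic potential-field law for a single-integrator robot. The law, the gains, and the scenario below are illustrative assumptions, not the controller developed in the thesis.

```python
import numpy as np

def repulsive_stabilizing_control(x, goal, obstacle, k_att=1.0, k_rep=1.0, rho0=0.5):
    """Continuous control u = -grad(potential) for a single-integrator robot:
    attraction toward the goal plus repulsion that activates only within
    distance rho0 of the obstacle. Illustrative sketch only."""
    u = -k_att * (x - goal)                      # attractive term
    d = np.linalg.norm(x - obstacle)
    if 0.0 < d < rho0:
        # Repulsion grows without bound as the robot nears the obstacle.
        u += k_rep * (1.0 / d - 1.0 / rho0) / d ** 2 * (x - obstacle) / d
    return u

# Euler simulation: the robot steers around the obstacle and reaches the goal.
x = np.array([2.0, 0.2])
goal, obstacle = np.zeros(2), np.array([1.0, 0.0])
min_d = float('inf')
for _ in range(5000):
    x = x + 0.01 * repulsive_stabilizing_control(x, goal, obstacle)
    min_d = min(min_d, np.linalg.norm(x - obstacle))
print(np.linalg.norm(x - goal) < 0.1)  # converged near the goal
print(min_d > 0.2)                      # never came close to the obstacle
```

Note that the control field is continuous everywhere the repulsion radius is not crossed discontinuously, in the spirit of the thesis's contrast with variable-structure avoidance schemes, though the precise continuity properties of the thesis's law are not reproduced here.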
567.
Nonadiabatic quantum molecular dynamics with hopping, II. Role of nuclear quantum effects in atomic collisions. Fischer, Michael; Handt, Jan; Schmidt, Rüdiger. 09 September 2014 (has links) (PDF)
An extension of the nonadiabatic quantum molecular dynamics approach is presented to account for electron-nuclear correlations in the dynamics of atomic many-body systems. The method combines electron dynamics described within time-dependent density-functional or Hartree-Fock theory with trajectory-surface-hopping dynamics for the nuclei, allowing us to take into account explicitly a possible external laser field. As a case study, a model system of H$^+$+H collisions is considered where full quantum-mechanical calculations are available for comparison. For this benchmark system the extended surface-hopping scheme exactly reproduces the full quantum results. Future applications are briefly outlined.
568.
Comparison of dilepton events in simulation and $pp$ collision data at $\sqrt{s} = 8\,\mathrm{TeV}$ gathered by the ATLAS detector at the LHC. Isacson, Max. January 2014 (has links)
This thesis presents the results of a comparison between collision data and simulations based on Monte Carlo methods. The experimental dataset consists of $20.3\,\mathrm{fb}^{-1}$ of proton-proton collision data at $\sqrt{s} = 8\,\mathrm{TeV}$ collected during 2012 by the ATLAS experiment located at the Large Hadron Collider. The final state used is $e\mu + \mathrm{jets}$. Four regions are defined: pretag ($\geq0$ jets, $\geq0$ $b$-jets), $\geq1$-tag ($\geq1$ jets, $\geq1$ $b$-jets), $\geq2$-jet ($\geq2$ jets, $\geq0$ $b$-jets), and 2-tag ($\geq2$ jets, 2 $b$-jets). Data and simulations are consistent in all regions considered.
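The region definitions can be expressed as a simple classifier; the function and the region labels below are illustrative assumptions, not analysis code from the thesis. Note that the regions overlap by construction: every event falls in the pretag region.

```python
def classify_regions(n_jets, n_btags):
    """Assign an e-mu event to the (overlapping) analysis regions by its
    jet and b-tag multiplicities."""
    regions = ['pretag']            # >=0 jets, >=0 b-jets: all events
    if n_btags >= 1:
        regions.append('ge1-tag')   # >=1 jet, >=1 b-jet
    if n_jets >= 2:
        regions.append('ge2-jet')   # >=2 jets, >=0 b-jets
    if n_jets >= 2 and n_btags == 2:
        regions.append('2-tag')     # >=2 jets, exactly 2 b-jets
    return regions

print(classify_regions(n_jets=3, n_btags=2))
# -> ['pretag', 'ge1-tag', 'ge2-jet', '2-tag']
```

Overlapping regions of this kind let progressively tighter selections be compared against the same inclusive baseline.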
569.
Multi-antenna physical layer models for wireless network design. Shekhar, Hemabh. 15 January 2008 (links)
In this thesis, PHY abstraction models (CMs) of linear and non-linear multiple antenna receivers, in particular the linear minimum mean squared error (LMMSE) receiver and LMMSE with decision feedback (LMMSE-DF), are developed. To develop these CMs, a simple analytical expression for the distribution of the post-processing signal-to-interference-plus-noise ratio (SINR) of an LMMSE receiver is first derived. This expression is then used to develop SINR- and ABER-based CMs. However, analytical forms of these CMs are derived only for the following scenarios: (i) any number of receive antennas with three users having arbitrary received powers, and (ii) a two-antenna receiver with an arbitrary number of equal received power users. For all other scenarios a semi-analytical CM is used.
The PHY abstractions, or CMs, are next used in the evaluation of a random access cellular network and an ad hoc network. An analytical model of the random access cellular network is developed using the SINR- and ABER-based CMs of the LMMSE receiver, and the impact of receiver processing is measured in terms of throughput. In this case, the random access mechanism is modeled as a single-channel S-Aloha channel access scheme. Another analytical model is developed for single- and multi-packet reception in multi-channel S-Aloha channel access. An "ideal" receiver is modeled in this case, i.e. the packet(s) are successfully received as long as the total number of colliding packets is not greater than the number of antennas. Throughput and delay are used as performance metrics to study the impact of different PHY designs.
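The "ideal" multi-packet-reception rule described above can be sketched with a small Monte Carlo simulation of multi-channel slotted Aloha. All parameter values below are illustrative assumptions, and the thesis's analytical models are not reproduced.

```python
import random

def aloha_throughput(n_users, p_tx, n_channels, n_antennas, n_slots=20000, seed=1):
    """Monte Carlo throughput (packets/slot) of multi-channel slotted Aloha
    with the 'ideal' multi-packet-reception receiver: every packet on a
    channel succeeds when at most n_antennas packets collide there."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(n_slots):
        load = [0] * n_channels
        for _ in range(n_users):
            if rng.random() < p_tx:            # user transmits this slot
                load[rng.randrange(n_channels)] += 1
        delivered += sum(k for k in load if k <= n_antennas)
    return delivered / n_slots

# Adding a receive antenna raises throughput via multi-packet reception.
t1 = aloha_throughput(20, 0.3, n_channels=2, n_antennas=1)
t2 = aloha_throughput(20, 0.3, n_channels=2, n_antennas=2)
print(t1 < t2)  # -> True
```

The simulation makes the qualitative point directly: under the same offered load, a receiver that resolves up to its antenna count of simultaneous packets delivers strictly more traffic than a single-packet receiver.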
Finally, the SINR-based semi-analytical CMs of LMMSE and LMMSE-DF are used to evaluate the performance of multi-hop ad hoc networks. Throughput is used as the performance evaluation metric. A novel MAC, called S-MAC, is proposed and its performance is compared against another MAC for wireless networks, called CSMA/CA(k).
570.
Theoretical study of atomic processes and dynamics in ultracold plasmas. Balaraman, Gouthaman S. 17 November 2008 (links)
In the last decade, ultracold plasmas have been created in the laboratory by photo-ionizing laser-cooled atoms. To understand the overall dynamics of ultracold plasmas, one needs to understand Rydberg collisional processes at ultracold temperatures. Two kinds of problems are addressed in this thesis: the study of Rydberg atomic processes at ultracold temperatures, and the study of the overall dynamics of the ultracold plasmas.
Theoretical methods based on the quantal-classical correspondence are used to understand Rydberg atomic processes such as radiative cascade and radiative recombination. A simulation method suitable for ultracold collisions is developed and tested, and then applied to study collisional Stark mixing in Rydberg atoms.
To study the dynamics of ultracold plasmas, a King model for the electrons in the plasma is proposed. The King model is a stationary, finite-sized distribution for the electrons in a cloud of fixed ions with a Gaussian distribution. A Monte Carlo method is developed to simulate the overall dynamics of the King distribution.
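A toy version of Monte Carlo sampling from a King-like truncated distribution can be sketched with rejection sampling. The one-dimensional energy density below is an illustrative assumption, a stand-in for the full phase-space electron distribution used in the thesis.

```python
import math
import random

def sample_king_energy(w0, rng):
    """Rejection-sample a dimensionless energy e from a King-like truncated
    density, f(e) proportional to exp(-e) - exp(-w0) for 0 <= e < w0 and
    zero beyond the cutoff w0 (a truncated Maxwellian shape)."""
    f_max = 1.0 - math.exp(-w0)  # the density is maximal at e = 0
    while True:
        e = rng.uniform(0.0, w0)
        if rng.uniform(0.0, f_max) < math.exp(-e) - math.exp(-w0):
            return e

rng = random.Random(7)
energies = [sample_king_energy(5.0, rng) for _ in range(5000)]
print(max(energies) < 5.0)                   # the cutoff truncates the tail
print(sum(energies) / len(energies) < 2.0)   # mass concentrates at low energy
```

The hard energy cutoff is the defining feature of King-type models: unlike a Maxwellian, the distribution has a finite escape energy, which is what makes the sampled electron cloud finite in size.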