11

Detecting Object Position Using Sensor Data

CHAMAKURA, VENKATA NAGA KRISHNA VAMSI, MALLULA, VAMSHHI January 2022 (has links)
This report deals with detecting object position using sensor data from three different types of sensors held at the four corners of a plate of given dimensions: two URM09 Analog sensors, one URM09 Ultrasonic sensor, and one VL6180X ToF sensor. The accuracy of the sensors' performance is investigated using the relative standard deviation. The results show that the proposed solution allows the object's position and size to be estimated without significant error.
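
As a concrete illustration of the accuracy measure used above, the short Python sketch below computes the relative standard deviation of repeated distance readings from a single sensor; the readings are hypothetical and not taken from the report.

```python
import statistics

def relative_std_dev(readings):
    """Relative standard deviation (RSD) in percent: 100 * stdev / mean."""
    return 100.0 * statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical repeated distance readings (cm) from one sensor toward a fixed object.
readings_cm = [30.1, 29.8, 30.4, 30.0, 29.9]
print(f"RSD: {relative_std_dev(readings_cm):.2f} %")
```

A lower RSD across repeated measurements indicates a more repeatable reading at that object position.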
12

The Accuracy of River Bed Sediment Samples

Petrie, John Eric 19 January 1999 (has links)
One of the most important factors that influences a stream's hydraulic and ecological health is the streambed's sediment size distribution. This distribution affects streambed stability, sediment transport rates, and flood levels by defining the roughness of the stream channel. Adverse effects on water quality and wildlife can be expected when excessive fine sediments enter a stream. Many chemicals and toxic materials are transported through streams by binding to fine sediments. Increases in fine sediments also seriously impact the survival of fish species present in the stream. Fine sediments fill tiny spaces between larger particles thereby denying fish embryos the necessary fresh water to survive. Reforestation, constructed wetlands, and slope stabilization are a few management practices typically utilized to reduce the amount of sediment entering a stream. To effectively gauge the success of these techniques, the sediment size distribution of the stream must be monitored. Gravel bed streams are typically stratified vertically, in terms of particle size, in three layers, with each layer having its own distinct grain size distribution. The top two layers of the stream bed, the pavement and subpavement, are the most significant in determining the characteristics of the stream. These top two layers are only as thick as the largest particle size contained within each layer. This vertical stratification by particle size makes it difficult to characterize the grain size distribution of the surface layer. The traditional bulk or volume sampling procedure removes a specified volume of material from the stream bed. However, if the bed exhibits vertical stratification, the volume sample will mix different populations, resulting in inaccurate sample results. To obtain accurate results for the pavement size distribution, a surface oriented sampling technique must be employed. The most common types of surface oriented sampling are grid and areal sampling. Due to limitations in the sampling techniques, grid samples typically truncate the sample at the finer grain sizes, while areal samples typically truncate the sample at the coarser grain sizes. When combined with an analysis technique, either frequency-by-number or frequency-by-weight, the sample results can be represented in terms of a cumulative grain size distribution. However, the results of different sampling and analysis procedures can lead to biased results, which are not equivalent to traditional volume sampling results. Different conversions, dependent on both the sampling and analysis technique, are employed to remove the bias from surface sample results. The topic of the present study is to determine the accuracy of sediment samples obtained by the different sampling techniques. Knowing the accuracy of a sample is imperative if the sample results are to be meaningful. Different methods are discussed for placing confidence intervals on grid sample results based on statistical distributions. The binomial distribution and its approximation with the normal distribution have been suggested for these confidence intervals in previous studies. In this study, the use of the multinomial distribution for these confidence intervals is also explored. The multinomial distribution seems to best represent the grid sampling process. Based on analyses of the different distributions, recommendations are made. Additionally, figures are given to estimate the grid sample size necessary to achieve a required accuracy for each distribution. 
This type of sample size determination figure is extremely useful when preparing for grid sampling in the field. Accuracy and sample size determination for areal and volume samples present difficulties not encountered with grid sampling. The variability in number of particles contained in the sample coupled with the wide range of particle sizes present make direct statistical analysis impossible. Limited studies have been reported on the necessary volume to sample for gravel deposits. The majority of these studies make recommendations based on empirical results that may not be applicable to different size distributions. Even fewer studies have been published that address the issue of areal sample size. However, using grid sample results as a basis, a technique is presented to estimate the necessary sizes for areal and volume samples. These areal and volume sample sizes are designed to match the accuracy of the original grid sample for a specified grain size percentile of interest. Obtaining grid and areal results with the same accuracy can be useful when considering hybrid samples. A hybrid sample represents a combination of grid and areal sample results that give a final grain size distribution curve that is not truncated. Laboratory experiments were performed on synthetic stream beds to test these theories. The synthetic stream beds were created using both glass beads and natural sediments. Reducing sampling errors and obtaining accurate samples in the field are also briefly discussed. Additionally, recommendations are also made for using the most efficient sampling technique to achieve the required accuracy. / Master of Science
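
To make the statistical machinery concrete, here is a minimal Python sketch of the binomial confidence interval (via its normal approximation) that the abstract mentions for grid-sample proportions, together with the corresponding sample-size estimate; the numbers are invented for illustration, and the thesis's multinomial analysis is not reproduced here.

```python
import math

def grid_sample_ci(n, p_hat, z=1.96):
    """Normal-approximation (binomial) 95% confidence interval for the proportion
    of grid-sampled particles finer than a chosen size class."""
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

def grid_sample_size(p_hat, half_width, z=1.96):
    """Number of grid points needed so the CI half-width does not exceed `half_width`."""
    return math.ceil((z / half_width) ** 2 * p_hat * (1.0 - p_hat))

# Example: 100 grid points, 50 % of particles finer than the size of interest.
low, high = grid_sample_ci(n=100, p_hat=0.5)
print(f"95% CI on the proportion: {low:.2f} to {high:.2f}")
print("grid points for a +/- 0.05 half-width:", grid_sample_size(p_hat=0.5, half_width=0.05))
```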
13

Dealing with Network Partitions and Mergers in Structured Overlay Networks

Shafaat, Tallat Mahmood January 2009 (has links)
Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Although the problem of network partitions and mergers is highly related to fault-tolerance and self-management in large-scale systems, it has hardly been studied in the context of structured peer-to-peer systems. These systems have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this thesis, we present a number of research papers representing our work on handling network partitions and mergers in structured overlay networks. The contribution of this thesis is threefold. First, we provide a solution for merging ring-based structured overlays. Our solution is tunable, by a fanout parameter, to achieve a trade-off between message and time complexity. Second, we provide a network size estimation algorithm for ring-based structured overlays. We believe that an estimate of the current network size can be used for tuning overlay parameters that change according to the network size, for instance the fanout parameter in our merger solution. Third, we extend our work from fixing routing anomalies to achieving data consistency. We argue that decreasing lookup inconsistencies on the routing level aids in achieving data consistency in applications built on top of overlays. We study the frequency of occurrence of lookup inconsistencies and discuss solutions to decrease the effect of lookup inconsistencies.
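
The abstract does not detail the estimation algorithm, but as a rough sketch of one common family of estimators for ring-based overlays, a node can infer the network size from the identifier gaps to its known successors. The code below is a generic illustration under the assumption of uniformly random node IDs, not the algorithm proposed in the thesis.

```python
def estimate_ring_size(node_id, successor_ids, id_space=2**32):
    """Estimate the total node count from identifier gaps to known successors.

    Assumes node IDs are drawn uniformly at random on the ring, so the average
    gap between consecutive nodes is roughly id_space / N.
    """
    # Clockwise distance from this node to its farthest known successor.
    span = (successor_ids[-1] - node_id) % id_space
    mean_gap = span / len(successor_ids)
    return round(id_space / mean_gap)

# Hypothetical node with three known successors on a 32-bit identifier ring.
print(estimate_ring_size(node_id=100, successor_ids=[1_500_000, 2_900_000, 4_200_000]))
```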
14

A Comparision Of Object Oriented Size Evaluation Techniques

Sirakaya, Hatice Sinem 01 January 2003 (has links) (PDF)
Popular Object Oriented size metrics and estimation methods are examined. A case study is conducted. Five of the methods ("LOC", "OOPS", "Use Case Points Method", "J. Kammelar's Sizing Approach" and "Mark II FP") are applied to a project whose requirements are defined by means of use cases. Size and effort estimations are made and compared with the actual results of the project.
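
For readers unfamiliar with one of the methods named above, the sketch below shows the standard Use Case Points calculation (Karner's weights and adjustment factors); the example project data are invented and the thesis's actual case-study figures are not reproduced.

```python
def use_case_points(actors, use_cases, tfactor, efactor):
    """Karner-style Use Case Points: (UAW + UUCW) * TCF * ECF."""
    actor_weights = {"simple": 1, "average": 2, "complex": 3}
    case_weights = {"simple": 5, "average": 10, "complex": 15}
    uaw = sum(actor_weights[a] for a in actors)        # unadjusted actor weight
    uucw = sum(case_weights[u] for u in use_cases)     # unadjusted use case weight
    tcf = 0.6 + 0.01 * tfactor                         # technical complexity factor
    ecf = 1.4 - 0.03 * efactor                         # environmental complexity factor
    return (uaw + uucw) * tcf * ecf

# Hypothetical project: 3 simple actors and 4 average-complexity use cases.
print(use_case_points(["simple"] * 3, ["average"] * 4, tfactor=30, efactor=15))
```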
15

Package size estimation using mobile devices

Gildebrand, Anton January 2021 (has links)
In the last fifteen years, the use of smartphones has exploded, and almost everyone in the Nordic countries owns a smartphone that they use for everyday matters. With the rising popularity of smartphones, and not least their technical development, the number of applications for them continues to increase. One area in which smartphones can be used is virtual reality (VR), and as this area has become more popular, the technology behind VR has become more and more sophisticated. Nowadays many smartphones are equipped with multiple cameras and LiDAR sensors that the device can use to create a virtual model of the physical environment. In this project, different methods of using this virtual model to estimate the size of physical packages were evaluated, with the aim of adding package-measurement functionality to the PostNord consumer app for purchasing postage.
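
One simple way to turn such a virtual model into a package size estimate is to take the dimensions of the axis-aligned bounding box of the reconstructed point cloud; the sketch below illustrates this idea with an invented point cloud and is not necessarily one of the methods evaluated in the thesis.

```python
import numpy as np

def bounding_box_dimensions(points):
    """Width, depth and height of the axis-aligned bounding box of a 3-D point cloud."""
    points = np.asarray(points, dtype=float)          # shape (N, 3), metres
    return points.max(axis=0) - points.min(axis=0)

# Hypothetical points sampled from the surfaces of a package (metres).
cloud = [[0.00, 0.00, 0.00],
         [0.30, 0.00, 0.00],
         [0.30, 0.20, 0.00],
         [0.00, 0.20, 0.15],
         [0.30, 0.20, 0.15]]
print(bounding_box_dimensions(cloud))                 # -> [0.3  0.2  0.15]
```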
16

Mission Concept for a Satellite Mission to Test Special Relativity

Anadol, Volkan January 2016 (has links)
In 1905 Albert Einstein developed the theory of Special Relativity. This theory describes the relation between space and time and revolutionized the understanding of the universe. While the concept is generally accepted, new experimental setups are constantly being developed to challenge the theory, but so far no contradictions have been found. One of the postulates on which Einstein's theory of relativity is based states that the speed of light in vacuum is the highest possible velocity. Furthermore, it is demanded that the speed of light is independent of any chosen frame of reference. If an experiment were to find a contradiction of these demands, the theory as such would have to be revised. To challenge the constancy of the speed of light, the so-called Kennedy-Thorndike experiment has been developed. A possible setup to conduct a Kennedy-Thorndike experiment consists of comparing two independent clocks, and such experiments have been executed in laboratory environments. Within the scope of this work, the orbital requirements for the first space-based Kennedy-Thorndike experiment, called BOOST, will be investigated. BOOST consists of an iodine clock, which serves as a time reference, and an optical cavity, which serves as a length reference. The mechanisms of the two clocks are different and can therefore be employed to investigate possible deviations in the speed of light. While similar experiments have been performed on Earth, space offers many advantages for the setup. First, one orbit takes roughly 90 min for a satellite-based experiment. In comparison with the 24 h duration on Earth, it is obvious that a space-based experiment offers higher statistics. Second, the optical clock stability has to be maintained only for shorter periods, increasing the sensitivity. Third, the velocity of the experimental setup is larger. This results in an increased experiment accuracy, since any deviation in the speed of light would increase with increasing orbital velocity. A satellite placed in a Low Earth Orbit (LEO) travels with a velocity of roughly 7 km/s; establishing an Earth-bound experiment that travels with a constant velocity of that order is impossible. Finally, space offers a very quiet environment in which disturbances such as vibrations, which are practically unavoidable in a laboratory environment, do not act upon the experiment. This thesis includes two main chapters. The chapter titled "Mission Level" explores orbital candidates. Here, possible orbits are explained in detail and the associated advantages and problems are investigated. It also contains a discussion about ground visibility and downlink feasibility for each option. Finally, a nominal mission scenario is sketched. The other chapter is called "Sub-Systems". Within this chapter the subsystems of the spacecraft are examined. To examine the possible orbits it is necessary to define criteria according to which the quality of the orbits can be determined. The first criterion reflects upon the scientific outcome of the mission; this is mainly governed by the achievable velocity and the orbital geometry. The second criterion discriminates according to the mission costs, which include the launch, orbital injection, de-orbiting, satellite development, and orbital maintenance. The final criterion defines the requirements in terms of mission feasibility and risks, e.g. radiation. The criteria definition is followed by an explanation of the mission objectives and requirements. Each requirement is then discussed in terms of feasibility.
The most important parameters, such as altitude, inclination, and the right ascension of the ascending node (RAAN), are discussed for each orbital option and an optimal range is picked. The optimal altitude depends on several factors, such as the decay rate, radiation concerns, experimental contributions, and eclipse duration. For the presented mission an altitude of 600 km seems to be the best fit. Alongside the optimal altitude, possible de-orbiting scenarios are investigated. It is concluded that de-orbiting of the satellite is possible without any further external influence; thus, no additional thrusters are required to de-orbit the satellite. The de-orbiting scenario has been simulated with Systems Tool Kit (STK). From the simulation it can be concluded that the satellite can be de-orbited within 25 years, which meets the requirements set for the mission. Another very important parameter is the cumulative eclipse duration per year for a given orbit. For this calculation it is necessary to know the relative positions and motion of the Earth and the Sun, from which the eclipse duration per orbit for different altitudes is obtained. Ground visibilities for the orbital options are examined for two possible ground stations. The theory is based on the geometrical relation between the satellite and the ground stations, and the results are in agreement with the related STK simulations. Both ground stations are found adequate to maintain the necessary contact with the satellite. In the trade-off section, orbit candidates are examined in more detail. Results from the previous sections, together with some additional issues such as the experiment sensitivities, radiation concerns and thermal stability, are discussed to conclude which candidate is the best for the mission. As a result of the trade-off, two scenarios are explained in the "Nominal Mission Scenario" section, which covers a baseline scenario and a secondary scenario. After selecting a baseline orbit, two sub-systems of the satellite are examined. The "Attitude Control System (ACS)" section addresses the question of which attitude control method is more suitable for the mission. A trade-off between two common control methods, 3-axis stabilization and spin stabilization, is made. To make the trade-off possible, external disturbances in space are estimated for two notional satellite bodies. It is concluded that maintaining the attitude with spin stabilization is not feasible; thus, the ACS should be built on 3-axis stabilization. As the second sub-system, the possible power system of the satellite is examined. The total size and weight of the solar arrays are estimated for two different power loads. Then the battery capacity sufficient for the power budget is estimated, together with the total mass of the batteries. In the last section, conclusions of the thesis work are drawn and possible future work for the BOOST mission is stated.
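
The quoted figures of roughly 7 km/s and a roughly 90-minute orbit follow directly from the circular-orbit relations; the sketch below evaluates them for the 600 km baseline altitude mentioned in the abstract, using standard constants rather than values from the thesis.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def circular_orbit(altitude_m):
    """Velocity (m/s) and period (s) of a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    velocity = math.sqrt(MU_EARTH / r)
    period = 2.0 * math.pi * math.sqrt(r**3 / MU_EARTH)
    return velocity, period

v, t = circular_orbit(600_000.0)   # the 600 km baseline altitude
print(f"v = {v / 1000:.2f} km/s, period = {t / 60:.1f} min")
```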
17

Identification of Function Points in Software Specifications Using Natural Language Processing / Identification des points de fonction dans les spécifications logicielles à l'aide du traitement automatique des langues

Asadullah, Munshi 28 September 2015 (has links)
La nécessité d'estimer la taille d’un logiciel pour pouvoir en estimer le coût et l’effort nécessaire à son développement est une conséquence de l'utilisation croissante des logiciels dans presque toutes les activités humaines. De plus, la nature compétitive de l’industrie du développement logiciel rend courante l’utilisation d’estimations précises de leur taille, au plus tôt dans le processus de développement. Traditionnellement, l’estimation de la taille des logiciels était accomplie a posteriori à partir de diverses mesures appliquées au code source. Cependant, avec la prise de conscience, par la communauté de l’ingénierie logicielle, que l’estimation de la taille du code est une donnée cruciale pour la maîtrise du développement et des coûts, l’estimation anticipée de la taille des logiciels est devenue une préoccupation répandue. Une fois le code écrit, l’estimation de sa taille et de son coût permettent d'effectuer des études contrastives et éventuellement de contrôler la productivité. D’autre part, les bénéfices apportés par l'estimation de la taille sont d'autant plus grands que cette estimation est effectuée tôt pendant le développement. En outre, si l’estimation de la taille peut être effectuée périodiquement au fur et à mesure de la progression de la conception et du développement, elle peut fournir des informations précieuses aux gestionnaires du projet pour suivre au mieux la progression du développement et affiner en conséquence l'allocation des ressources. Notre recherche se positionne autour des mesures d’estimation de la taille fonctionnelle, couramment appelées Analyse des Points de Fonctions, qui permettent d’estimer la taille d’un logiciel à partir des fonctionnalités qu’il doit fournir à l’utilisateur final, exprimées uniquement selon son point de vue, en excluant en particulier toute considération propre au développement. Un problème significatif de l'utilisation des points de fonction est le besoin d'avoir recours à des experts humains pour effectuer la quotation selon un ensemble de règles de comptage. Le processus d'estimation représente donc une charge de travail conséquente et un coût important. D'autre part, le fait que les règles de comptage des points de fonction impliquent nécessairement une part d'interprétation humaine introduit un facteur d'imprécision dans les estimations et rend plus difficile la reproductibilité des mesures. Actuellement, le processus d'estimation est entièrement manuel et contraint les experts humains à lire en détails l'intégralité des spécifications, une tâche longue et fastidieuse. Nous proposons de fournir aux experts humains une aide automatique dans le processus d'estimation, en identifiant dans le texte des spécifications, les endroits les plus à même de contenir des points de fonction. Cette aide automatique devrait permettre une réduction significative du temps de lecture et de réduire le coût de l'estimation, sans perte de précision. Enfin, l’identification non ambiguë des points de fonction permettra de faciliter et d'améliorer la reproductibilité des mesures. À notre connaissance, les travaux présentés dans cette thèse sont les premiers à se baser uniquement sur l’analyse du contenu textuel des spécifications, applicable dès la mise à disposition des spécifications préliminaires et en se basant sur une approche générique reposant sur des pratiques établies d'analyse automatique du langage naturel. 
/ The necessity to estimate the size of a software system, and thereby the probable cost and effort of its development, is a direct outcome of the increasing need for complex and large software in almost every conceivable situation. Furthermore, due to the competitive nature of the software development industry, reliance on accurate size estimation at early stages of software development is becoming commonplace. Traditionally, software size estimation was performed a posteriori from the resulting source code, and several metrics were in use for the task. However, along with the understanding of the importance of code size estimation in the software engineering community, early-stage software size estimation became a mainstream concern. Once the code has been written, size and cost estimation primarily provide contrastive studies and possibly productivity monitoring. On the other hand, if size estimation can be performed at an early development stage (the earlier the better), the benefits are virtually endless. The most important goals of the financial and management aspects of software development, namely development cost and effort estimation, can then be pursued even before the first line of code is conceived. Furthermore, if size estimation can be performed periodically as the design and development progress, it can provide valuable information to project managers in terms of progress, resource allocation and expectation management. This research focuses on functional size estimation metrics, commonly known as Function Point Analysis (FPA), which estimate the size of a software system in terms of the functionalities it is expected to deliver from a user's point of view. One significant problem with FPA is the requirement of human counters, who need to follow a set of standard counting rules, making the process labour and cost intensive (the process is called Function Point Counting and the professionals are called analysts or counters). Moreover, these rules are on many occasions open to interpretation and thus often produce inconsistent counts. Furthermore, the process is entirely manual and requires Function Point (FP) counters to read large specification documents, making it a rather slow process. Some level of automation in the process can make a significant difference to the current counting practice. Accurately automating the identification of FPs in a document will at least reduce the reading required of the counters, making the process faster and thus significantly reducing the cost. Moreover, consistent identification of FPs will allow the production of consistent raw function point counts. To the best of our knowledge, the work presented in this thesis is a unique attempt to analyse specification documents from the early stages of software development, using a generic approach adapted from well-established Natural Language Processing (NLP) practices.
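
As a toy illustration of how language processing might pre-select text for a function point counter, the sketch below flags sentences whose main verb suggests a transactional function. This keyword heuristic is an assumption for illustration only and is far simpler than the approach developed in the thesis.

```python
import re

# Verbs that often signal transactional functions (inputs, outputs, inquiries) in FPA.
TRANSACTION_VERBS = {"create", "add", "update", "modify", "delete",
                     "display", "report", "print", "query", "retrieve", "send"}

def candidate_sentences(specification_text):
    """Return sentences likely to describe a transactional function (rough heuristic)."""
    sentences = re.split(r"(?<=[.!?])\s+", specification_text)
    return [s for s in sentences
            if any(verb in s.lower().split() for verb in TRANSACTION_VERBS)]

spec = ("The clerk shall create a new customer record. "
        "The system stores data in a relational database. "
        "The manager can display monthly sales reports.")
for sentence in candidate_sentences(spec):
    print("candidate:", sentence)
```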
19

Estimation of grain sizes in a river through UAV-based SfM photogrammetry

Wong, Tyler 10 November 2022 (has links)
No description available.
20

A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA

Senteney, Michael H. January 2020 (has links)
No description available.
