231

An Analysis of Smartphone Camera and Digital Camera Images Captured by Adolescents Ages Fifteen to Seventeen

Fatimi, Safia January 2021 (has links)
We have become increasingly dependent on our smartphones, using them for entertainment, navigation, shopping, and staying connected, among other tasks. For many people, especially adolescents, the smartphone camera has replaced a dedicated digital camera. With advances in smartphone technology, it has become increasingly difficult to distinguish smartphone photographs from digital camera photographs. To date there is little research on the differences between photographs taken by smartphone and digital cameras, particularly among adolescents, who are avid photographers. This study used a qualitative, task-based research method to investigate differences in photographs taken by adolescents using both types of cameras. Twenty-three adolescents ages 15 to 17 attending a regularly scheduled high school photography class participated in the study. The students were invited to capture a typical day in their life, first using their digital camera or smartphone camera and then switching to the other type of camera. Data were collected through written reflections, student interviews, and the participants' photographs. The three data sources were coded, analyzed, and triangulated. Results suggest that, for these particular participants, only marginal differences exist between the photographs taken with a smartphone camera and a digital camera. Analysis also suggests minimal differences across the specific categories of focus, color balance, and thoughtfully captured images. The study concludes that teenagers ultimately use whatever capture device is available to them, suggesting that it is the photographer who controls the quality of a photograph, not the capturing device.
Educational implications of the study focus on the use of technology in the art classroom, and suggestions are offered for photographic curricula based on the results of this study. In addition, an examination of different pedagogical styles, such as reciprocal and remote teaching and learning models, finds them particularly appropriate in supporting photography education for adolescents.
232

Extreme Ultraviolet Spectral Streak Camera

Szilagyi, John Michael 01 January 2010 (has links)
The recent development of extreme ultraviolet (EUV) sources has increased the need for diagnostic tools and has opened up a previously limited portion of the spectrum. With ultrafast laser systems and spectroscopy moving to shorter timescales and wavelengths, the need for nanosecond-scale imaging of EUV is increasing. EUV's high absorption, due to the many atomic resonances in this spectral region, has limited the number of imaging options. Currently EUV is imaged with photodiodes and X-ray CCDs; however, photodiodes can only resolve intensity with respect to time, and X-ray CCDs are limited to temporal resolution in the microsecond range. This work shows a novel approach to imaging EUV light on a nanosecond time scale, using an EUV scintillator to convert EUV to visible light imaged by a conventional streak camera. A laser-produced plasma with a mass-limited tin-based target provided EUV light, which was imaged by a grazing-incidence flat-field spectrometer onto a Ce:YAG scintillator. The EUV spectrum (5 nm-20 nm) provided by the spectrometer is filtered by a zirconium filter and then converted by the scintillator to visible light (550 nm), which can be imaged with conventional optics. The visible light was imaged by an electron-image-tube-based streak camera, which converts the visible light image to an electron image using a photocathode and sweeps the image across a recording medium. The streak camera also provides amplification and gating of the image, by means of a microchannel plate within the image tube, to compensate for low EUV intensities. The system provides 42 ns streaked images with a temporal resolution of 440 ps at a repetition rate of 1 Hz. Upon calibration, the EUV streak camera developed in this work will be used in future EUV development.
233

Are modern smart cameras vulnerable to yesterday’s vulnerabilities? : A security evaluation of a smart home camera / Undviker dagens smarta kameror gårdagens sårbarheter? : Utvärdering av säkerheten hos en smart hemkamera

Larsson, Jesper January 2021 (has links)
IoT cameras allow users to monitor their space remotely, but consumers are worried about the security implications. Their worries are not unfounded, as vulnerabilities have repeatedly been found in internet-connected cameras. Have modern cameras learned from the mistakes of their predecessors? This thesis performed a case study of a consumer smart camera popular on the Swedish market. The camera was evaluated through a penetration test. The evaluation found that the camera's cloud-centric design allowed it to sidestep issues present in earlier models. However, it was demonstrated that potentially sensitive events, e.g. when the camera notices motion, can be detected simply by inspecting the amount of traffic the camera sends. The other tests were not able to demonstrate vulnerabilities. Based on these findings, it was concluded that the camera is more secure than its predecessors, which supports the claim that the market has improved.
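The traffic-volume side channel described in the abstract can be illustrated with a small, hypothetical sketch (my own illustration, not the thesis's tooling): even with payloads encrypted, a motion-triggered video upload shows up as a burst against the camera's idle baseline. All numbers below are invented for illustration.

```python
def flag_bursts(bytes_per_second, window=30, factor=3.0):
    """Return the indices (seconds) where traffic exceeds `factor` times
    the trailing-window average: a crude burst detector."""
    flagged = []
    for i, b in enumerate(bytes_per_second):
        baseline = bytes_per_second[max(0, i - window):i]
        if baseline:
            avg = sum(baseline) / len(baseline)
            if avg > 0 and b > factor * avg:
                flagged.append(i)
    return flagged

# Idle chatter around 1 kB/s, then a 5-second motion-triggered upload burst.
trace = [1000] * 60 + [50000] * 5 + [1000] * 10
print(flag_bursts(trace))  # -> [60, 61, 62, 63, 64]
```

An adversary observing only these volumes learns when something happened in front of the camera, even without decrypting a single packet, which is why the thesis treats this as a leak of potentially sensitive information.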
234

DECENTRALIZED SUBOPTIMAL CONTROL OF INDUSTRIAL MANIPULATORS BY A COMPUTER VISION SYSTEM.

Watts, Russell Charles. January 1983 (has links)
No description available.
235

CMOS Active Pixel Sensors for Digital Cameras: Current State-of-the-Art

Palakodety, Atmaram 05 1900 (has links)
Image sensors play a vital role in many image sensing and capture applications. Among the various types of image sensors, complementary metal oxide semiconductor (CMOS) active pixel sensors (APS) are characterized by reduced pixel size, fast readout, and reduced noise. APS are used in mobile cameras, digital cameras, webcams, and many other consumer, commercial, and scientific applications. With these developments, CMOS APS designs are challenging the older, mature technology of charge-coupled device (CCD) sensors. With continuous improvements in APS architectures and pixel designs, along with the development of nanometer CMOS fabrication technologies, APS are being optimized for optical sensing. In addition, APS offer very low-power and low-voltage operation and are suitable for monolithic integration, allowing manufacturers to integrate more functionality on the array and build a low-cost camera-on-a-chip. In this thesis, I explore the current state-of-the-art of CMOS APS by examining various types of APS. I show design and simulation results for one of the most commonly used APS in consumer applications, the photodiode-based APS. I also present an approach for scaling the devices in a photodiode APS to present-day CMOS technologies. Finally, I survey the most modern CMOS APS technologies by reviewing different design models. The design of the photodiode APS is implemented using commercial CAD tools.
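As a rough illustration of how a photodiode APS pixel behaves, here is a first-order model I am supplying (not a design from the thesis): the photodiode is reset to a high voltage, photocurrent discharges the pixel capacitance during the integration time, and a source follower buffers the result onto the column line. All component values are invented for illustration, not taken from any datasheet.

```python
def pixel_output(photocurrent_a, t_int_s, c_pd_f=10e-15,
                 v_reset=3.3, sf_gain=0.85):
    """Voltage at a 3T photodiode APS pixel output after integration.

    photocurrent_a: photodiode current in amperes
    t_int_s:        integration time in seconds
    c_pd_f:         photodiode capacitance in farads (illustrative)
    """
    dv = photocurrent_a * t_int_s / c_pd_f  # discharge on the photodiode node
    v_pd = max(v_reset - dv, 0.0)           # node cannot swing below ground
    return sf_gain * v_pd                   # source-follower attenuation

dark = pixel_output(photocurrent_a=5e-16, t_int_s=10e-3)    # near-dark pixel
bright = pixel_output(photocurrent_a=2e-12, t_int_s=10e-3)  # brightly lit
print(round(dark, 3), round(bright, 3))  # -> 2.805 1.105
```

The model also shows why APS readout is fast and low-power: each pixel only needs a reset transistor, the photodiode, and a buffer, and the signal is a voltage that can be sampled directly, rather than charge that must be shifted across the whole array as in a CCD.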
236

An F/2 Focal Reducer For The 60-Inch U.S. Naval Observatory Telescope

Meinel, Aden B., Wilkerson, Gary W. 28 February 1968 (has links)
QC 351 A7 no. 07 / The Meinel Reducing Camera for the U.S. Naval Observatory's 60-inch telescope at Flagstaff, Arizona, comprises an f/10 collimator designed by Meinel and Wilkerson and a Leica 50-mm f/2 Summicron camera lens. The collimator consists of a thick, 5-inch field lens located close to the focal plane of the telescope, plus four additional elements extending toward the camera. The collimator has an effective focal length (efl) of 10 inches, yielding a 1-inch exit pupil that coincides with the camera's entrance pupil, 1.558 inches beyond the final surface of the collimator. There is room between the facing lenses of the collimator and camera to place filters and a grating; the collimated light at this location is the best possible situation for interference filters. Problems in the collimator design work included astigmatism, due to the stop being so far outside the collimator, and field curvature. Two computer programs were used in developing the collimator design. Initial work, begun in 1964, was with the University of Rochester's ORDEALS program (the first time the authors had used such a program) and continued through July 1965. Development was subsequently continued and completed with the Los Alamos Scientific Laboratory's program, LASL. The final design, completed January 24, 1966, was evaluated with ORDEALS. This project gave a good opportunity to compare ORDEALS, an "aberration" program, with LASL, a "ray deviation" program. LASL was felt to be the superior program in this case, and some experimental runs beginning with flat slabs of glass indicated that it could have been used for the entire development of the collimator. Calculated optical performance of the design indicated that the reducing camera should be "seeing limited" for most work. Some astigmatism was apparent, but the amount did not prove harmful in actual astronomical use. 
After the final design was arrived at, minor changes were made to accommodate the actual glass indices of the final melt, and later to accommodate slight changes in the radii and thicknesses of the elements as fabricated. An additional small change in the spacing between two of the elements was made at the observatory after the reducing camera had been in use for a short time. The fabricated camera is working according to expectations; photographs are included in the report to illustrate its performance and utility.
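The pupil and reduction figures quoted in the abstract can be checked with a line of arithmetic: the exit pupil diameter is the collimator efl divided by its working f-number (10 in / 10 = 1 in), and pairing the f/10 collimator with an f/2 camera gives a focal reduction of 10/2 = 5x, which is what makes this a "reducing" camera. A tiny sketch using only the values stated above:

```python
# Values quoted in the abstract for the Meinel Reducing Camera.
collimator_efl_in = 10.0     # collimator effective focal length, inches
collimator_f_number = 10.0   # collimator works in the telescope's f/10 beam
camera_f_number = 2.0        # Leica 50-mm f/2 Summicron camera lens

# Pupil diameter = efl / f-number; focal reduction = f/10 -> f/2.
exit_pupil_in = collimator_efl_in / collimator_f_number
focal_reduction = collimator_f_number / camera_f_number

print(exit_pupil_in, focal_reduction)  # -> 1.0 5.0
```

The 1-inch exit pupil matching the camera's entrance pupil is exactly the condition that lets all the collimated light reach the f/2 lens without vignetting.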
237

SPECIFICATIONS FOR THE CASSEGRAIN INSTRUMENTS INCLUDING THE CASSEGRAIN OBSERVING PLATFORM, STEWARD OBSERVATORY 90-INCH TELESCOPE

Bok, B. J., Fitch, W. S., Hilliard, R. L., Meinel, Aden B., Taylor, D. J., White, R. E. 02 1900 (has links)
QC 351 A7 no. 16 / This document has been prepared to form the basis for the operational specifications for the Cassegrain instrumentation for the 90-inch telescope of the Steward Observatory. The publication of this document is for the purpose of providing guidance to other astronomical groups who may have use for the considerations recorded herein.
238

Analyse d’information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l’analyse de la marche / Analysis of three-dimensional information from multi-camera systems for fall detection and gait analysis

Auvinet, Edouard 11 1900 (has links)
Completed under joint supervision (cotutelle) with the M2S laboratory of Rennes 2 / This thesis is concerned with defining new clinical investigation methods to assess the impact of ageing on motricity. In particular, it focuses on two main possible disturbances that come with ageing: falls and gait impairment. These two motor disturbances remain poorly understood, and their clinical analysis presents real scientific and technological challenges. In this thesis, we propose novel measuring methods usable in everyday life or in the walking clinic, with a minimum of technical constraints. In the first part, we address the problem of fall detection at home, which has been widely studied in recent years. In particular, we propose an approach that exploits the subject's volume, reconstructed from multiple calibrated cameras. Such methods are generally very sensitive to the occlusions that inevitably occur in a home, and we therefore propose an original approach that is much more robust to them. Efficiency and real-time operation were validated on more than two dozen videos of falls and decoy events, with results approaching 100% sensitivity and specificity when four or more cameras are used. In the second part, we go further in exploiting the reconstructed volume of a person during a particular motor task, treadmill walking, in a clinical diagnostic setting. Here we analyze the quality of walking more specifically. To this end, we develop the use of depth cameras to quantify the spatial asymmetry of lower-limb movement during walking. After detecting each step in time, the method compares the surface of each leg with its symmetric counterpart from the opposite step. Validation on a cohort of 20 subjects shows the viability of the approach.
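The leg-surface comparison can be sketched roughly as follows (a simplified illustration under my own assumptions, not the thesis's implementation): depth maps of the two legs at matched phases of opposite steps are compared after mirroring one of them across the sagittal plane, and the mean absolute depth difference serves as an asymmetry index, with 0 meaning a perfectly symmetric gait. Real data would come from a calibrated depth camera; the arrays here are illustrative.

```python
def asymmetry_index(left_depth, right_depth):
    """Mean absolute difference between the left-leg depth surface and the
    horizontally mirrored right-leg surface (same-sized 2D grids)."""
    total, count = 0.0, 0
    for row_l, row_r in zip(left_depth, right_depth):
        mirrored = row_r[::-1]  # mirror across the sagittal plane
        for a, b in zip(row_l, mirrored):
            total += abs(a - b)
            count += 1
    return total / count

# Toy 2x2 depth grids (meters) for matched phases of opposite steps.
left = [[1.00, 1.02], [1.01, 1.03]]
right = [[1.02, 1.00], [1.07, 1.01]]  # mirrored, one slightly deeper patch
print(round(asymmetry_index(left, right), 3))  # -> 0.01
```

Because the comparison is between surfaces rather than tracked joint markers, it needs no markers on the patient, which is in line with the thesis's goal of imposing a minimum of technical constraints in the clinic.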
239

THE ROLE OF PROCEDURAL JUSTICE WITHIN POLICE-CITIZEN CONTACTS IN EXPLAINING CITIZEN BEHAVIORS AND OTHER OUTCOMES

Mell, Shana M 01 January 2016 (has links)
American policing is shaped by an array of challenges. Police are expected to address crime and engage the community, and they are held to higher expectations of accountability, effectiveness, and efficiency than ever before. Police legitimacy is the ability of the police to exercise their authority in the course of maintaining order, resolving conflicts, and solving problems (PERF, 2014). The procedural justice and police legitimacy literature suggests that by exhibiting procedurally just behaviors within police-citizen encounters, officers come to be considered legitimate by the public (PERF, 2014; Tyler, 2004; Tyler & Jackson, 2012). This study examines procedural justice through systematic observation of police-citizen encounters recorded by body-worn cameras in one mid-Atlantic police agency. The four elements of procedural justice (participation, neutrality, dignity and respect, and trustworthiness) are assessed to examine police behavior and its outcomes. The research questions concern how police acting in procedurally just ways may influence citizen behaviors. Descriptive statistics indicate high levels of procedural justice. Regression analyses suggest that procedural justice may predict positive citizen behaviors within police-citizen encounters. This study highlights the significance of procedural justice as an antecedent to police legitimacy and offers a new mode of observation: body-worn camera footage.
240

Renderização interativa de câmeras virtuais a partir da integração de múltiplas câmeras esparsas por meio de homografias e decomposições planares da cena / Interactive virtual camera rendering from multiple sparse cameras using homographies and planar scene decompositions

Silva, Jeferson Rodrigues da 10 February 2010 (has links)
Image-based rendering techniques allow the synthesis of novel scene views from a set of images of the scene acquired from different viewpoints. By extending these techniques to videos, we can allow navigation in time and space through a scene acquired by multiple cameras. In this work, we tackle the problem of generating novel photorealistic views of dynamic scenes, containing independently moving objects, from videos acquired by multiple cameras with different viewpoints. The challenges include fusing images from multiple cameras while minimizing the brightness and color differences between them, detecting and extracting the moving objects, and rendering novel views that combine a static scene model with approximate models of the moving objects. It is also important to generate novel views at interactive frame rates, allowing a user to navigate naturally through the rendered scene. Applications of these techniques are diverse and include entertainment, such as interactive digital television that lets the viewer choose the viewpoint while watching movies or sports events, and virtual-reality training simulations, where realistic scenes reconstructed from real scenes are important. We present a color-calibration algorithm that minimizes the color and brightness differences between images acquired from cameras whose colors were not calibrated. We also describe a method for interactive novel-view rendering of dynamic scenes that produces novel views with quality similar to that of the scene videos.
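One simple baseline for the color-calibration problem described above (a common textbook approach, not necessarily the algorithm developed in the thesis) is to estimate a per-channel multiplicative gain that matches each camera's mean channel intensities to those of a reference camera over corresponding image regions. All pixel values here are illustrative.

```python
def channel_gains(reference_pixels, other_pixels):
    """Per-channel gains mapping `other` toward `reference`.
    Pixels are (r, g, b) tuples sampled from corresponding regions
    seen by both cameras."""
    gains = []
    for c in range(3):
        ref_mean = sum(p[c] for p in reference_pixels) / len(reference_pixels)
        oth_mean = sum(p[c] for p in other_pixels) / len(other_pixels)
        gains.append(ref_mean / oth_mean)
    return gains

ref = [(100, 120, 90), (110, 130, 95)]
oth = [(80, 120, 100), (90, 130, 110)]  # darker red channel, bluish cast
g = channel_gains(ref, oth)
corrected = [tuple(v * gi for v, gi in zip(p, g)) for p in oth]
print([round(x, 1) for x in g])  # -> [1.2, 1.0, 0.9]
```

Applying such gains before fusing the camera images reduces visible seams where views overlap; more elaborate schemes fit full linear color transforms rather than independent per-channel gains.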
