211

Supervision Styles in Probation and Parole: An Analysis of Activities

Seiter, Richard P., West, Angela D. 01 January 2003
Supervision of offenders in the community remains a critical component of the correctional process in the United States. Although almost six million offenders are under correctional supervision in the community, relatively little attention and few resources have been devoted to the style and quality of supervision these offenders receive. Because research on the style of probation and parole supervision is scarce, there is a need to identify and quantify casework and surveillance styles of supervision. This article describes a research project that identifies the key functions of parole and probation officers, reports self- and peer-ratings on a casework-to-surveillance continuum, and establishes an instrument that can be used to create baseline information on how probation and parole officers spend their time and whether the functions they perform are casework, surveillance, or a balance of the two.
212

Epidemiology of US High School Sports-Related Fractures, 2005-2009

Swenson, David M., Yard, Ellen E., Collins, Christy L., Fields, Sarah K., Comstock, R. D. 01 July 2010
Objective: To describe the epidemiology of fractures among US high school athletes participating in 9 popular sports. Design: Descriptive epidemiologic study. Setting: Sports injury data for the 2005-2009 academic years were collected using an Internet-based injury surveillance system, Reporting Information Online (RIO). Participants: A nationally representative sample of 100 US high schools. Assessment of risk factors: Injuries sustained as a function of sport and sex. Main outcome measures: Fracture injury rates, body site, outcome, surgery, and mechanism. Results: Fractures (n = 568,177 nationally) accounted for 10.1% of all injuries sustained by US high school athletes. The highest rate of fractures was in football (4.61 per 10,000 athlete exposures) and the lowest in volleyball (0.52). Boys were more likely than girls to sustain a fracture in basketball (rate ratio, 1.35; 95% confidence interval, 1.06-1.72) and soccer (rate ratio, 1.34; 95% confidence interval, 1.05-1.71). Overall, the most frequently fractured body sites were the hand/finger (28.3%), wrist (10.4%), and lower leg (9.3%). Fractures were the most common injury to the nose (76.9%), forearm (56.4%), hand/finger (41.7%), and wrist (41.6%). Most fractures resulted in >3 weeks of time lost (34.3%) or a medical disqualification from participation (24.2%) and were more likely than all other injuries combined to result in >3 weeks of time lost and medical disqualification. Fractures frequently required expensive medical diagnostic imaging such as X-ray, computed tomographic scan, and magnetic resonance imaging. Additionally, 16.1% of fractures required surgical treatment, accounting for 26.9% of all injuries requiring surgery. Illegal activity was noted in 9.3% of all fractures, with the highest proportion of fractures related to illegal activity in girls' soccer (27.9%). Conclusions: Fractures are a major concern for US high school athletes. They can severely affect an athlete's ability to continue sports participation and can impose substantial medical costs on injured athletes' families. Targeted, evidence-based, effective fracture prevention programs are needed.
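The rates and comparisons above follow standard athlete-exposure epidemiology: injuries per 10,000 athlete exposures, and boy-versus-girl comparisons expressed as rate ratios with 95% confidence intervals computed on the log scale. A hedged Python sketch of that arithmetic (the underlying counts are not reported in the abstract, so the numbers below are hypothetical):

```python
import math

def rate_per_10k(injuries, athlete_exposures):
    """Injury rate per 10,000 athlete exposures."""
    return injuries / athlete_exposures * 10_000

def rate_ratio_ci(inj_a, exp_a, inj_b, exp_b, z=1.96):
    """Rate ratio (group A vs. group B) with a 95% CI computed on the log scale."""
    rr = (inj_a / exp_a) / (inj_b / exp_b)
    se_log_rr = math.sqrt(1 / inj_a + 1 / inj_b)   # standard error of ln(RR)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts for illustration only -- the abstract reports the resulting
# statistics (e.g., basketball RR 1.35, 95% CI 1.06-1.72), not the raw counts.
boys_injuries, boys_exposures = 120, 450_000
girls_injuries, girls_exposures = 90, 455_000
print(rate_per_10k(boys_injuries, boys_exposures))
print(rate_ratio_ci(boys_injuries, boys_exposures, girls_injuries, girls_exposures))
```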
213

A Social Contract Or Crossing Boundaries: Exploring Engagement in Lateral Surveillance on Social Media

Williams, Katorah, 0009-0000-8154-068X January 2023
Americans currently live in a society marked by a vast surveillance dragnet that has continually evolved over time. One such evolution is the conceptualization of lateral surveillance. First explored by Andrejevic (2005), lateral surveillance describes peer-to-peer surveillance. Though this is not a new phenomenon, research on lateral surveillance has been limited. Research on lateral surveillance on social media is even more underdeveloped, with the current literature focusing heavily on lateral surveillance on Facebook (Jiow & Morales, 2015; Lukacs & Quan-Haase, 2015; Ivana, 2013; Trottier, 2012). This is problematic given both the heavy presence of social media in our lives and the amount of lateral surveillance content posted across the most popular social media platforms. The current study uses an evolved grounded theory approach to address two main study objectives: (1) examine how college students engage in or avoid lateral (peer-to-peer) surveillance, both actively (as the surveillant) and passively (as an observer of lateral surveillance); and (2) understand what motivates college students to engage in lateral surveillance. This exploratory, qualitative study uses focus groups with college students who report using at least one of the following six social media platforms in the past five years: Facebook, Twitter, Instagram, YouTube, SnapChat, or TikTok. The focus was placed on college students, as they fall into the age demographic of the most active social media users. Drawing on the attitudes and perceptions of 69 Temple University students across 12 focus groups, a decision-making framework that explains engagement in lateral surveillance was developed. Findings from this study highlight the complexities and nuances of both general and lateral surveillance. College students believe surveillance is far-reaching and omnipresent. However, they make a variety of distinctions about who engages in surveillance. They agree that the government and corporations are the main surveillors (those who engage in surveillance) in this country and are concerned about the amount of data that each of these entities collects about them. In contrast, they do not always view friends and family members as surveillors, despite their use of the same mechanisms, such as tracking, watching, and recording, to engage in surveillance. These findings provide the contextual background for engagement in lateral surveillance, as captured by the decision-making framework. The decision-making framework provides a step-by-step walkthrough of the process that college students go through when deciding to engage in lateral surveillance. Findings related to the framework showed that a variety of situations and themes influence lateral surveillance decision making. However, the actual decision-making process appears to be very quick, and the initial motivating factors that influenced the decision to engage in lateral surveillance remain consistent when deciding where and with whom to share the content. / Criminal Justice
214

Re-imagining Everyday Carcerality in an Age of Digital Surveillance

Gidaris, Constantine January 2020
This dissertation project takes an interdisciplinary approach towards theorizing how we understand new modes of incarceration and confinement in the digital age. It makes key interventions in the fields of surveillance studies, carceral studies, critical data and technology studies, and ethnic and racial studies. I argue that less conventional modes of incarceration and confinement, which are enabled through technologies, the Internet, and processes of datafication, conceal the everyday carceral functions that target and exploit racialized people. Chapter 1 examines mobile carceral technologies that are part of Canada’s immigration and detention system. I investigate how notions of increased freedom that are associated with carceral technologies like electronic monitoring and voice reporting do not necessarily coincide with increased autonomy. In Chapter 2, I consider the relationship between mobile phone cameras and the rise of police body-worn cameras. More specifically, I examine how policing and surveillance technologies disproportionately take aim at Black people and communities, making the mere occupation of public and digital space extremely precarious. Lastly, in Chapter 3, I challenge the notion that biometric systems and technologies are race-neutral guarantors of identity, specifically within the polemical space of the modern airport. I argue that the airport’s security and surveillance infrastructure operates according to racialized knowledges, which unofficially validate the profiling of Muslim travelers by both human and non-human operators. / Dissertation / Doctor of Philosophy (PhD) / This dissertation encourages the reader to rethink notions of incarceration from both theoretical and practical perspectives; however, it is not a project about incarceration in the traditional sense. I argue that any notion of incarceration needs to be re-conceptualized in an age that is driven by big data and emergent technologies. While I draw on state and institutional forms of confinement in Canada, all of which have long and established histories of racism and oppression, I contend that notions of incarceration or confinement have bled into everyday life, particularly for racialized and marginalized people and communities. By surveying different surveillance technologies deployed across Canada’s immigration and detention system, the institution of policing, and the biometric airport, I suggest that our understanding of the carceral has drastically changed. As issues of race, discrimination and oppression continue to underpin the structures of this newer carceral system and its modes of surveillance and confinement, it is a system that is less visible and physically confining but equally restrictive.
215

Taming Crowded Visual Scenes

Ali, Saad 01 January 2008
Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. For the global-level analysis, a framework based on Lagrangian Particle Dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy-Green deformation tensor that quantifies the amount by which neighboring particles diverge over the length of the integration. The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS. The attracting and repelling LCS are the time-dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows. The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. Subsequently, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, the Static Floor Field (SFF), the Dynamic Floor Field (DFF), and the Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location.
The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exerted by the barriers within the scene, such as walls and no-entry areas. By combining the influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people. A comparative analysis with respect to an appearance-based mean-shift tracker is also conducted by generating the ground truth. The result of this analysis demonstrates the benefit of using floor fields in crowded scenes. Occlusion is very frequent in crowded scenes due to the high number of interacting objects. To overcome this challenge, we propose an algorithm that augments a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits contextual knowledge, which is divided into two categories: motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. It is important to note that the LCE is used for measuring divergence between a pair of particles, while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed. Finally, the re-acquisition decision is made using both the MC and the AC.
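A compact way to see the flow-map/FTLE construction summarized above: the sketch below (a minimal illustration under assumed inputs, not the thesis implementation) advects a particle grid through a user-supplied optical-flow field with forward-Euler integration, forms the Cauchy-Green deformation tensor from the spatial gradients of the resulting flow map, and converts its largest eigenvalue into an FTLE value. The toy shear flow, grid size, and integration settings are assumptions made only for the example.

```python
import numpy as np

def advect_grid(flow, xs, ys, steps, dt=1.0):
    """Advect a grid of particles through a time-dependent flow field.
    `flow(t, X, Y)` returns (u, v) arrays; forward-Euler is a simplification
    of the numerical integration described in the abstract."""
    X, Y = np.meshgrid(xs, ys)
    for t in range(steps):
        u, v = flow(t, X, Y)
        X = X + dt * u
        Y = Y + dt * v
    return X, Y  # flow map: final positions of the initially gridded particles

def ftle_field(X, Y, xs, ys, T):
    """Forward FTLE computed from the spatial gradients of the flow map."""
    dXdx = np.gradient(X, xs, axis=1); dXdy = np.gradient(X, ys, axis=0)
    dYdx = np.gradient(Y, xs, axis=1); dYdy = np.gradient(Y, ys, axis=0)
    ftle = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            F = np.array([[dXdx[i, j], dXdy[i, j]],
                          [dYdx[i, j], dYdy[i, j]]])
            C = F.T @ F                      # Cauchy-Green deformation tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle

# Toy flow for illustration: a steady horizontal shear.
shear = lambda t, X, Y: (0.5 * Y, np.zeros_like(X))
xs, ys = np.linspace(0, 10, 64), np.linspace(0, 10, 64)
X, Y = advect_grid(shear, xs, ys, steps=20, dt=0.1)
field = ftle_field(X, Y, xs, ys, T=2.0)
```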
216

A Self-organizing Hybrid Sensor System With Distributed Data Fusion For Intruder Tracking And Surveillance

Palaniappan, Ravishankar 01 January 2010
A wireless sensor network is a network of distributed nodes, each equipped with its own sensors, computational resources, and transceivers. These sensors are designed to sense specific phenomena over a large geographic area and communicate this information to the user. Most sensor networks are designed to be stand-alone systems that can operate without user intervention for long periods of time. While the use of wireless sensor networks has been demonstrated in various military and commercial applications, their full potential has not been realized, primarily due to the lack of efficient methods to self-organize and cover the entire area of interest. Techniques currently available focus solely on homogeneous wireless sensor networks, either static or mobile, and suffer from device-specific inadequacies such as limited coverage, power, and fault tolerance. Failing nodes result in coverage loss and breakage in communication connectivity, and hence there is a pressing need for a fault-tolerant system that allows failed nodes to be replaced. In this dissertation, a unique hybrid sensor network that includes a host of mobile sensor platforms is demonstrated. It is shown that the coverage area of the static sensor network can be improved by self-organizing the mobile sensor platforms to interact with the static sensor nodes and thereby increase the coverage area. The performance of the hybrid sensor network is analyzed for a set of N mobile sensors to determine and optimize parameters such as the position of the mobile nodes for maximum coverage of the sensing area without loss of signal between the mobile sensors, static nodes, and the central control station. A novel approach to tracking dynamic targets is also presented. Unlike tracking methods that rely on computationally complex techniques, the strategy adopted in this work is based on a computationally simple but effective technique of received signal strength indicator (RSSI) measurements. The algorithms developed in this dissertation are based on a number of reasonable assumptions that are easily verified in a densely distributed sensor network and require simple computations that efficiently track the target in the sensor field. False alarm rate, probability of detection, and latency are computed and compared with other published techniques. The performance analysis of the tracking system is done on an experimental testbed and also through simulation, and the improvement in accuracy over other methods is demonstrated.
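The abstract does not specify how the RSSI measurements are turned into target positions. As a hedged illustration of the general idea, the sketch below converts RSSI readings to rough distance estimates with a log-distance path-loss model and localizes the target with a distance-weighted centroid of the detecting nodes; the reference power, path-loss exponent, node positions, and readings are all hypothetical.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI = RSSI@1m - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def weighted_centroid(node_xy, rssi_dbm):
    """Estimate the target position as a centroid of the detecting nodes,
    weighted by the inverse of the estimated distance (closer nodes count more)."""
    d = np.array([rssi_to_distance(r) for r in rssi_dbm])
    w = 1.0 / np.maximum(d, 1e-6)
    return (node_xy * w[:, None]).sum(axis=0) / w.sum()

# Hypothetical readings from three static nodes detecting an intruder.
nodes = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
readings = [-55.0, -62.0, -48.0]
print(weighted_centroid(nodes, readings))
```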
217

A Zone-Based Multiple Regression Model to Visualize GPS Locations on a Surveillance Camera Image

Moore, Daniel James 17 June 2015
Surveillance cameras are integral in assisting law enforcement by collecting video information that may help officers detect people for whom they are looking. While surveillance cameras record the area they cover, unlike humans, they cannot "understand" what is happening. My research uses multiple curvilinear regression models to accurately place differentially corrected GPS points with submeter accuracy onto a camera image. Optimal results were achieved after splitting the image into four zones and calibrating each area separately. This resulted in adjusted R2 values as high as 99.8 percent, indicating that high-quality GPS points can form a good manual camera calibration. To ascertain whether a lower-quality GPS point associated with a social media application would allow the person sending a message to be located, I conducted a follow-up using an iPhone 5s. Applying the zone-based calibration equations to GPS point locations from the iPhone 5s shows that these locations are less accurate than differentially corrected GPS locations, but there is still a reasonable chance of locating the correct person in an image based on that person's reported location. That chance, however, depends on the population density inside the image. Pedestrian density tests show that about 70-80 percent of the phone locations in a low-density environment could be used to locate the correct person who sent a message, while 30-60 percent of the phone locations could be used in that manner in a high-density environment. / Master of Science
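As an illustration of what a per-zone curvilinear calibration can look like (the thesis's actual zone boundaries, polynomial form, and calibration points are not given here, so the values below are hypothetical), the sketch fits a quadratic surface mapping GPS easting/northing to image pixel coordinates for a single zone:

```python
import numpy as np

def fit_zone_model(gps_xy, pixel_uv):
    """Fit a quadratic (curvilinear) regression for the pixel u and v
    coordinates as functions of GPS easting/northing within one zone."""
    x, y = gps_xy[:, 0], gps_xy[:, 1]
    # Design matrix: 1, x, y, x^2, x*y, y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef_u, *_ = np.linalg.lstsq(A, pixel_uv[:, 0], rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, pixel_uv[:, 1], rcond=None)
    return coef_u, coef_v

def predict_pixel(coef_u, coef_v, easting, northing):
    """Map a GPS location to a (u, v) pixel position in the camera image."""
    a = np.array([1.0, easting, northing, easting**2, easting * northing, northing**2])
    return float(a @ coef_u), float(a @ coef_v)

# Hypothetical calibration points for one of the four image zones.
gps = np.array([[10.0, 5.0], [12.0, 5.5], [11.0, 7.0], [13.0, 6.0],
                [10.5, 6.5], [12.5, 7.5], [11.5, 5.2], [13.5, 7.0]])
pix = np.array([[220, 410], [260, 400], [240, 360], [290, 390],
                [230, 380], [275, 350], [250, 405], [300, 365]], dtype=float)
coef_u, coef_v = fit_zone_model(gps, pix)
print(predict_pixel(coef_u, coef_v, 11.2, 6.1))
```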
218

Risk estimation and prediction of cyber attacks

Yermalovich, Pavel 17 February 2021
The use of information is inextricably linked with its security. The presence of vulnerabilities enables a third party to breach the security of information. Threat modelling helps to identify the infrastructure areas that are most likely to be exposed to attacks. This research project, entitled "Risk estimation and prediction of cyber attacks", aims to combine different techniques for predicting cyber attacks to better protect a computer system. It is necessary to find the most informative parameters, namely the attack prediction markers, to create functions that express the probability of attack as a function of time. The prediction of an attack is essential for the prevention of potential risk. Therefore, risk forecasting contributes greatly to the optimization of information security budget planning. This work focuses on the ontology and stages of a cyber attack, as well as the main representatives of the attacking side and their motivation. Carrying out this work will help determine, in real time, the risk level of an information system in order to reconfigure and better protect it. To establish the risk level at a selected time interval in the future, one has to perform a mathematical decomposition. To do this, we need to select the information system parameters required for the predictions and their statistical data for risk assessment. Nevertheless, the actual risk level may exceed the established indicator, and risk analysis sometimes takes so much time that it yields already outdated risk values. This work therefore also examines the issue of obtaining risk values in real time by introducing an automated risk analysis method, which helps to reveal the risk value at any point in time. This method forms the basis for predicting the probability of a targeted cyber attack. The established risk level will help to optimize the information security budget and redistribute it to strengthen the most vulnerable areas.
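The abstract calls for functions that give the probability of attack as a function of time, built from prediction markers, but it does not specify the model. As a purely illustrative assumption, the sketch below scales a baseline attack rate by marker-derived weights and treats attack arrival as a Poisson process, so that P(at least one attack by time t) = 1 - exp(-lambda * t); the markers, weights, and baseline rate are invented for the example.

```python
import math

def attack_rate(marker_scores, marker_weights, base_rate=0.01):
    """Combine attack-prediction markers into an attack rate (attacks/day).
    The markers and weights are illustrative assumptions, not values from the thesis."""
    return base_rate * (1.0 + sum(w * s for w, s in zip(marker_weights, marker_scores)))

def prob_attack_by(t_days, rate):
    """P(at least one attack within t_days) under a Poisson-process assumption."""
    return 1.0 - math.exp(-rate * t_days)

# Example: three normalized markers (e.g., exposed services, observed scanning,
# unpatched critical vulnerabilities) with hypothetical weights.
scores, weights = [0.6, 0.3, 0.8], [1.5, 0.7, 2.0]
lam = attack_rate(scores, weights)
for horizon in (7, 30, 90):
    print(horizon, round(prob_attack_by(horizon, lam), 3))
```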
219

Development of a Health Management Information System for the Mountain Gorilla (Gorilla beringei)

Minnis, Richard Brian 09 December 2006
The mountain gorillas of Central Africa are one of the most highly endangered species in the world, with only 740 individuals surviving. One of the greatest threats to this species is disease. Wildlife health is continually garnering more attention in the public arena due to recent outbreaks of diseases such as West Nile virus and highly pathogenic avian influenza. However, no system currently exists to facilitate the management and analysis of wildlife health data. The research conducted herein was the development and testing of a health information monitoring system for the mountain gorillas, entitled Internet-supported Management Program to Assist Conservation Technologies (IMPACT). The system functions around a species database of known or unknown individuals and provides individual-based and population-based epidemiological analysis. The system also uses spatial locations of individuals or samples to link multiple species together based on spatial proximity for inter-species comparisons. A syndromic surveillance system, or clinical decision tree, was developed to collect standardized data to better understand the ecology of diseases within the gorilla population. The system is hierarchical in nature, using trackers and guides to conduct daily observations, while specially trained veterinarians confirm and assess any abnormalities detected. Assessment of the decision tree indicated that trackers and guides did not observe gorilla groups or individuals within groups similarly. The data suggest that, to be consistent, trackers and guides need to conduct observations even on the days that veterinarians collect data. Validity and reliability of the observation instrument remain to be tested. Assessment of pathogen loads and distributions within species surrounding the gorillas indicates that humans carry the greatest pathogen load with 13 species, followed by cattle and chimpanzees (11), baboons (10), gorillas (9), and rodents (3). Spatial aggregation occurred in Cryptosporidium, Giardia, and Trichuris; however, there is reason to question the test results for the first two of these species. These data suggest that researchers need to examine the impact of local human and domestic animal populations on gorillas and other wildlife.
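One of the system's features mentioned above is linking samples from different species by spatial proximity for inter-species comparison. A minimal sketch of such a proximity join (the distance threshold, coordinates, and sample identifiers are hypothetical, not values from IMPACT) could look like this:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def link_by_proximity(samples_a, samples_b, max_dist_m=500.0):
    """Pair samples from two species whose collection points fall within
    max_dist_m of each other (the threshold is an illustrative assumption)."""
    links = []
    for ida, lata, lona in samples_a:
        for idb, latb, lonb in samples_b:
            if haversine_m(lata, lona, latb, lonb) <= max_dist_m:
                links.append((ida, idb))
    return links

# Hypothetical sample locations (id, lat, lon) for gorilla and cattle samples.
gorilla_samples = [("g1", -1.4721, 29.5630), ("g2", -1.4650, 29.5702)]
cattle_samples = [("c1", -1.4718, 29.5641), ("c2", -1.4900, 29.6100)]
print(link_by_proximity(gorilla_samples, cattle_samples))
```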
220

A Real-Time Bi-Directional Global Positioning System Data Link Over Internet Protocol

Bhattacharya, Sumit 28 September 2005
No description available.
