491 |
Object Detection and Classification Based on Point Separation Distance Features of Point Cloud Data
Ji, Jiajie. 07 August 2023 (has links)
No description available.
|
492 |
Improving House Price Prediction Models: Exploring the Impact of Macroeconomic Features
Holmqvist, Martin; Hansson, Max. January 2023 (has links)
This thesis investigates whether house price prediction models perform better when macroeconomic features are added to a data set containing only house-specific features. Previous research has shown that tree-based models perform well when predicting house prices, especially the random forest and XGBoost algorithms. It is common to rely entirely on house-specific features when training these models. However, studies show that macroeconomic variables such as the interest rate, inflation, and GDP affect house prices. Therefore, it makes sense to include them in these models and study whether they outperform the more traditional models with only house-specific features. The thesis also investigates which algorithm, out of random forest and XGBoost, is better at predicting house prices. The results show that the mean absolute error is lower for the XGBoost and random forest models trained on data with macroeconomic features. Furthermore, XGBoost outperformed random forest regardless of the feature set. In conclusion, the suggestion is to include macroeconomic features and use the XGBoost algorithm when predicting house prices.
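As a concrete illustration of the comparison this abstract describes, the sketch below trains random forest and XGBoost regressors on a house-only feature set and on one augmented with macroeconomic features, then compares mean absolute error. The synthetic data, feature names, and model settings are illustrative assumptions, not the thesis's actual data set or configuration.

```python
# Hypothetical sketch: compare MAE with and without macroeconomic features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "living_area": rng.uniform(30, 200, n),     # house-specific features
    "rooms": rng.integers(1, 8, n),
    "build_year": rng.integers(1900, 2022, n),
    "interest_rate": rng.uniform(0.0, 5.0, n),  # macroeconomic features
    "inflation": rng.uniform(0.0, 10.0, n),
    "gdp_growth": rng.uniform(-3.0, 5.0, n),
})
# Synthetic price depends on both house-specific and macro variables.
price = (3000 * df["living_area"] + 50000 * df["rooms"]
         - 40000 * df["interest_rate"] + rng.normal(0, 50000, n))

house_only = ["living_area", "rooms", "build_year"]
with_macro = house_only + ["interest_rate", "inflation", "gdp_growth"]

for cols in (house_only, with_macro):
    X_tr, X_te, y_tr, y_te = train_test_split(df[cols], price, random_state=0)
    for model in (RandomForestRegressor(random_state=0),
                  XGBRegressor(random_state=0)):
        mae = mean_absolute_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
        print(f"{type(model).__name__:>22}  {len(cols)} features  MAE={mae:,.0f}")
```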
|
493 |
Features, Functionality, and Acceptability of Internet-Based Cognitive Behavioral Therapy for Tinnitus in the United States
Manchaiah, Vinaya; Vlaescu, George; Varadaraj, Srinivas; Aronson, Elizabeth Parks; Fagelson, Marc A.; Munoz, Maria F.; Andersson, Gerhard; Beukes, Eldre W. 28 July 2020 (has links)
Objective: Although tinnitus is one of the most commonly reported symptoms in the general population, patients with bothersome tinnitus are challenged by issues related to accessibility of care and intervention options that lack strong evidence to support their use. Therefore, creative ways of delivering evidence-based interventions are necessary. Internet-based cognitive behavioral therapy (ICBT) demonstrates potential as a means of delivering this support but is not currently available in the United States. This article discusses the adaptation of an ICBT intervention, originally used in Sweden, Germany, and the United Kingdom, for delivery in the United States. The aim of this study was to (a) modify the web platform's features to suit a U.S. population, (b) adapt its functionality to comply with regulatory aspects, and (c) evaluate the credibility and acceptability of the ICBT intervention from the perspective of health care professionals and patients with bothersome tinnitus.
Materials/Method: Initially, the iTerapi ePlatform developed in Sweden was adopted for use in the United States. Functional adaptations followed to ensure that the platform's functional and security features complied with both institutional and governmental regulations and that it was suitable for a U.S. population. Following these adaptations, credibility and acceptance of the materials were evaluated by both health care professionals (n = 11) and patients with bothersome tinnitus (n = 8).
Results: Software safety and compliance regulatory assessments were met. Health care professionals and patients reported favorable acceptance and satisfaction ratings regarding the content, suitability, presentation, usability, and exercises provided in the ICBT platform. Modifications to the features and functionality of the platform were made according to user feedback.
Conclusions: Ensuring that the ePlatform employed the appropriate features and functionalities for the intended population was essential to developing the Internet-based interventions. The favorable user evaluations indicated that the intervention materials were appropriate for the tinnitus population in the United States.
|
494 |
Communicating Affective Meaning from Software to Wetware Through the Medium of Digital Art
Norton, R. David. 01 August 2014 (has links) (PDF)
Computational creativity is a new and developing field of artificial intelligence concerned with computational systems that either autonomously produce original and functional products, or that augment the ability of humans to do so. As the role of computers in our daily lives continues to expand, the need for such systems becomes increasingly important. We introduce and document the development of a new “creative” system, called DARCI (Digital ARtist Communicating Intention), that is designed to autonomously create novel artistic images that convey linguistic concepts to the viewer. Within the scope of this work, the system becomes capable of creating non-photorealistic renderings of existing image compositions so that they convey the semantics of given adjectives. Ultimately, we show that DARCI is capable of producing surprising artifacts that are competitive, in some ways, with those produced by human artists. As with the development of any “creative” system, we are faced with the challenges of incorporating the philosophies of creativity into the design of the system, assessing the system's creativity, overcoming technical shortcomings of extant modern algorithms, and justifying the system within its creative domain (in this case, visual art). In meeting these challenges with DARCI, we demonstrate three broad contributions of the system: 1) the contribution to the field of computational creativity in the form of an original system, new approaches to achieving autonomy in creative systems, and new practical assessment methods; 2) the contribution to the field of computer vision in the form of new image features for affective image annotation and a new dataset; and 3) the contribution to the domain of visual art in the form of mutually beneficial collaborations and participation in several art galleries and exhibits.
|
495 |
Feature Construction Using Evolution-COnstructed Features for General Object Recognition
Lillywhite, Kirt D. 05 March 2012 (has links) (PDF)
Object recognition is a well-studied but extremely challenging field. Human detection is an especially important part of object recognition, as it has played a role in machine and human interaction, biometrics, unmanned vehicles, and tracking and surveillance. We first present a hardware implementation of the successful Histograms of Oriented Gradients (HOG) method for human detection. The implementation significantly speeds up the method, achieving 38 frames per second on VGA video while testing 11,160 sliding windows per frame. The accuracy remains comparable to the CPU implementation. Analysis of the HOG method and other popular object recognition methods led to a novel approach for object detection using a feature construction method called Evolution-COnstructed (ECO) features. Most other approaches rely on human experts to construct features for object recognition. ECO features are automatically constructed by uniquely employing a standard genetic algorithm to discover series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms, including: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, and no limitation to certain types of image sources. We show in our experiments that ECO features perform better than or comparably with state-of-the-art object recognition algorithms, making this the first feature construction method to compete with features created by human experts at general object recognition. An analysis of ECO features is given, which includes a visualization of ECO features and improvements made to the algorithm.
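As a rough illustration of the idea behind ECO features, the sketch below evolves short chains of image transforms with a standard genetic algorithm, scoring each chain by how well a weak learner separates the classes on the transformed images. The transform pool, fitness measure, and GA parameters here are assumptions for illustration; the thesis's actual operator set, learner, and evolutionary details differ.

```python
# Illustrative sketch of evolving transform chains (not the thesis's exact method).
import random
import numpy as np
from scipy import ndimage
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

# A small, hypothetical pool of image transforms to compose into chains.
TRANSFORMS = {
    "sobel": lambda img: ndimage.sobel(img),
    "gauss": lambda img: ndimage.gaussian_filter(img, sigma=1),
    "median": lambda img: ndimage.median_filter(img, size=3),
    "laplace": lambda img: ndimage.laplace(img),
}

def apply_chain(chain, images):
    """Apply a sequence of transforms to each (float 2D) image; flatten to features."""
    feats = []
    for img in images:
        out = img
        for name in chain:
            out = TRANSFORMS[name](out)
        feats.append(out.ravel())
    return np.array(feats)

def fitness(chain, images, labels):
    # Discriminativeness of the constructed feature, judged by a weak learner.
    return cross_val_score(Perceptron(), apply_chain(chain, images), labels, cv=3).mean()

def evolve(images, labels, pop_size=20, generations=10, chain_len=3):
    names = list(TRANSFORMS)
    pop = [[random.choice(names) for _ in range(chain_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, images, labels), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, chain_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                 # point mutation
                child[random.randrange(chain_len)] = random.choice(names)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, images, labels))
```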
|
496 |
A Content Analysis of Inquiry in Third Grade Science Textbooks
Lewis, Rebecca Adams. 17 April 2012 (has links) (PDF)
Since the publication of the National Science Education Standards in 1996, efforts have been made to include inquiry in school science programs. An addendum on inquiry to these standards was published in 2000, presenting five essential features of classroom inquiry as indicators of the active use of inquiry in a science lesson. The purpose of this content analysis was to examine and identify the presence of these five essential features of classroom inquiry within publisher-identified inquiry activities found in the 2000 and 2010 teacher's editions of the third grade science textbooks published by Scott Foresman. The textbooks were read and coded using each of the five essential features of classroom inquiry as a priori categories. Data from both textbook editions indicated that although these activities were identified as inquiries, only a few contained all five essential features, while about half contained none. Approximately half of the publisher-identified inquiries were partial inquiries, containing fewer than five of the essential features. Teachers who use these resources should be aware of the presence or absence of the essential features in order to supplement the science curriculum. Publishers need to be more explicit in including these features, and further research should be conducted on more textbooks to better understand the quality and quantity of inquiry activities found within these resources.
|
497 |
On the effect of architecture on deep learning based features for homography estimation
Ähdel, Victor. January 2018 (has links)
Keypoint detection and description is the first step of homography and essential matrix estimation, which in turn is used in Visual Odometry and Visual SLAM. This work explores the effect (in terms of speed and accuracy) of using different deep learning architectures for such keypoints. The fully convolutional networks, with heads for both the detector and descriptor, are trained through an existing self-supervised method, where correspondences are obtained through known randomly sampled homographies. A new strategy for choosing negative correspondences for the descriptor loss is presented, which enables more flexibility in the architecture design. The new strategy turns out to be essential, as it enables networks that outperform the learnt baseline at no cost in inference time. Varying the model size leads to a trade-off in speed and accuracy, and while all models outperform ORB in homography estimation, only the larger models approach SIFT's performance, performing about 1-7% worse. Training for longer and with additional types of data might give the push needed to outperform SIFT. While the smallest models are 3× faster and use 50× fewer parameters than the learnt baseline, they still require 3× as much time as SIFT while performing about 10-30% worse. However, there is still room for improvement through optimization methods that go beyond architecture modification, e.g. quantization, which might make the method faster than SIFT.
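The abstract does not spell out the descriptor loss, but a common formulation for this kind of self-supervised training is a margin-based loss over descriptor pairs, with each anchor's negative drawn from the other, non-corresponding keypoints in the batch. The PyTorch sketch below shows that pattern; the margin value and the hardest-in-batch negative selection are assumptions, not necessarily the strategy proposed in the thesis.

```python
# Sketch of a descriptor loss with explicit negative correspondences (assumed form).
import torch
import torch.nn.functional as F

def descriptor_loss(desc_a: torch.Tensor, desc_b: torch.Tensor, margin: float = 1.0):
    """desc_a[i] and desc_b[i] describe corresponding keypoints in two images
    related by a known, randomly sampled homography (shape: [N, D])."""
    desc_a = F.normalize(desc_a, dim=1)
    desc_b = F.normalize(desc_b, dim=1)
    dist = torch.cdist(desc_a, desc_b)        # all pairwise descriptor distances
    pos = dist.diagonal()                     # distances of matching pairs
    # Mask the positives, then take the hardest in-batch negative per anchor.
    masked = dist + 1e9 * torch.eye(len(dist), device=dist.device)
    neg = masked.min(dim=1).values
    return F.relu(pos - neg + margin).mean()  # hinge / triplet-style loss
```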
|
498 |
ESTIMATION AND FEATURE EXTRACTION TO SUPPORT 3D MODELLING FOR VIRTUAL BRIDGE INSPECTION
Maan Omar S. Okayli (12850151). 01 September 2022 (links)
For the agencies that maintain the transportation infrastructure, staying up to date with inspections is a continuing challenge. One approach to addressing this is to allow an inspector to perform most of the inspection process by viewing a digital 3D model that is accurate and substantially complete. Having a digital 3D model could limit the on-site inspection process to those cases where the virtual inspection suggests more input is necessary. Such models would be defined by point clouds or by a surface composed of textured polygons. One advantage of building the 3D model from textured polygons instead of point clouds is that the inspector can zoom in and see detail as needed. The data required to construct such a model are photographs, which can be captured by a combination of handheld cameras and unmanned aerial vehicles (UAVs). Having such a model will help these agencies improve the efficiency of their inspection process in several ways, such as lowering overall inspection costs, requiring fewer lane closures during inspection procedures, and providing digital archives of their infrastructure. Of course, the time and effort to collect the images and build the model are substantial, but once a model is constructed, subsequent images can be applied as texture without recreating the model.

This research covers the task of building an accurate 3D wireframe model of a bridge that can be used to display texture realistically via rigorous image projection onto the wireframe surface. The wireframe geometry is substantially derived from extracted linear features. The model's estimation process integrates the photogrammetric bundle block adjustment technique with suitable methods to estimate the linear feature parameters. Prior to the developments above, an investigation was conducted to determine the possibility of automating the selection of conjugate points using Structure-from-Motion (SFM) algorithms, as implemented in programs such as Agisoft or Pix4D.

In this kind of application, a bridge mostly has two types of linear features: Straight Linear Features (SLF), found on the component elements of the bridge structure, and Parabolic Linear Features (PLF), for linear elements spanning the entire bridge length. After the parameters of the linear features are estimated, the quadrilateral polygons used in the wireframe/visualization process can be extracted from these parameters. Furthermore, these quadrilateral polygons form the foundation for image texture projection. Also noteworthy, the process of generating these quadrilateral polygons is substantially automated.

Whenever doing least squares estimation, one needs a way to express the uncertainty of the computed parameters (unknowns). In the early stages of a project, one may not know the uncertainty of the observations. Often, pairs of parameters (typically X, Y position) need their uncertainties to be displayed together, graphically, in the form of a confidence circle with a given probability. Under these conditions, the literature offers no guidance on how such a circle should be constructed rigorously; this research develops that technique. In geomatics, there are two cases when making confidence statements. The first is when the observation uncertainties are known: in the 1D case, the corresponding probability density function is the univariate normal distribution; in the 2D case, the chi-squared distribution is used for an elliptical region, and the multivariate normal distribution is used when making confidence circles. The second case is when the uncertainties of the observations are unknown: the univariate t-distribution is used to make a 1D confidence statement, the F-distribution is used for an elliptical region, and for a confidence circle the multivariate t-distribution must be used. This research presents an algorithm to implement this process and shows, numerically, that it is valid.
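As a small numeric illustration of the known-uncertainty 2D case mentioned above: when the two coordinates have equal, known standard errors, the confidence circle's radius follows directly from a chi-squared quantile with two degrees of freedom. This sketch covers only that special case; the unequal-variance circle (multivariate normal) and the unknown-uncertainty circle (multivariate t) require the numerical integration the dissertation develops and are not shown.

```python
# Special case only: equal, known standard errors sigma in X and Y
# (covariance = sigma^2 * I), so the squared radius is sigma^2 times
# a chi-squared(2) quantile. The general cases need numerical integration.
from scipy import stats

def confidence_circle_radius(sigma: float, p: float = 0.95) -> float:
    """Radius of the circle containing the true (X, Y) with probability p."""
    return sigma * stats.chi2.ppf(p, df=2) ** 0.5

print(confidence_circle_radius(sigma=0.02, p=0.95))  # ~0.049 m for sigma = 2 cm
```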
|
499 |
Identification Of System Design Features That Affect Sickness In Virtual Environments
Drexler, Julie. 01 January 2006 (has links)
The terms "simulator" and "VR" are typically used to refer to specific types of virtual environments (VEs), which differ in the technology used to display the simulated environment. While simulators and VR devices may offer advantages such as low-cost training, numerous studies on the effects of VE exposure on humans indicate that motion sickness-like symptoms are often produced during or after exposure to the simulated environment. These deleterious side effects have the potential to limit the utilization of VE systems if they jeopardize the health and/or safety of the user and create liability issues for the manufacturer. The most widely used method for assessing the adverse symptoms of VE exposure is the Simulator Sickness Questionnaire (SSQ). The method of scoring the symptoms reported by VE users permits the different sickness symptoms to be clustered into three general types of effects, or subscales, and the distribution or pattern of the three SSQ subscales provides a profile for a given VE device. In the current research, several different statistical analyses were conducted on the SSQ data obtained from 21 different simulator studies and 16 different VR studies in order to identify an underlying symptom structure (i.e., SSQ profile) or severity difference for various types of VE systems. The results of the research showed statistically significant differences in the SSQ profiles and the overall severity of sickness between simulator and VR systems, which provide evidence that simulator sickness and VR sickness represent distinct forms of motion sickness. Analyses of three types of simulators (i.e., fixed- and rotary-wing flight simulators and driving simulators) also found significant differences in the sickness profiles as well as the overall severity of sickness within different types of simulator systems. Analyses of three types of VR systems (i.e., HMD, BOOM, and CAVE) revealed that BOOM and CAVE systems have similar sickness profiles, which differ from the HMD system profile. Moreover, the results showed that the overall severity of sickness was greater in HMD systems than in BOOM and CAVE systems. Recommendations for future research included additional psychophysical studies to evaluate the relationship between various engineering characteristics of VE systems and the specific types of sickness symptoms produced by exposure to them.
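For reference, the SSQ scoring this analysis relies on (Kennedy, Lane, Berbaum, and Lilienthal, 1993) sums 0-3 symptom ratings within three overlapping symptom clusters and scales the sums with fixed weights. The sketch below reproduces that arithmetic; the cluster memberships and weights are taken from the published instrument and should be verified against the original paper.

```python
# SSQ subscale scoring (weights and clusters per Kennedy et al., 1993).
N_ITEMS = ["general_discomfort", "increased_salivation", "sweating", "nausea",
           "difficulty_concentrating", "stomach_awareness", "burping"]
O_ITEMS = ["general_discomfort", "fatigue", "headache", "eyestrain",
           "difficulty_focusing", "difficulty_concentrating", "blurred_vision"]
D_ITEMS = ["difficulty_focusing", "nausea", "fullness_of_head", "blurred_vision",
           "dizziness_eyes_open", "dizziness_eyes_closed", "vertigo"]

def ssq_scores(ratings: dict) -> dict:
    """ratings maps symptom name -> 0-3 severity; returns the three
    subscale scores and the Total Severity (TS) score."""
    n_raw = sum(ratings.get(s, 0) for s in N_ITEMS)
    o_raw = sum(ratings.get(s, 0) for s in O_ITEMS)
    d_raw = sum(ratings.get(s, 0) for s in D_ITEMS)
    return {"Nausea": n_raw * 9.54, "Oculomotor": o_raw * 7.58,
            "Disorientation": d_raw * 13.92,
            "TS": (n_raw + o_raw + d_raw) * 3.74}

print(ssq_scores({"nausea": 2, "headache": 1, "vertigo": 1}))
```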
|
500 |
Game On: The Impact Of Game Features In Computer-based Training
DeRouin-Jessen, Renee. 01 January 2008 (links)
The term "serious games" was popularized in 2002 as a result of an initiative to promote the use of games for education, training, and other purposes. Today, many companies are using games for training and development, often with hefty price tags. For example, the development budget for the U.S. Army recruiting game, "America's Army," was estimated at $7 million. Given their increasing use and high costs, it is important to understand whether game-based learning systems perform as billed. Research suggests that games do not always increase learning outcomes over conventional instruction. However, certain game features (e.g., rules/goals, fantasy, challenge) might be more beneficial for increasing learner motivation and learning outcomes than others. This study manipulated two specific game features: multimedia-based fantasy (vs. text-based fantasy) and reward (vs. no reward) in a computer-based training program on employment law. Participants (N = 169) were randomly assigned to one of the four experimental conditions or to a traditional computer-based training condition. Contrary to hypotheses, the traditional PowerPoint-like version was found to lead to better declarative knowledge outcomes on the learning test than the most game-like version, although no differences were found between conditions on any of the other dependent variables. Participants in all conditions were equally motivated to learn, were equally satisfied with the learning experience, completed an equal number of practice exercises, performed equally well on the declarative knowledge and skill-based practice, and performed equally well on the skill-based learning test. This suggests that adding the "bells and whistles" of game features to a training program won't necessarily improve learner motivation and training outcomes.
|