As the number of vehicles in cities grows, traffic violations are becoming increasingly serious. Identifying vehicle violations still relies on manual interpretation, which is not only inefficient but also leads to data backlogs and inconsistent law enforcement standards. In response to this situation, this paper proposes an artificial-intelligence-based algorithm for automatic interpretation of red-light-running violations. The paper first discusses the modeling process for detecting a vehicle running a red light, including image data preprocessing, traffic light and vehicle recognition, red-light-running detection, and license plate recognition; it then designs an automatic interpretation algorithm for red-light violations. Finally, the algorithm is tested on actual traffic photographs. The experimental results show that the algorithm achieves a high recognition rate and can effectively and automatically identify red-light-running violations, thereby addressing the low efficiency of manual judgment.
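The core decision the abstract describes, that a violation occurs when a vehicle crosses the stop line while the light is red, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `Vehicle` record, the `light_state` string, and the image-coordinate convention (y increasing downward, stop line below the vehicle's travel direction) are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    plate: str        # recognized license plate text
    front_y: float    # image y-coordinate of the vehicle's front edge

def is_violation(vehicle: Vehicle, light_state: str, stop_line_y: float) -> bool:
    """Flag a violation when the light is red and the vehicle's front edge
    has moved past the stop line (smaller y = farther into the intersection)."""
    return light_state == "red" and vehicle.front_y < stop_line_y

# Example: red light, vehicle front edge already beyond the stop line.
v = Vehicle(plate="ABC-123", front_y=180.0)
print(is_violation(v, "red", stop_line_y=200.0))   # True
print(is_violation(v, "green", stop_line_y=200.0)) # False
```

In a full system, `light_state` and `front_y` would come from the traffic-light and vehicle recognition stages the abstract lists, and the plate from the license plate recognition stage.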
KEYWORDS: Data modeling, Feature selection, Feature extraction, Performance modeling, Statistical modeling, Systems modeling, Visualization, Fourier transforms, Signal detection, Information technology
This research explored the creation of a model to detect emotion in Filipino songs. The emotion model used was based on Paul Ekman's six basic emotions. The songs were classified into the following genres: kundiman, novelty, pop, and rock. The songs were annotated by a group of music experts according to the emotion each song induces in the listener. Musical features were extracted using jAudio, while lyric features were extracted using a Bag-of-Words feature representation. The audio and lyric features of the Filipino songs were then used for classification by three chosen classifiers: Naïve Bayes, Support Vector Machines, and k-Nearest Neighbors. The goal of the research was to determine which classifier works best for Filipino music. Evaluation was done by 10-fold cross validation, and accuracy, precision, recall, and F-measure results were compared. The models were also tested on unknown test data to further assess their accuracy through the prediction results.
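The evaluation protocol described above, comparing Naïve Bayes, SVM, and k-NN under 10-fold cross validation, can be sketched with scikit-learn. This is a hedged illustration only: the actual jAudio and Bag-of-Words feature matrices are not available, so a synthetic feature matrix stands in for them, and the specific classifier hyperparameters are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the combined audio/lyric feature matrix:
# 120 songs, 20 features, 6 Ekman emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
y = rng.integers(0, 6, size=120)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

# 10-fold cross validation, reporting mean accuracy per classifier.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

The same `cross_val_score` call can be repeated with `scoring="precision_macro"`, `"recall_macro"`, and `"f1_macro"` to reproduce the full metric comparison the study performs.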
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and a Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing were used to evaluate both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused some misclassification of signs.
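The template-matching step at the heart of this approach, scoring a captured frame against stored templates with normalized correlation and picking the best match, can be sketched as follows. The study used MATLAB; this NumPy version is an illustrative re-expression, and the `best_match` helper and dictionary-of-templates layout are assumptions, not the authors' code.

```python
import numpy as np

def normalized_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized correlation coefficient between an image patch and a
    template of the same shape. Returns a value in [-1, 1]; 1 is a perfect match."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(patch: np.ndarray, templates: dict) -> str:
    """Return the label of the template with the highest correlation score."""
    return max(templates, key=lambda sign: normalized_correlation(patch, templates[sign]))

# Toy example: two 4x4 "templates" and a noisy observation of the first one.
rng = np.random.default_rng(1)
templates = {"A": rng.normal(size=(4, 4)), "B": rng.normal(size=(4, 4))}
observed = templates["A"] + 0.05 * rng.normal(size=(4, 4))
print(best_match(observed, templates))  # "A"
```

Mean-subtraction and the denominator make the score invariant to brightness offset and contrast scale, which is why normalized correlation is a common choice for matching hand shapes under varying lighting.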