


Volume 28, No. 2 (2018)
- Year: 2018
- Articles: 18
- URL: https://bakhtiniada.ru/1054-6618/issue/view/12261
Mathematical Method in Pattern Recognition
On Finding the Maximum Feasible Subsystem of a System of Linear Inequalities
Abstract
Some methods for finding maximum feasible subsystems of systems of linear inequalities are considered. The problem of finding the most accurate algorithm in a parametric family of linear classification algorithms is one of the most important problems in machine learning. To solve this discrete optimization problem, an exact (combinatorial) algorithm, its approximations (relaxation and greedy combinatorial descent algorithms), and an approximation algorithm are given. The latter replaces the original discrete optimization problem with a nonlinear programming problem by changing from linear inequalities to their sigmoid functions. Initial results of their comparison are presented.
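The sigmoid relaxation described above can be sketched in a few lines: the 0/1 indicator of each inequality a_i·x ≤ b_i is replaced by a sigmoid of its slack, and the resulting smooth surrogate is maximized by gradient ascent. This is an illustrative reconstruction, not the authors' algorithm; the steepness k, step size, and toy system are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_feasibility(A, b, x, k=2.0):
    """Smooth surrogate for the number of satisfied inequalities A x <= b:
    each 0/1 indicator is replaced by a sigmoid of the slack b_i - a_i.x."""
    return sigmoid(k * (b - A @ x)).sum()

def maximize_feasible_subsystem(A, b, steps=2000, lr=0.1, k=2.0):
    """Gradient ascent on the sigmoid surrogate (illustrative sketch only)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        s = sigmoid(k * (b - A @ x))
        grad = -k * (A.T @ (s * (1.0 - s)))  # derivative of the surrogate
        x += lr * grad
    return x

# Toy system in one variable: x <= 2, x >= 1, x <= 3, x >= 2.5.
# At most 3 of the 4 inequalities can hold simultaneously (any x in [1, 2]).
A = np.array([[1.0], [-1.0], [1.0], [-1.0]])
b = np.array([2.0, -1.0, 3.0, -2.5])
x = maximize_feasible_subsystem(A, b)
satisfied = int((A @ x <= b).sum())
```

The smooth surrogate trades the exact combinatorial count for differentiability, which is exactly the substitution the abstract describes.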



A New Dimensionality Reduction Method with Correlation Analysis and Universum Learning
Abstract
Dimensionality reduction (DR) is an important and essential preprocessing step in machine learning; it may use discriminative information, neighbor information, or correlation information, resulting in different DR methods. In this work, we design novel DR methods that employ another form of information, i.e., the maximal contradiction on Universum data, which belong to the same domain as the task at hand but do not belong to any class of the training data. It has been found that classification and clustering algorithms achieve favorable improvements with the help of Universum data; such learning methods are referred to as Universum learning. Two new dimensionality reduction methods are proposed, termed Improved CCA (ICCA) and Improved DCCA (IDCCA), that can simultaneously exploit the training data and the Universum data. Both can be expressed as generalized eigenvalue problems and solved by eigenvalue computation. Experiments on both synthetic and real-world datasets show that the proposed DR methods obtain better performance.
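As a concrete reference point for the eigenvalue formulation the abstract mentions, the sketch below implements classical CCA (not the proposed ICCA/IDCCA, whose objectives additionally involve Universum terms): the leading pair of directions comes from an eigenvalue problem on the covariance matrices. The regularization constant and the toy data are illustrative assumptions.

```python
import numpy as np

def cca_directions(X, Y, reg=1e-6):
    """Classical CCA: leading pair of projection directions via an
    eigenvalue problem on the (regularized) covariance matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # wx solves Sxx^{-1} Sxy Syy^{-1} Syx wx = rho^2 wx
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    wx = vecs[:, i].real
    wy = np.linalg.solve(Syy, Sxy.T) @ wx   # matching direction for Y
    return wx, wy, np.sqrt(max(vals[i].real, 0.0))

# Toy data: Y's first coordinate is a noisy copy of X's first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=500),
                     rng.normal(size=(500, 2))])
wx, wy, rho = cca_directions(X, Y)
```

The canonical correlation rho recovered here is close to the true correlation between the shared coordinate and its noisy copy.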



Representation, Processing, Analysis, and Understanding of Images
Algorithms of Two-Dimensional Projection of Digital Images in Eigensubspace: History of Development, Implementation and Application
Abstract
Algorithms for the projection of digital images into their eigensubspaces in the framework of the linear methods PCA, LDA, PLS, and CCA are considered. The history of the development of these methods over the past 100 years is given against the backdrop of the emergence of new areas of their application and changing requirements. It is shown that this development was initiated by four basic requirements stemming from modern tasks and the practice of digital image processing, first of all of face images (FIs). The first requirement is the use of the PCA, LDA, PLS, and CCA methods under conditions of both small and extremely large samples of FIs in the initial sets. The second requirement is related to the criterion that determines the eigenbasis, which should provide, for example, the minimum error of FI approximation, improved clustering in the eigensubspace, or the maximum correlation (covariance) between data sets in the subspace. The third is related to the possibility of applying the methods under consideration to the processing of two or more sets of images from different sensors, or of several sets of arbitrary numerical matrices. These three requirements led to the emergence, development, and application of methods of two-dimensional projection into eigensubspaces: 2DPCA, 2DLDA, 2DPLS, and 2DCCA. Several basic branches of the algorithmic implementation of these methods are considered (iterative, non-iterative, SVD-based, etc.), their advantages and disadvantages are evaluated, and examples of their use in practice are shown. Finally, the fourth requirement is the possibility of realizing two-dimensional projections of FIs (or other numerical matrices) directly in the layers of convolutional neural networks (CNNs/deep NNs) and/or integrating their functions into an NN as separate blocks. This requirement and examples of its solution are discussed. Estimates of computational complexity for the presented algorithms and examples of solving specific image processing problems are given.
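A minimal sketch of the 2DPCA branch mentioned above, assuming the common row-wise formulation: the image scatter matrix is accumulated over the 2D matrices directly, and each image is projected onto the top-k eigenvectors without flattening. Shapes and data are illustrative.

```python
import numpy as np

def two_dpca(images, k):
    """2DPCA sketch: project image rows onto the top-k eigenvectors of the
    image scatter matrix G = mean((A - mean)^T (A - mean)), operating on
    2D matrices directly instead of flattened vectors."""
    A = np.asarray(images, dtype=float)          # shape (n, h, w)
    mean = A.mean(axis=0)
    G = np.zeros((A.shape[2], A.shape[2]))
    for img in A:
        D = img - mean
        G += D.T @ D
    G /= A.shape[0]
    vals, vecs = np.linalg.eigh(G)               # ascending eigenvalues
    W = vecs[:, -k:][:, ::-1]                    # top-k eigenvectors, (w x k)
    feats = A @ W                                # each image -> (h x k) matrix
    return feats, W, mean

rng = np.random.default_rng(1)
imgs = rng.normal(size=(20, 8, 10))
feats, W, mean = two_dpca(imgs, k=3)
```

Keeping the matrix structure is what makes the scatter matrix only w × w rather than (h·w) × (h·w), which is the computational advantage these 2D methods offer over their vectorized counterparts.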



Radial Meixner Moment Invariants for 2D and 3D Image Recognition
Abstract
In this paper, we propose a new set of 2D and 3D rotation invariants based on orthogonal radial Meixner moments, together with the theoretical mathematics to derive them. First, the paper introduces new 2D radial Meixner moments based on a polar representation of an object by one-dimensional orthogonal discrete Meixner polynomials and a circular function. Second, we present new 3D radial Meixner moments using a spherical representation of a volumetric image by one-dimensional orthogonal discrete Meixner polynomials and a spherical function. 2D and 3D rotational invariants are then derived from the proposed 2D and 3D radial Meixner moments, respectively. To validate the proposed approach, three problems are addressed: image reconstruction, rotational invariance, and pattern recognition. The experimental results show that the Meixner moments perform better than the Krawtchouk moments both with and without noise. At the same time, the reconstructed volumetric image converges quickly to the original image using the 2D and 3D radial Meixner moments, and the test images are clearly recognized from a set of images available in the PSB database.



An Innovative Tree Gradient Boosting Method Based Image Object Detection from a Uniform Background Scene
Abstract
Object detection is a broad problem domain in the field of computer and machine vision. A complex background adds significant challenge and error margin to the problem, and many object detection algorithms cope poorly with occlusion and pixel bending effects. In this paper a highly robust algorithm, ORBTRIAN, for low-resolution images is proposed and implemented using ORB detection with a gradient boosting machine learning algorithm. The work is compared with AdaBoost- and SURF-based techniques; the analysis shows a 3.8% increase in performance over the earlier model. The feature points extracted by the ORB method are further processed to reduce the computation: only the points that are triangularly farthest from their centroid are selected, and only one feature point is kept, making the result around 28% faster than the earlier computation. Tree-based gradient boosting (GB) is implemented in this algorithm. With more feature points, more classes need to be recognized, and the computations require an unreasonable amount of effort and time, so nearby classes are assigned to the same level by our algorithm to reduce the number of tree nodes. Overall, the proposed algorithm shows a significant improvement in computation time.
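The point-reduction step (keeping only the feature points farthest from the centroid of the set) can be illustrated with a short sketch. In practice ORB detection would supply the points (e.g. via an OpenCV detector); the selection rule here is our reading of the abstract, not the authors' exact code.

```python
import numpy as np

def farthest_from_centroid(points, k=1):
    """Keep only the k feature points farthest from the centroid of the set
    (a sketch of the abstract's point-reduction step)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                          # centroid of all points
    d = np.linalg.norm(pts - c, axis=1)           # distance of each point
    return pts[np.argsort(d)[-k:]]                # k most distant points

# Three clustered points and one outlier; the outlier is farthest.
pts = [(0, 0), (1, 0), (0, 1), (10, 10)]
kept = farthest_from_centroid(pts, k=1)
```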



Key Frame Extraction of Surveillance Video based on Moving Object Detection and Image Similarity
Abstract
Traditional methods of extracting key frames from surveillance video suffer from redundant information, poorly representative content, and other issues. A key frame extraction method based on moving object detection and image similarity is proposed in this paper. The method first uses the ViBe algorithm, fused with the inter-frame difference method, to divide the original video into several segments containing moving objects. Then the global similarity of the video frames is obtained using the peak signal-to-noise ratio, the local similarity is obtained through SURF feature points, and the comprehensive similarity of the video images is obtained by weighted fusion of the two. Finally, the key frames are extracted from the critical video sequence using an adaptively selected threshold. The experimental results show that the method can effectively extract video key frames, reduce the redundant information in the video data, and express the main content of the video concisely. Moreover, the complexity of the algorithm is not high, so it is suitable for key frame extraction from surveillance video.
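The global-similarity part of the pipeline is a plain PSNR computation. Below is a sketch with the fusion weight w chosen arbitrarily (the paper's weighting scheme is not reproduced here) and the SURF-based local similarity left as an input:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized frames (in dB)."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def combined_similarity(global_sim, local_sim, w=0.6):
    """Weighted fusion of a global and a local similarity score
    (the weight w is an illustrative assumption)."""
    return w * global_sim + (1.0 - w) * local_sim

rng = np.random.default_rng(2)
f1 = rng.integers(0, 256, size=(64, 64))
f2 = np.clip(f1 + rng.integers(-5, 6, size=(64, 64)), 0, 255)  # near-duplicate
f3 = rng.integers(0, 256, size=(64, 64))                       # unrelated frame
```

Near-duplicate frames score a much higher PSNR than unrelated ones, which is what lets a threshold on the fused similarity separate key frames from redundant ones.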



Applied Problems
Multicamera Human Re-Identification based on Covariance Descriptor
Abstract
Human re-identification is a crucial component of security and surveillance systems, smart environments, and robots. In this paper a novel selective covariance-based method for human re-identification in video streams from multiple cameras is proposed. Our method, which includes human localization and human classification stages, is called selective covariance-based because, before classifying the object using covariance descriptors (here the classes are the different people being re-identified), we extract (select) specific regions that are characteristic of the class of objects we deal with (people). In our case, the region extracted is the human head and shoulders. In the paper, new feature functions for covariance region descriptors are developed and compared to basic feature functions, and a mask filtering out most of the background information from the region of interest is proposed and evaluated. The use of the proposed feature functions and mask significantly improved human classification performance (from 75% accuracy when using basic feature functions to 94.6% with the proposed method), while keeping computational complexity moderate.
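A basic covariance region descriptor, of the kind this method builds on, can be sketched as follows: each pixel in the region yields a feature vector (position, intensity, gradient magnitudes), and the region is summarized by the covariance of those vectors. The feature set below is a common baseline, not the paper's proposed feature functions.

```python
import numpy as np

def covariance_descriptor(region):
    """Covariance region descriptor: per-pixel feature vectors
    (x, y, intensity, |Ix|, |Iy|) summarized by their covariance matrix."""
    region = region.astype(float)
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]                 # pixel coordinates
    iy, ix = np.gradient(region)                # image gradients
    F = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                  np.abs(ix).ravel(), np.abs(iy).ravel()])
    return np.cov(F)        # 5x5 symmetric positive semi-definite matrix

rng = np.random.default_rng(3)
patch = rng.normal(size=(32, 32))
C = covariance_descriptor(patch)
```

The descriptor's size depends only on the number of feature functions (here 5), not on the region size, which is one reason covariance descriptors keep classification cost moderate.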



Improving Twin Support Vector Machine Based on Hybrid Swarm Optimizer for Heartbeat Classification
Abstract
Computer-aided diagnosis systems are used to reduce the high mortality rate among heart patients by detecting cardiac diseases at an early stage. Detecting cardiac heartbeats is a hard task because the human eye cannot distinguish the variations in electrocardiogram (ECG) signals, which are very small. Several machine learning approaches have been applied to improve heartbeat detection; however, these methods suffer from limitations such as high computational cost and slow convergence. To avoid these limitations, this paper proposes an ECG heartbeat classification approach, called Swarm-TWSVM, that combines twin support vector machines (TWSVMs) with a hybrid of particle swarm optimization and the gravitational search algorithm (PSOGSA). Empirical mode decomposition (EMD) is applied for ECG noise removal and feature extraction, and then PSOGSA is used to find the optimal parameters of the TWSVM to improve the classification process. The experiments were performed on the MIT-BIH arrhythmia database, and the results show that Swarm-TWSVM gives better accuracy than TWSVM (99.44 and 85.87%, respectively).
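To illustrate the swarm-search half of the approach, here is a plain PSO sketch (the GSA component and the TWSVM itself are omitted). In the paper the objective would be TWSVM validation error as a function of its hyper-parameters; here a simple quadratic stands in, and the inertia and acceleration constants are conventional defaults, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0):
    """Plain particle swarm optimization (the PSO half of PSOGSA, as a
    sketch) for searching a classifier's hyper-parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Stand-in objective with known optimum at (1, -2); in the paper this would
# be the TWSVM's validation error over its parameter space.
best, val = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                         bounds=[(-5, 5), (-5, 5)])
```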



Improved Lane Line Detection Algorithm Based on Hough Transform
Abstract
To simplify lane line detection based on the Hough transform, we propose an algorithm that identifies lane lines directly in Hough space. The image undergoes the Hough transform, and the points conforming to the parallelism, length and angle, and intercept characteristics of lane lines are selected in Hough space. These points are directly converted into lane line equations. The lane lines are then fused and their properties identified. The experimental results show that lanes can be identified well on expressways and structured roads. Compared with the traditional algorithm, identification is effectively improved.
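Selecting parallel lane lines directly in Hough space can be sketched with a minimal accumulator: parallel lines share the same angle θ and differ only in ρ, which is exactly the parallelism characteristic the algorithm exploits. The synthetic points and the suppression window below are illustrative choices.

```python
import numpy as np

def hough_accumulate(points, h, w, n_theta=180):
    """Accumulate edge points (y, x) into a (theta, rho) Hough space."""
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1
    return acc, thetas, diag

# Two parallel "lane" lines: y = x and y = x + 20 (same angle, different rho).
pts = [(i, i) for i in range(50)] + [(i + 20, i) for i in range(50)]
acc, thetas, diag = hough_accumulate(pts, 80, 60)

# Strongest peak, then the next peak after suppressing nearby rho cells.
t1, r1 = np.unravel_index(np.argmax(acc), acc.shape)
acc2 = acc.copy()
acc2[t1, max(r1 - 5, 0):r1 + 6] = 0
t2, r2 = np.unravel_index(np.argmax(acc2), acc2.shape)
```

The two peaks land on the same θ row, so a parallelism check in Hough space reduces to comparing angle indices.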



A Fast Fourier based Feature Descriptor and a Cascade Nearest Neighbour Search with an Efficient Matching Pipeline for Mosaicing of Microscopy Images
Abstract
Automatic mosaicing is an important image processing application, and we propose several improvements and simplifications to the image registration pipeline used in microscopy to automatically construct large images of whole specimen samples from a series of images. First, we propose a feature descriptor based on the amplitudes of a few elements of the Fourier transform, which makes it fast to compute; it can be used for any image matching and registration application where scale and rotation invariance are not needed. Second, we propose a cascade matching approach that reduces the time for the nearest neighbour search considerably, making it almost independent of feature vector length. Moreover, several improvements are proposed that speed up the whole matching process: faster interest point detection, a regular sampling strategy, and a deterministic false-positive removal procedure that finds the transformation. All steps of the improved pipeline are explained, and the results of comparative experiments are presented.
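A sketch of the two ideas, under our own simplifying assumptions: the descriptor keeps the amplitudes of a few low-frequency Fourier coefficients (amplitudes are invariant to circular translation, and no scale or rotation invariance is attempted), and the cascade search prunes candidates on the first few vector elements before comparing full vectors.

```python
import numpy as np

def fourier_descriptor(patch, k=4):
    """Amplitudes of a few low-frequency 2D Fourier coefficients."""
    return np.abs(np.fft.fft2(patch))[:k, :k].ravel()

def cascade_nn(query, db, stage=4, margin=2.0):
    """Cascade nearest neighbour: prune with the first `stage` elements,
    then compare full vectors only for the surviving candidates."""
    partial = np.linalg.norm(db[:, :stage] - query[:stage], axis=1)
    keep = np.where(partial <= partial.min() * margin + 1e-9)[0]
    full = np.linalg.norm(db[keep] - query, axis=1)
    return keep[np.argmin(full)]

rng = np.random.default_rng(4)
patches = [rng.normal(size=(16, 16)) for _ in range(5)]
db = np.stack([fourier_descriptor(p) for p in patches])
# A circularly shifted copy of patch 2 has an identical amplitude spectrum.
query = fourier_descriptor(np.roll(patches[2], 5, axis=0))
match = cascade_nn(query, db)
```

Because most candidates are rejected after comparing only `stage` elements, the search cost is dominated by the short prefix, which is what makes it nearly independent of the full descriptor length.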



Change Detection based on Difference Image and Energy Moments in Remote Sensing Image Monitoring
Abstract
Continuous monitoring of the environment using remote sensing images requires effective techniques. Two new methods for remote sensing image change detection are proposed. The first method is based on the notion of a difference image and image histograms. A complementary pair of images is proposed as the main representation of a difference image, allowing automatic separation of changes in ground objects without loss or distortion. The use of histograms according to the variation of image brightness (increasing or decreasing) provides opportunities for the assessment and experimental verification of existing approaches to the selection of automatic detection thresholds. The second method is based on energy moments of image rows and/or columns. It can find image changes of even one pixel and differs from existing methods in its simpler algorithm and its ability to extract even small changes. The proposed image representation can be considered an integral feature of the whole image. The methods have been tested on real images. Compared to state-of-the-art methods, our methods can detect changes in real time with high accuracy when deployed on a standard computer.
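Both ideas can be sketched compactly: the complementary pair splits the signed difference into a brightness-increase image and a brightness-decrease image (so nothing is lost to clipping or wrap-around), and a per-row "energy" statistic flags rows whose content changed, even by one pixel. The exact definition of the paper's energy moments is not reproduced here; the row-energy difference below is our simplified stand-in.

```python
import numpy as np

def difference_pair(img1, img2):
    """Complementary pair of difference images: the positive part holds
    brightness increases, the negative part holds decreases."""
    d = img1.astype(int) - img2.astype(int)
    return (np.clip(d, 0, None).astype(np.uint8),
            np.clip(-d, 0, None).astype(np.uint8))

def row_energy_change(img1, img2):
    """Per-row energy difference: flags rows where content changed."""
    e1 = (img1.astype(float) ** 2).sum(axis=1)
    e2 = (img2.astype(float) ** 2).sum(axis=1)
    return np.abs(e1 - e2)

rng = np.random.default_rng(5)
before = rng.integers(0, 50, size=(32, 32)).astype(np.uint8)
after = before.copy()
after[10, 7] = 255                  # a single-pixel change
inc, dec = difference_pair(after, before)
rows = row_energy_change(before, after)
```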



A Real-Time Algorithm for Small Group Detection in Medium Density Crowds
Abstract
In this paper, we focus on the task of small group detection in crowded scenarios. Small groups are widely considered one of the basic elements of crowds, so distinguishing group members from individuals in the crowd is a major challenge; it is also a basic problem in video surveillance and scene understanding. We propose a solution for this task that runs in real time and works in both low- and medium-density crowded scenes. In particular, we build a social-force-based collision avoidance model for each individual to predict goal directions, and employ the predicted goal directions, instead of the traditional positions and velocities, in collective motion detection to find group members. We evaluate our approach on three datasets including tens of challenging crowded scenarios. The experimental results demonstrate that our proposed approach is not only highly accurate but also more practical than other state-of-the-art methods.



An Obstacle Detection Method for Visually Impaired Persons by Ground Plane Removal Using Speeded-Up Robust Features and Gray Level Co-Occurrence Matrix
Abstract
The rapid increase in the density of pedestrians and vehicles on the roads has made the lives of visually impaired people very difficult. We present the design of a smartphone-based, cost-effective system to guide visually impaired people to walk safely on the roads by detecting obstacles in real-time scenarios. A monocular vision based method is used to capture video, and frames are extracted from it after removing the blurriness caused by camera motion. For each frame, a computationally simple approach is proposed for detecting and removing the ground plane. After removing the ground plane, Speeded-Up Robust Features (SURF) of the non-ground area are computed and compared with the features of obstacles. An active contour model is used to segment the area of the non-ground image whose SURF features match obstacle features; this area is referred to as the Region of Interest (ROI). To check whether the ROI belongs to an obstacle, Gray Level Co-occurrence Matrix (GLCM) features are calculated and passed to a classification model. Classification results show that the system can efficiently detect obstacles known to the system in near real time.
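The GLCM step can be sketched without any vision library: co-occurrences of quantized gray levels for a horizontal-neighbour offset, followed by the standard contrast, energy, and homogeneity features. The offset, quantization, and feature choice are common defaults, not necessarily the paper's.

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM (horizontal neighbour) with contrast/energy/homogeneity."""
    q = np.clip((img.astype(int) * levels) // 256, 0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontal pixel pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)                      # co-occurrence counts
    P /= P.sum()
    i, j = np.indices(P.shape)
    return {"contrast": ((i - j) ** 2 * P).sum(),
            "energy": (P ** 2).sum(),
            "homogeneity": (P / (1.0 + np.abs(i - j))).sum()}

flat = np.full((32, 32), 128, dtype=np.uint8)    # uniform, ground-plane-like
rng = np.random.default_rng(6)
textured = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
f_flat, f_tex = glcm_features(flat), glcm_features(textured)
```

A uniform patch has zero contrast and maximal energy, while a textured patch scores high contrast, which is the kind of separation a GLCM-based classifier exploits to tell obstacles from background.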



A Novel Approach based on Average Information Parameters for Investigation and Diagnosis of Lung Cancer using ANN
Abstract
In this paper, an approach based on average information parameters for lung cancer detection and diagnosis is proposed. The suggested methodology builds on average information parameters, utilizing image processing tools for lung cancer investigation. The real issue with lung cancer is the time required for physical diagnosis, which increases the risk of death. The proposed technique would therefore help medical practitioners make precise and superior decisions in lung cancer detection. Microscopic lung images are analyzed and investigated using digital image processing techniques, which also restore the quality of images degraded for various reasons, including random noise. Statistical and mathematical parameters such as entropy, standard deviation, mean, variance, and MSE are computed under the average information method, and the statistical range of each parameter is calculated over a number of iterations. The impact of each individual statistical parameter on the lung cancer images is analyzed, and finally an artificial neural network makes the final decision in the lung cancer diagnosis. The paper also rejects the null hypothesis using one of the standard statistical tests.
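The average information parameters named above are standard image statistics; a brief sketch (MSE is omitted since it requires a reference image):

```python
import numpy as np

def average_information_parameters(img, bins=256):
    """Entropy, mean, variance, and standard deviation of an image,
    as used in an 'average information' style analysis (sketch)."""
    img = np.asarray(img, dtype=float)
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins for log
    entropy = -(p * np.log2(p)).sum()             # Shannon entropy in bits
    return {"entropy": entropy, "mean": img.mean(),
            "variance": img.var(), "std": img.std()}

rng = np.random.default_rng(7)
uniform_noise = rng.integers(0, 256, size=(64, 64))
constant = np.full((64, 64), 100)
u = average_information_parameters(uniform_noise)
c = average_information_parameters(constant)
```

A constant image carries zero entropy while uniform noise approaches the 8-bit maximum, illustrating how these parameters spread over a usable range for classification.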



Automatic Methods for Mycobacterium Detection on Stained Sputum Smear Images: a Survey
Abstract
Mycobacterium tuberculosis (MTB) is one of the leading causes of adult morbidity and mortality worldwide, especially in developing countries like India. Tuberculosis is caused by the mycobacterium bacillus, which mainly infects the lung region but sometimes affects other parts of the body as well. Sputum smear microscopy is the most widely used tool for MTB diagnosis in developing countries since it is inexpensive. Manual detection of bacilli in stained sputum images is time consuming: it may take 15 minutes per slide, which limits the number of slides examined and affects the accuracy of the output. Computer-aided automatic methods therefore provide an optimal solution for diagnosing the disease in less time and without highly experienced laboratory experts. Many papers on automatic tuberculosis diagnosis from microscopic sputum images have been published; this paper surveys those published between 2002 and 2016. It thus provides an overview of the available methods and their accuracy, and should be useful for researchers and practitioners working on the automation of sputum smear microscopy.



Recognition of Handwritten Arabic Characters using Histograms of Oriented Gradient (HOG)
Abstract
Optical Character Recognition (OCR) is the process of recognizing printed or handwritten text in paper documents. This paper proposes an OCR system for Arabic characters. In addition to the preprocessing phase, the proposed recognition system consists of three main phases. In the first phase, we employ word segmentation to extract characters. In the second phase, Histograms of Oriented Gradients (HOG) are used for feature extraction. The final phase employs a Support Vector Machine (SVM) to classify characters. We have applied the proposed method to the recognition of Jordanian city, town, and village names as a case study, in addition to many other words covering character shapes not found in the Jordanian place names. The set has been carefully selected to include every Arabic character in all four of its forms. To this end, we built our own dataset consisting of more than 43,000 handwritten Arabic words (30,000 used for training and 13,000 for testing). Experimental results show the success of our recognition method compared to state-of-the-art techniques, achieving very high recognition rates exceeding 99%.
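A minimal HOG sketch conveying the feature-extraction phase: gradient magnitude-weighted orientation histograms over non-overlapping cells, without the block normalization of the full descriptor. Cell size and bin count follow the common 8-pixel/9-bin convention; the paper's exact settings are not given here.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Minimal HOG sketch: per-cell orientation histograms weighted by
    gradient magnitude (no block normalization)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned gradients
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            hist = np.zeros(bins)
            b = bin_idx[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            np.add.at(hist, b, m)                        # weighted votes
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feats)

# A vertical edge: all gradients are horizontal, so one bin dominates.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
f = hog_cells(img)
```

The concatenated per-cell histograms form the fixed-length vector that the SVM stage would consume.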



Epileptiform Activity Detection and Classification Algorithms of Rats with Post-traumatic Epilepsy
Abstract
In this paper, the problem of detecting epileptiform activity in the EEG of rats before and after traumatic brain injury is considered. Experts in neurology manually marked up the signals as epileptiform discharges and sleep spindles. A proprietary event detection algorithm based on time-frequency analysis of wavelet spectrograms was created. A feature space was built from the PSD and frequency of each detected event, and each feature was assessed for its importance in predicting epileptic activity. The resulting predictors were used to train a logistic regression model, which estimated the feature weights in the epilepsy probability function. The proposed model was validated by Monte Carlo simulation of cross-validations, which showed a prediction accuracy of around 80%. The proposed epilepsy prediction model, as well as the event detection algorithm, can be applied to the identification of epileptiform activity in long-term recordings of rats and to the analysis of disease dynamics.
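The final modeling step, logistic regression mapping event features to an epilepsy probability, can be sketched with plain gradient descent; the two synthetic features below only stand in for the paper's PSD and frequency predictors.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=3000):
    """Logistic regression by gradient descent: learns weights mapping
    event features to a probability of epileptiform activity."""
    Xb = np.column_stack([np.ones(len(X)), X])      # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)           # log-loss gradient
    return w

def predict_proba(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic events: the positive class has higher feature values on average.
rng = np.random.default_rng(8)
X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
X1 = rng.normal([2.0, 1.0], 1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]
w = fit_logistic(X, y)
acc = ((predict_proba(X, w) > 0.5) == y).mean()
```

The learned weights play the role of the feature weights in the epilepsy probability function described above.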



Algorithms for Optimal Localization of a Random Point-Pulse Source Uniformly Distributed over a Search Interval
Abstract
A time-optimal technique is developed for the spatial localization of a random point-pulse source that is uniformly distributed over a search interval and manifests itself by generating unit pulses (delta functions) at arbitrary instants. Localization is carried out using a receiver with a search window that is arbitrarily tuned in time. The presented algorithms are generalized to the case where the search is performed by a system of several receivers.


