


Vol 27, No 3 (2017)
- Year: 2017
- Articles: 32
- URL: https://bakhtiniada.ru/1054-6618/issue/view/12249
Mathematical Method in Pattern Recognition
Approximation polynomial algorithm for the data editing and data cleaning problem
Abstract
The work considers mathematical aspects of one of the most fundamental problems of data analysis: searching a collection of objects for a subset of similar ones. In particular, the problem appears in connection with data editing and cleaning (the removal of irrelevant, i.e., dissimilar, elements). We consider a model of this problem, i.e., the problem of searching a finite set of points in Euclidean space for a subset of maximal cardinality whose quadratic variation with respect to its (unknown) centroid does not exceed a given fraction of the quadratic variation of the input set with respect to its centroid. It is proved that the problem is strongly NP-hard, and a polynomial-time 1/2-approximation algorithm is proposed. Results of numerical simulations demonstrating the effectiveness of the algorithm are presented.
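The constraint in the abstract above can be made concrete with a small sketch. The greedy heuristic below is illustrative only (it is not the paper's 1/2-approximation algorithm): it drops the point farthest from the current centroid until the subset's quadratic variation falls below the given fraction `alpha` of the input set's variation.

```python
import numpy as np

def greedy_similar_subset(points, alpha):
    """Illustrative greedy heuristic (not the paper's algorithm):
    repeatedly drop the point farthest from the current centroid
    until the subset's quadratic variation is at most alpha times
    the quadratic variation of the full input set."""
    pts = np.asarray(points, dtype=float)
    total_var = np.sum((pts - pts.mean(axis=0)) ** 2)
    keep = list(range(len(pts)))
    while len(keep) > 1:
        sub = pts[keep]
        centroid = sub.mean(axis=0)
        if np.sum((sub - centroid) ** 2) <= alpha * total_var:
            break
        # drop the point contributing most to the quadratic variation
        keep.pop(int(np.argmax(np.sum((sub - centroid) ** 2, axis=1))))
    return keep
```

On a cluster with one outlier, the heuristic correctly retains the tight cluster and discards the outlier, though unlike the paper's algorithm it carries no approximation guarantee.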



Automation of image categorization with most relevant negatives
Abstract
Effective image categorization requires a learned classifier. Such a classifier may misclassify images that are visually similar to the positive examples. Negatives are generally sampled at random. In this paper, we improve Negative Bootstrap so that it efficiently selects the most relevant negatives. To find the most frequently misclassified visually similar images quickly, the fast intersection kernel SVM is generalized and used for classification. The accuracy of the classified visual concepts is measured with standard performance metrics, and several different metrics are used to assess the quality of the selected negatives. Manual labeling of negatives can be avoided by using the efficient Negative Bootstrap algorithm.



A method of reducing the number of members of the committee systems of linear inequalities
Abstract
Under mild conditions, the problem of discriminant analysis reduces to a system of linear inequalities. However, this system can turn out to be inconsistent, which happens rather frequently; in that case the method of committees is used. The quality of a committee improves as the number of its members decreases. A method of reducing the number of committee members, whenever this is fundamentally possible, is considered.



Computer geometry algorithms in feature space dimension reduction problems
Abstract
One of the goals of reducing the dimension of a multidimensional feature space is to visualize the arrangement of classes on a plane. The quality of such a representation depends on the dimension-reduction method and on the measure of class closeness used. In this paper, we propose a technique for reduction to two dimensions based on computing the convex hulls of classes and determining their degree of intersection. Experiments carried out in MATLAB show that, on the training sample, this technique makes it possible to identify the minimum potentially achievable degree of intersection of classes that intersect in the multidimensional space and to obtain a mapping of the classes onto the plane that attains this minimum degree of intersection.
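A building block of such a technique is the planar convex hull of a projected class. A self-contained sketch using Andrew's monotone chain, with the intersection-degree computation itself left to the paper:

```python
def convex_hull(points):
    """Andrew's monotone chain: returns the hull vertices of a set of
    2D points in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # concatenate, dropping the duplicated endpoints
    return lower[:-1] + upper[:-1]
```

Interior points of a class are discarded, so only the hull vertices need to be tested when estimating how strongly two projected classes overlap.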



Vicinal support vector classifier: A novel approach for robust classification based on SKDA
Abstract
In this paper, we present a detailed study and comparison of different classification algorithms. Our main purpose is to study the Vicinal Support Vector Classifier (VSVC) and its relations to other state-of-the-art classifiers. To this end, we trace the historical development of each classifier, derive the mathematics behind it, and describe the relations that exist between some of them, in particular between the VSVC and the other classifiers. We then apply them to two benchmark datasets widely used by the research community, namely the MIT-CBCL face dataset and the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. We show that, despite its simplicity compared to the other state-of-the-art classifiers, the VSVC yields very robust classification results and provides some practical advantages over the other classifiers.



New model metrics between relations of n-valued logic and uncertainty of automatic clustering of statements
Abstract
Statements that can be recorded by multivalued logical relations are studied. Using model theory, we introduce distances between relations of Łukasiewicz n-valued logic in which the various logical values are taken into account as completely as possible, introduce an uncertainty measure for statements, and formulate and prove theorems about the properties of these quantities. Using the introduced distances and uncertainty measures, we adapt known clustering algorithms to the clustering of sets of statements and use examples to examine the results for various values of n. We study collective distances, which are the most effective in the sense of clustering indices and can be used to generate new distances for larger sets of statements.



Application-driven inverse-type constraint satisfaction problems
Abstract
Mathematical science, in tight integration with the natural and life sciences, provides sophisticated models and algorithms for knowledge acquisition. The rapid overall development of today's science and technology is a direct consequence of such interdisciplinary work. At the same time, there are research domains, e.g., biology, that remain semi-exact disciplines (except for their genomics part). This is due to the insufficient readiness of the mathematical, physical, and chemical sciences to provide a proper interpretation of real-size biological interrelations. Enriching mathematics to this new level of sophistication may be a long-lasting but unavoidable process. The current series of studies is driven by a number of applied problems centered on an integrative technique for extracting knowledge from fragmented data and descriptions. Numerous links connect these ideas to problems of artificial intelligence, intelligent information management, high-performance computation, and other research domains. Our objective is to provide integrated solutions to inverse-type combinatorial problems that support a set of applied problems. The base set of applied problems we consider involves radiation therapy planning, wireless sensor network integrated connectivity-coverage protocols with decentralized management, and a network tomography scenario devoted to energy minimization in networks. The necessary mathematical technique for solving these problems centers on describing models by sets of constraints and relations and on integrating partial knowledge about these models. Two scenarios are considered: one based on the use of projections and their interpretation, and a second based on local neighbourhood analysis (generic projections).
Given projections and/or neighbourhoods (in an inverse manner), the tasks are to analyse consistency, to characterize the set of solutions, and to reconstruct one or all solutions of these problems. At the mathematical level, the problem is related to well-known open problems such as Berge's hypothesis on simple hypergraphic degree sequences. The techniques used in the investigation involve Boolean domain studies and n-cube geometry, Lagrangian relaxation together with integer linear programming, Minkowski geometry, Voronoi diagrams, and constraint satisfaction mathematics. Prototype solutions of, and demonstrations on, the mentioned applied problems are provided.



Linear classifiers and selection of informative features
Abstract
In this work, to construct classifiers for two linearly inseparable sets, we formulate the problem of minimizing the margin of incorrect classification and consider approaches to obtaining approximate solutions and estimates of its optimal value. Results of computational experiments comparing the proposed approaches with SVM are presented. The problem of identifying informative features in high-dimensional diagnostic applications is analyzed, and algorithms for its solution are developed.



Representation, Processing, Analysis, and Understanding of Images
Applications of algebraic moments for edge detection for locally linear model
Abstract
We describe a subpixel edge detection approach for images, based on the algebraic moments of the brightness function of halftone images. For an ideal two-dimensional edge, we consider a model with four parameters: the edge orientation, the distance from the edge to the center of the mask, and the brightness values on both sides of the edge. Six algebraic moments are used to obtain all subpixel parameters of the edge, and masks are used to compute the moments rapidly. A distinctive feature of the proposed approach is that masks of almost any size can be used; they are computed by means of explicit relations, which are also provided in the present paper. By increasing the mask size, one can increase the accuracy of subpixel edge parameter detection, which is especially important for high-definition images. We present experiments demonstrating the efficiency of the proposed approach.



Combining rules using local statistics and uncertainty estimates for improved ensemble segmentation
Abstract
Segmentation with an ensemble of classifiers (a committee machine) combines the results of multiple classifiers to improve performance over single classifiers. In this paper, we propose new concepts for combining rules. They are based on (1) the uncertainties of the individual classifiers, (2) combining the results of existing combining rules, (3) combining local class probabilities with the existing segmentation probabilities in each individual segmentation, and (4) uncertainty-based weights for the weighted majority rule. The results show that the proposed local-statistics-aware combining rules can reduce the effect of noise in the individual segmentation results and consequently improve the performance of the final (combined) segmentation. Moreover, combining existing combining rules and using the proposed uncertainty-based weights can improve performance further.
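The uncertainty-weighted majority rule (concept 4) can be sketched as follows; reducing each segmenter's uncertainty to a single scalar is an assumed simplification of the paper's per-classifier uncertainty estimates.

```python
import numpy as np

def weighted_majority(label_maps, uncertainties):
    """Uncertainty-weighted majority vote over per-pixel label maps.
    label_maps: list of (H, W) integer arrays, one per segmenter;
    uncertainties: one scalar in (0, 1] per segmenter, an assumed
    summary of its uncertainty estimate (lower -> more trusted)."""
    maps = np.stack(label_maps)               # (K, H, W)
    w = 1.0 / np.asarray(uncertainties, dtype=float)
    w /= w.sum()                              # normalise the weights
    n_labels = int(maps.max()) + 1
    votes = np.zeros((n_labels,) + maps.shape[1:])
    for k in range(maps.shape[0]):
        for lab in range(n_labels):
            votes[lab] += w[k] * (maps[k] == lab)
    return votes.argmax(axis=0)               # per-pixel winning label
```

A confident segmenter (low uncertainty) can thus overrule several noisy ones at pixels where they disagree.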



Fast computation of local displacement by stereo pairs
Abstract
A new method is proposed for fast computation of the local displacement (6 DOF) of a camera or robot from matched 3D point clouds obtained from images by computer vision methods. In this method, the local geometric transformation matrix is computed from a combination of external coordinate systems generated from random sample points. Comparative estimates of the method's efficiency are obtained from computational experiments.
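The paper's construction from external coordinate systems is not reproduced here, but the underlying task, recovering a rigid 6-DOF transform from matched 3D point clouds, has a standard least-squares solution (the Kabsch/SVD method) that can serve as a reference:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping the matched 3D
    point set src onto dst, via the Kabsch/SVD method."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given exact correspondences the recovery is exact; with noisy matches it minimizes the sum of squared residuals, which is why robust methods sample correspondences before solving.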



Sparse coding for image classification based on spatial pyramid representation
Abstract
Many efforts have been devoted to applying sparse coding to image classification with the aim of minimizing both the reconstruction error and the classification error. The approaches proposed so far either separate the reconstruction and classification stages, which leaves room for further optimization, or form a complicated training model that cannot be solved efficiently. In this paper, we first propose extracting a spatial pyramid representation as the image feature, which forms the foundation of dictionary learning and sparse coding. We then develop a novel sparse coding model that learns the dictionary and the classifier simultaneously, so that an optimal result can be obtained and the model can be solved efficiently by K-SVD. Experiments show that the suggested approach outperforms other well-known approaches in terms of classification accuracy and computation time.



Texture classification using partial differential equation approach and wavelet transform
Abstract
Textures and patterns are distinguishing characteristics of objects, and texture classification plays a fundamental role in computer vision and image processing applications. In this paper, texture classification using a PDE (partial differential equation) approach and the wavelet transform is presented. The proposed method uses the wavelet transform to obtain directional information about the image, and a PDE for anisotropic diffusion to obtain the texture component of the image. The feature set is obtained by computing various statistical features of the texture component. Linear discriminant analysis (LDA) enhances the separability of the texture feature classes, and the features obtained from LDA serve as class representatives. The proposed approach is evaluated on three gray-scale texture datasets: VisTex, Kylberg, and Oulu. Classification accuracy is measured with a k-NN classifier. The experimental results show the effectiveness of the proposed method compared to other methods in the literature.
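The abstract does not specify which diffusion PDE is used; a common choice for the anisotropic diffusion step is the classic Perona-Malik scheme, sketched here with periodic borders for brevity (the parameters `kappa` and `dt` are assumptions):

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion (periodic borders for
    brevity): smooths within regions while preserving strong edges,
    so img - result approximates a texture component."""
    u = np.asarray(img, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u       # differences toward the
        ds = np.roll(u, -1, axis=0) - u      # four neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Small gradients (noise, texture) diffuse away while large gradients (edges) are preserved by the edge-stopping function, which is what makes the residual usable as a texture component.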



Robust image matching with cascaded outliers removal
Abstract
Finding feature correspondences between a pair of images is a fundamental problem in computer vision, underlying 3D reconstruction and target recognition. In practice, feature-based matching methods often produce a high percentage of incorrect matches, reducing matching accuracy and making the result unsuitable for subsequent processing. In this paper, we develop a novel algorithm to find more good correspondences. First, SURF keypoints are detected and SURF descriptors extracted. Initial matches are then obtained from the Euclidean distances between SURF descriptors. Third, false matches are removed by sparse representation theory while, at the same time, information carried by the SURF keypoints, such as scale and orientation, is exploited to form geometric constraints that delete further incorrect matches. Finally, Delaunay triangulation is adopted to refine the matches and obtain the final set. Experimental results on real-world image matching datasets demonstrate the effectiveness and robustness of the proposed method.



A multiresolution wavelet networks architecture and its application to pattern recognition
Abstract
This paper addresses a challenging research problem at the intersection of wavelet neural network theory and pattern recognition. A novel wavelet network architecture based on multiresolution analysis (MRWN) and a novel learning algorithm founded on the fast wavelet transform (FWTLA) are proposed. FWTLA has several advantages over existing algorithms. By exploiting this algorithm to train the MRWN, we propose a pattern recognition system (FWNPR). We first show its classification efficiency on many well-known benchmarks and then in many pattern recognition applications. Extensive empirical experiments compare the proposed methods with other approaches.



Non-blind digital watermarking with enhanced image embedding capacity using DMeyer wavelet decomposition, SVD, and DFT
Abstract
Among the requirements of digital color image watermarking, capacity is the major component to be addressed effectively. To address it, we propose a non-blind watermarking scheme that inserts a color image into another color image of the same size. The method achieves reasonably good perceptual similarity, as measured by acceptable peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index values. It uses a single-level DMeyer discrete wavelet transform (DWT) to obtain the approximation coefficients, where most of the image information is stored; the discrete Fourier transform (DFT) to obtain a set of components sufficient to describe the whole image; and singular value decomposition (SVD) to obtain a reliable orthogonal matrix of computationally sustainable components of the transformed image. The method is robust against attacks such as rotation, cropping, and JPEG compression, and against salt-and-pepper, Gaussian, and speckle noise.
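The SVD stage of such a scheme can be sketched in isolation: watermark information is added to the singular values of the host, and, since the scheme is non-blind, extraction uses the original host. The DWT and DFT stages are omitted, square blocks are assumed, and the strength parameter `alpha` is an assumption.

```python
import numpy as np

def embed_svd(host, mark, alpha=0.05):
    """Embed watermark information into the host's singular values:
    S' = S_host + alpha * S_mark.  The DWT/DFT stages of the full
    scheme are omitted; alpha is an assumed strength parameter."""
    Uh, Sh, Vht = np.linalg.svd(np.asarray(host, dtype=float))
    Sm = np.linalg.svd(np.asarray(mark, dtype=float), compute_uv=False)
    return Uh @ np.diag(Sh + alpha * Sm) @ Vht

def extract_svd(marked, host, alpha=0.05):
    """Non-blind extraction: recover the watermark's singular values
    using the original host image."""
    S1 = np.linalg.svd(np.asarray(marked, dtype=float), compute_uv=False)
    S0 = np.linalg.svd(np.asarray(host, dtype=float), compute_uv=False)
    return (S1 - S0) / alpha
```

Small `alpha` keeps PSNR high (imperceptibility), while the stability of singular values under common attacks is what gives such schemes their robustness.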



Software and Hardware for Pattern Recognition and Image Analysis
Research support system for stochastic data processing
Abstract
The paper describes a research support system named “MSM Tools” that can be used for stochastic modelling of real processes in various information systems and implements the heterogeneous computing paradigm. The proposed approach to data mining is based on the method of moving separation of probability mixtures. To obtain statistical estimates of the unknown parameters of mixed probability models, the system implements several modifications of the EM algorithm (including grid modifications for the NVIDIA CUDA architecture), which is widely used in areas such as pattern recognition, clustering, classification, and the processing of censored and/or truncated data. An example of real data analysis with the “MSM Tools” service is given.
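A minimal version of the EM iteration the system builds on, for a two-component univariate Gaussian mixture (the system's actual modifications, including the CUDA grid variants, go well beyond this sketch):

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Basic EM for a two-component univariate Gaussian mixture:
    alternate posterior responsibilities (E-step) with weighted
    re-estimation of the parameters (M-step)."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])          # crude initialisation
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per point
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var
```

On well-separated data the iteration recovers the component means and weights; grid modifications parallelize exactly the per-point E-step, which dominates the cost.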



Applied Problems
Gait recognition based on curvelet transform and PCANet
Abstract
Conventional gait recognition schemes have poor recognition accuracy in the presence of covariates, mainly because of ineffective and inefficient representations and discriminative feature extraction schemes. This paper presents a new technique for extracting discriminative features from the masked gait energy image based on the curvelet transform and PCANet. The binary gait silhouette video sequence obtained by preprocessing is converted into a masked gait energy image, and the direction and edge representation ability of the fast discrete curvelet transform is then employed. The nonlinear, non-invertible image-space-to-feature-space mapping of PCANet is used to extract discriminative, robust features. The suitability and effectiveness of the proposed scheme are demonstrated by experiments on the standard, publicly available USF HumanID benchmark database.
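The masked gait energy image mentioned above starts from the plain gait energy image, which is simply the per-pixel mean of the aligned binary silhouettes over a gait cycle (the masking, curvelet, and PCANet stages are not reproduced here):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait energy image: the per-pixel mean of aligned binary
    silhouette frames over one gait cycle."""
    return np.mean(np.stack(silhouettes).astype(float), axis=0)
```

Pixels that are foreground in every frame get value 1, never-foreground pixels get 0, and intermediate values encode limb motion, which is what the subsequent transforms exploit.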



Star pattern recognition based on features invariant under rotation
Abstract
A star pattern recognition algorithm based on features invariant under rotation is proposed. Guide star identification with the algorithm is performed on star images captured by a star sensor and on simulated images. The results indicate that the proposed method is more robust to position and magnitude noise than conventional ones, eliminates the rotation procedure, and avoids the influence of the grid size choice. The database stores just four floating-point numbers per pattern.



Development, investigation, and software implementation of a new mathematical method for automated identification of the lipid layer state by images of eyelid intermarginal space
Abstract
A new mathematical method is presented for identifying the state of the lipid layer from images of the intermarginal space of human eyelids. As initial data, we use images of imprints of the intermarginal space of the eyelids on a millipore filter after staining with osmium vapor. Each image is a grey-scale photograph in which the dark regions represent imprints of the sebaceous secretions from the orifices of the excretory ducts of the meibomian glands. The proposed method is designed to extract morphometric data from images of intermarginal space imprints. It yields the following statistical characteristics: the expectation and root-mean-square deviation of the imprint's intensity along a drawn line and in a selected region, the imprint's intensity along a drawn line, and variations in the imprint's thickness. The proposed approach is based on the combined use of mathematical morphology and mathematical statistics. The software implementation of the developed method, as well as the results of its experimental testing, is presented.



Development, investigation, and software implementation of a new mathematical method for automating the analysis of corneal endothelium images
Abstract
A new mathematical method is presented for processing and analyzing microscopic images of the epithelium posterius (endothelium) in the human eye cornea. As initial data, we use images of endothelial cells that are obtained noninvasively using a confocal microscope. Each image is a black-and-white photograph of the endothelial layer in the eye cornea with a bright-lighted center and dark periphery. The endothelial cells depicted in the images are light-colored, mostly hexagonal figures with a dark periphery. The proposed method is intended for detecting endothelial cells and determining some of their morphometric characteristics. The method yields an image with marked cells of hexagonal, pentagonal, and other (tetragonal and heptagonal) shapes, as well as a set of their characteristics. The proposed approach combines methods of mathematical morphology and image segmentation. The software implementation of the developed method, as well as the results of its experimental testing, is presented.



Algorithm for segmenting script-dependent portions in a bilingual Optical Character Recognition system
Abstract
Documents may contain multiple scripts, and recognizing such documents is a challenging task. Earlier OCR (Optical Character Recognition) systems were developed for documents containing only English or a single regional language, yet documents containing multiple scripts also need to be preserved for later use, so OCR designers must work to improve the accuracy of multi-script OCR. In this paper we describe the character recognition process for printed documents containing Roman and Odia text. Script separation is performed at the line level. We give a detailed description of the segmentation scheme, which uses the X-Y cut algorithm to isolate the text image into individual Odia and Roman lines. To distinguish between the Roman and Odia scripts, we consider features of both scripts along with line height: most Roman characters are linear as well as circular in shape, whereas Odia characters are circular and occupy more width than Roman characters. We also exploit the upper and lower matras that occur in Odia but are absent in English. After extracting the individual scripts from the bilingual document line by line, we send each line to its script-specific OCR for recognition. Thus, an algorithm is proposed for the identification of Odia and Roman scripts.
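The line-level step of an X-Y cut reduces to splitting the page wherever the horizontal projection profile drops to zero; a minimal sketch (the recursive column cuts and script classification are not reproduced):

```python
import numpy as np

def segment_lines(binary_img):
    """First step of a recursive X-Y cut: split a binary page image
    (1 = ink) into text lines wherever the horizontal projection
    profile drops to zero.  Returns (start_row, end_row) pairs."""
    profile = np.asarray(binary_img).sum(axis=1)  # ink count per row
    lines, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                 # a text line begins
        elif v == 0 and start is not None:
            lines.append((start, i))  # blank row ends the line
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines
```

The resulting line heights are one of the cues the paper uses to tell Odia lines (with upper and lower matras) from Roman ones.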



Video-based arm motion estimation and interaction with fuzzy predictive control
Abstract
Multi-projector display systems are gaining popularity for immersive virtual reality applications and scientific visualization. While recent work has addressed human interfaces that hide the distributed nature of these systems, there has been relatively little work on natural interaction modalities. In this paper, based on the discrete characteristics of the node distribution and the spatio-temporal coherence of the user's movement, we propose a non-contact interaction solution for multi-projector display systems. Using a virtual three-dimensional interactive rectangular parallelepiped, we establish a correspondence between the virtual scene and the position of the user's arm. To track the arm position robustly, an arm motion estimation method is designed based on fuzzy predictive control theory. To verify the efficiency and accuracy of the proposed method, various motion estimation algorithms were tested with and without fuzzy predictive control to stabilize the output.



Analysis and identification of kidney stone using Kth nearest neighbour (KNN) and support vector machine (SVM) classification techniques
Abstract
Kidney stone detection is a sensitive topic nowadays. It involves several problems, such as low image resolution, the visual similarity of kidney stones to surrounding tissue, and predicting stones in new kidney images. Ultrasound images have low contrast, which makes it difficult to detect and extract the region of interest. Therefore, the image must undergo preprocessing, which normally includes image enhancement; the aim is to obtain the best possible quality so that identification becomes easier. Medical imaging must be accurate, because it serves the highly sensitive medical field. In this paper, we first enhance the image with a median filter, a Gaussian filter, and unsharp masking. We then apply morphological operations such as erosion and dilation, use entropy-based segmentation to find the region of interest, and finally use KNN and SVM classification techniques to analyze the kidney stone images.
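Two of the enhancement steps can be sketched directly: a 3x3 median filter and unsharp masking. The Gaussian filtering stage is omitted, and the unsharp mask here reuses the median smoother for brevity (a Gaussian blur is the more common choice):

```python
import numpy as np

def median3(img):
    """3x3 median filter with edge replication; removes the speckle
    noise typical of ultrasound images."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # stack the nine shifted views and take the per-pixel median
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def unsharp(img, amount=1.0):
    """Unsharp masking: add back the difference between the image and
    a smoothed copy to sharpen edges."""
    img = np.asarray(img, dtype=float)
    return img + amount * (img - median3(img))
```

The median filter removes isolated salt-and-pepper outliers without blurring edges, after which unsharp masking boosts the edges that entropy-based segmentation later relies on.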



Classification of medicinal plant leaf image based on multi-feature extraction
Abstract
Medicinal plants are the main source of traditional Chinese medicine (TCM), which provides basic protection of human health. Research on, and application of, medicinal plant classification methodology has important implications for TCM resource preservation, TCM authentication, and the teaching of TCM identification. This paper proposes an automatic classification method based on leaf images of medicinal plants to address the limitations of manual classification. Our approach first preprocesses the leaf images; it then computes ten shape features (SF) and five texture features (TF); finally, it classifies the leaves using a support vector machine (SVM) classifier. The classifier was applied to leaf images of 12 different medicinal plants and achieved an average recognition rate of 93.3%. The result indicates that it is feasible to automatically classify medicinal plants using multi-feature extraction from leaf images in combination with an SVM. The paper provides a valuable theoretical framework for the research and development of medicinal plant classification systems.



Vibrational and hydroacoustic signal processing in the frequency domain and its software-hardware implementation
Abstract
The paper discusses an algorithm for spectral density estimation in the frequency domain that uses wavelet-based smoothing (wavelet thresholding). The suggested algorithm can be applied to vibrational and hydroacoustic signal processing to estimate signal parameters in the frequency domain, and it can operate on the Fourier periodogram without using signal samples in the time domain. We also propose a technique for working with signals of arbitrary length by applying the maximal overlap discrete wavelet transform, and we study the software-hardware implementation of the developed algorithms.
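The thresholding idea can be shown on a single-level Haar transform, a stand-in for the full wavelet scheme of the paper: soft-threshold the detail coefficients of the periodogram and invert the transform.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail halves."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def smooth_periodogram(pgram, thr):
    """Soft-threshold the single-level Haar detail coefficients of a
    periodogram (even length assumed) and invert the transform, a
    one-level stand-in for the full wavelet-thresholding scheme."""
    a, d = haar_step(np.asarray(pgram, dtype=float))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
    out = np.empty(2 * len(a))                         # inverse Haar
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Detail coefficients below the threshold, which mostly carry periodogram noise, are zeroed, while large coefficients carrying genuine spectral structure survive shrunken.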



A new method for automating the investigation of stem cell populations based on the analysis of the integral optical flow of a video sequence
Abstract
A new method is presented for automating the investigation of stem cell populations based on the automatic analysis of video sequences of images. For automatic image analysis, an integral optical flow apparatus is used. The proposed method classifies dynamic objects by constructing a pyramid of the integral optical flow. The main stages of the method are as follows: image capturing and processing, segmentation, measurement, and tissue description generation. Experimental tests confirm that the proposed method is capable of identifying the main stages of cellular development (mitosis, differentiation, and apoptosis).



Satellite image-based ancient dwelling fingerprint detection algorithm
Abstract
The paper proposes an elastic grid technique that separates a target satellite image into intersections of characteristic rows and columns, which extracts more local texture features. The relevant statistics of the gray level co-occurrence matrix (GLCM), reflecting the global texture features of each region, are then used to generate feature cells, which are merged into a fingerprint array. The fingerprint preserves both the global and the local features of the target satellite image. Because the cells have different ranges and variances, each cell is projected into a Gaussian kernel space by its own Gaussian function. The similarity of fingerprint cell components is calculated as the product of fingerprint cells under the Gaussian measure, and the similarity of fingerprint arrays as the sum of the similarities of their cells. Area and shape algorithms can be used for a preliminary separation of dwelling targets from village satellite images with a high preliminary detection rate. When the dwelling fingerprint algorithm based on the elastic grid and GLCM features is used, the chronological classification accuracy can exceed 85.2%.
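The GLCM and one of its statistics (contrast) can be computed directly; the offset, the number of gray levels, and the choice of statistic here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalised gray level co-occurrence matrix for offset (dx, dy)
    and the contrast statistic, one of the Haralick features used to
    build feature cells.  img must hold integer gray levels."""
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels))
    # count co-occurring gray-level pairs at the given offset
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    M /= M.sum()                                 # joint probabilities
    i, j = np.indices(M.shape)
    contrast = float(((i - j) ** 2 * M).sum())   # Haralick contrast
    return M, contrast
```

A uniform region yields a GLCM concentrated on the diagonal and zero contrast; textured regions spread mass off the diagonal, which is what distinguishes dwelling imprints from background.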



Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network
Abstract
Glaucoma is the second leading cause of blindness worldwide, with approximately 60 million cases reported in 2010. If not diagnosed in time, glaucoma causes irreversible damage to the optic nerve, leading to blindness. Optic nerve head examination, which involves measuring the cup-to-disc ratio, is considered one of the most valuable methods of structural diagnosis of the disease. Estimating the cup-to-disc ratio requires segmentation of the optic disc and optic cup in eye fundus images and can be performed by modern computer vision algorithms. This work presents a universal approach for automatic optic disc and cup segmentation based on deep learning, namely a modification of the U-Net convolutional neural network. Our experiments include comparisons with the best known methods on the publicly available databases DRIONS-DB, RIM-ONE v.3, and DRISHTI-GS. For both optic disc and cup segmentation, our method achieves quality comparable to current state-of-the-art methods while outperforming them in prediction time.
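Once the disc and cup masks are segmented, the vertical cup-to-disc ratio they are ultimately used to estimate is straightforward to compute; a minimal sketch on binary masks (using vertical extents, one common convention):

```python
import numpy as np

def cup_to_disc_ratio(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    the ratio of the vertical extents of cup and disc."""
    def vert_extent(mask):
        rows = np.where(np.asarray(mask).any(axis=1))[0]
        return rows[-1] - rows[0] + 1 if len(rows) else 0
    return vert_extent(cup_mask) / vert_extent(disc_mask)
```

A higher ratio indicates a larger excavation of the optic nerve head, which is the structural sign of glaucoma the screening pipeline looks for.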



Application of IF set oscillation in the field of face recognition
Abstract
When uncertainty rules the circumstances, the inefficiency of crisp sets becomes apparent; this was overcome by applying fuzzy sets to various real-life problems. Intuitionistic fuzzy (IF) sets were introduced to handle complex circumstances and provide better results, and fuzzy sets have proved their efficiency in data processing. Minimal structure fuzzy oscillation can be applied efficiently to image processing, especially face recognition. In this paper we introduce the concept of minimal structure oscillation based on IF sets. A face database, which may be considered the training set, is used to form the IF set minimal structure; non-membership pixel values are also calculated from the pixel values used to form the structure. The pixels of the membership and non-membership images are used to compute two new oscillatory operators, and we introduce four conditions on the values of these operators that are used to recognize a face. This concept can also be applied to other kinds of real-life data analysis, such as data mining. Along with the theoretical development, we present experimental results that demonstrate the precision of the face recognition algorithm. The proposed algorithm was implemented and tested in MATLAB 7.9, with experiments performed on the Face fix and ORL databases. The accuracy of the results illustrates the applicability of IF set minimal structure oscillation to face recognition.



Extracting hyponymy of domain entity using Cascaded Conditional Random Fields
Abstract
Entity hyponymy is an important semantic relation for building domain ontologies and knowledge graphs. Traditional methods for extracting hyponymy between domain concepts are limited to manual annotation or specific patterns. To address this problem, this paper proposes a new method for extracting hypernym-hyponym relations between domain entities with Cascaded Conditional Random Fields (CCRFs), i.e., a two-layer CRF model is employed to learn the hyponymy of domain entity concepts. The lower level of the CCRFs model captures long-distance dependencies among words and identifies the domain entity concepts, which are then combined in order. Pairs of entity concepts are obtained on the basis of definition template characteristics. The semantic pairs of concepts are then labeled in the high-level model by integrating assemblage characteristics and hyponymy demonstratives into the feature template, and finally the hypernym-hyponym relations between domain entities are identified. Experiments on real-world data sets demonstrate the performance of the proposed algorithms.



A hybrid approach for face alignment
Abstract
Face alignment is an indispensable step in face applications, yet locating facial landmarks in unconstrained scenes remains a challenge. In this paper, we propose an algorithm that performs face alignment accurately. A method based on the processing principles of the human retina is proposed to enhance the images and remove illumination noise. Following the locality principle of images, a distance constraint is imposed on the pixel-difference features around each landmark to achieve robustness. A random forest then maps the pixel-difference features to local binary features, and the obtained local binary features are used to jointly learn a linear regression for the final output. In addition, the inherent data structure is exploited to reduce the computational burden when performing maximum variance reduction at the split nodes of the random forest. Extensive experiments on public datasets show that the proposed approach locates facial landmarks accurately and rapidly.


