Computer and Information Sciences - Publications
Browsing Computer and Information Sciences - Publications by Author "Agarwal, Arun"
-
Item: A hierarchical learning approach for finding multiple vehicle number plate under cluttered background (2016-01-01)
Kirti; Rao, C. Raghavendra; Wankar, Rajeev; Agarwal, Arun
Traffic control is one of the biggest problems faced by surveillance departments in every country. Manual surveillance supported by well-defined rules and effective equipment is a common solution to this problem. However, traffic personnel often fail to isolate and recognize a number plate, and hence to penalize the owner of a vehicle that violated the traffic rules, because vehicles move at speed and number plates are mounted at arbitrary positions on the vehicle. This inability makes traffic surveillance a challenging research area. Automatic vehicle number plate recognition systems provide one solution to this problem but are constrained by various limitations. The most challenging aspect is detecting the number plates themselves in an image. The presence of multiple vehicle number plates and a cluttered background distinguishes our work from earlier approaches. A single-step process may not be able to detect all the number plates in an image, hence we propose a Hierarchical Filtering (HF) approach which applies several transformation functions to the input image. The proposed HF models the characteristics of vehicle number plates and labels candidate regions by fitting a Logistic Regression model. The proposed method is able to detect all the number plates present in an image, at the expense of also returning some non-number-plate regions, each as a minimum bounding rectangle. The proposed model is tested on a wide range of inputs, and the results are profiled in terms of precision and recall measures.
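The final labelling step described in this abstract, scoring candidate regions with a fitted Logistic Regression model, can be sketched as below. This is an illustrative sketch only, not the authors' implementation: the region features (aspect_ratio, edge_density, fill_ratio) and the training values are invented for the example.

```python
# Hypothetical sketch of the region-labelling step: the hierarchical filtering
# stage is assumed to have produced candidate regions described by a few
# features; a logistic-regression model then labels each region as plate or
# non-plate. Feature names and values below are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [aspect_ratio, edge_density, fill_ratio] of a candidate region.
X_train = np.array([
    [4.5, 0.62, 0.81],   # plate-like region
    [4.1, 0.55, 0.77],
    [1.0, 0.10, 0.30],   # background clutter
    [0.8, 0.20, 0.25],
])
y_train = np.array([1, 1, 0, 0])   # 1 = number plate, 0 = non-plate region

clf = LogisticRegression().fit(X_train, y_train)

# Score new candidate regions and keep those labelled as plates.
candidates = np.array([[4.3, 0.58, 0.79], [1.2, 0.15, 0.28]])
print(clf.predict(candidates))     # e.g. [1 0]
```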
-
Item: A knowledge-based design for structural analysis of printed mathematical expressions (2014-01-01)
Kumar, Pavan; Agarwal, Arun; Bhagvati, Chakravarthy
Recognition of Mathematical Expressions (MEs) is a challenging Artificial Intelligence problem, as MEs have a complex two-dimensional structure. ME recognition involves two stages: symbol recognition and structural analysis. Symbols are recognized in the first stage, and spatial relationships like superscript, subscript, etc. are determined in the second stage. In this paper, we focus on the structural analysis of printed MEs. For structural analysis, we propose a novel ternary-tree-based representation that captures the spatial relationships among the symbols in a given ME. The proposed tree structure is used for validation of the generated ME structure. The structure validation process detects errors based on domain knowledge (mathematics), and the error feedback is used to correct the structure. Therefore, our validation process incorporates an intelligent mechanism to automatically detect and correct errors. The proposed approach has been tested on an image database of 829 MEs collected from various mathematical documents, and experimental results are reported on them.
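A minimal sketch of what a ternary-tree representation of an ME might look like. The particular child roles used here (above, below, right on the baseline) and the toy linearization are assumptions made for illustration; the paper defines its own tree semantics and validation rules.

```python
# Hypothetical ternary-tree node for ME structure. The child roles
# (above / below / next-on-baseline) are an illustrative assumption,
# not the authors' exact definition.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MENode:
    symbol: str                       # recognized symbol, e.g. 'x', '2', '+'
    above: Optional["MENode"] = None  # superscript / upper-limit region
    below: Optional["MENode"] = None  # subscript / lower-limit region
    right: Optional["MENode"] = None  # next symbol on the same baseline

def to_latex(node: Optional[MENode]) -> str:
    """Flatten the tree into a LaTeX-like string (toy linearization)."""
    if node is None:
        return ""
    s = node.symbol
    if node.above:
        s += "^{" + to_latex(node.above) + "}"
    if node.below:
        s += "_{" + to_latex(node.below) + "}"
    return s + to_latex(node.right)

# x^2 + y
expr = MENode("x", above=MENode("2"), right=MENode("+", right=MENode("y")))
print(to_latex(expr))  # x^{2}+y
```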
-
Item: A knowledge-based segmentation algorithm for enhanced recognition of handwritten courtesy amounts (1999-01-01)
Hussein, Karim M.; Agarwal, Arun; Gupta, Amar; Wang, Patrick S.P.
A knowledge-based segmentation algorithm to enhance recognition of courtesy amounts on bank checks is proposed in this paper. The algorithm uses multiple contextual cues to enhance segmentation and recognition. The system described extracts context from the handwritten numerals and uses a syntax parser, based on a deterministic finite automaton, to provide feedback that enhances recognition. Further feedback is provided by a simple legal amount decoder that determines the word count and recognizes several key words (e.g. thousand and hundred), providing an additional semantic constraint on the dollar section. The segmentation analysis module presented is capable of handling a number of commonly used styles of courtesy amount representation. Both handwritten and machine-written courtesy and legal amounts were used to test the efficacy of the preprocessor for the check recognition system described in this paper. The substitution error was reduced by 30-40% depending on the input check mix. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
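A deterministic finite automaton of the kind mentioned above can be sketched as a small state machine that accepts syntactically valid courtesy amounts. The states, transitions and accepted format (optional thousands separators, two cents digits) are assumptions for illustration, not the automaton used in the paper.

```python
# Hypothetical DFA-style syntax check for a courtesy amount such as "1,234.56".
# States and transitions are illustrative assumptions only.
def accepts_courtesy_amount(s: str) -> bool:
    state = "START"
    for ch in s:
        if state == "START":
            state = "INT" if ch.isdigit() else "REJECT"
        elif state == "INT":
            if ch.isdigit():
                state = "INT"
            elif ch == ",":
                state = "GROUP"
            elif ch == ".":
                state = "CENTS0"
            else:
                state = "REJECT"
        elif state == "GROUP":
            state = "INT" if ch.isdigit() else "REJECT"
        elif state == "CENTS0":
            state = "CENTS1" if ch.isdigit() else "REJECT"
        elif state == "CENTS1":
            state = "DONE" if ch.isdigit() else "REJECT"
        else:
            state = "REJECT"
        if state == "REJECT":
            return False
    return state in {"INT", "DONE"}   # integer amount or amount with cents

print(accepts_courtesy_amount("1,234.56"))  # True
print(accepts_courtesy_amount("12..3"))     # False
```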
-
Item: A methodology for high availability of data for business continuity planning / disaster recovery in a grid using replication in a distributed database (2008-12-01)
Chidambaram, J.; Prabhu, C. S.R.; Rao, P. A. Narasimha; Wankar, Rajeev; Aneesh, C. Sreevallabh; Agarwal, Arun
High Availability systems ensure nonstop service delivery despite natural calamities or any other disruptions whatsoever. In a grid, high availability can be achieved by offering the services of a failed node from any other live node, provided redundant storage of the failed node's data and software is available on that live node. When replicating data in a grid for high availability, the techniques used in distributed databases can be exploited. Data replication is achieved better through efficient data distribution across the various nodes in the grid. In this paper, a methodology for high availability of data for Business Continuity Planning/Disaster Recovery (BCP/DR) in a grid environment is proposed, using some of the features of distributed query processing. With this approach, the OGSA-DQP (Open Grid Services Architecture - Distributed Query Processing) standard can be extended to OGSA-HA for high availability.
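A toy illustration of the failover idea this abstract describes: when each data partition is replicated on more than one grid node, a request can still be served from a live replica after a node failure. The node names, replica map and lookup function below are invented for the example and are not part of the paper's methodology.

```python
# Toy failover illustration (not the paper's system): each partition is
# replicated on several grid nodes, so a query can be answered from a live
# replica when the primary node is down. All names are made up.
replica_map = {
    "partition_A": ["node1", "node3"],
    "partition_B": ["node2", "node1"],
}
live_nodes = {"node1", "node2"}          # node3 is assumed to have failed

def serving_node(partition: str) -> str:
    """Return the first live node holding a replica of the partition."""
    for node in replica_map[partition]:
        if node in live_nodes:
            return node
    raise RuntimeError(f"no live replica for {partition}")

print(serving_node("partition_A"))  # node1
```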
-
Item: A novel method for training and classification of ballistic and quasi-ballistic missiles in real-time (2013-12-01)
Singh, Upendra Kumar; Padmanabhan, Vineet; Agarwal, Arun
In this paper we outline a novel method for classifying ballistic as well as quasi-ballistic missiles using a real-time neural network. Fast classification plays a crucial role in enabling early and prompt action in an air-defense scenario. In order to obtain the trajectory information of the missile, we initially use simulated radar measurements; for final validation, a real-world radar track is used. Trajectories are segmented so that small as well as large trajectories can be trained and classified by the same neural network architecture. This is needed because ballistic missiles can follow nominal, lofted or depressed trajectories to reach their target points even when launched from the same point. © 2013 IEEE.
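The segmentation idea, cutting each track into fixed-length windows so that short and long trajectories feed the same classifier, can be sketched as below. The segment length, the placeholder tracks, the feature layout and the network size are all assumptions; the paper's actual network and radar features are not reproduced here.

```python
# Illustrative sketch (not the authors' network) of trajectory segmentation
# followed by a small neural-network classifier. Data here is random
# placeholder material, not radar measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier

def segment(track, length=20):
    """Split a (T, 3) position track into flattened fixed-length windows."""
    return [track[i:i + length].ravel()
            for i in range(0, len(track) - length + 1, length)]

rng = np.random.default_rng(0)
ballistic = rng.normal(size=(200, 3))          # placeholder simulated track
quasi = rng.normal(loc=0.5, size=(200, 3))     # placeholder quasi-ballistic track

X = segment(ballistic) + segment(quasi)
y = [0] * len(segment(ballistic)) + [1] * len(segment(quasi))

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.predict([X[0]]))   # predicted class of the first segment
```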
-
Item: A rough set based PCM for authorizing grid resources (2010-12-01)
Kaiiali, Mustafa; Wankar, Rajeev; Rao, C. R.; Agarwal, Arun
Many existing grid authorization systems adopt an inefficient structure for storing the security policies of the available resources, which leads to a great deal of repetition in checking security rules. One efficient mechanism that handles these repetitions is the Hierarchical Clustering Mechanism (HCM) [1]. HCM reduces the redundancy in checking security rules compared to the Brute Force Approach as well as the Primitive Clustering Mechanism (PCM). HCM has been further enhanced to make it suitable for dynamic environments [2]. However, HCM is not totally free from repetitions. Moreover, HCM is expensive in terms of decision tree size and memory consumption. In this paper, a new Rough Set based PCM is proposed which increases the efficiency of the authorization process and further reduces the redundancy. © 2010 IEEE.
-
Item: A rule-based approach to form mathematical symbols in printed mathematical expressions (2011-12-26)
Kumar, P. Pavan; Agarwal, Arun; Bhagvati, Chakravarthy
Automated understanding of mathematical expressions (MEs) is currently a challenging task due to their complex two-dimensional (2D) structure. Recognition of MEs can be online or offline, and in either case the process involves symbol recognition and analysis of the 2D structure. This process is more complex for offline or printed MEs as they do not have temporal information. In our present work, we focus on the recognition of printed MEs and assume that the connected components (ccs) of a given ME image are labelled. Our approach to ME recognition comprises three stages, namely symbol formation, structural analysis and generation of an encoding form such as LaTeX. In this paper, we present the symbol formation process, where multi-cc symbols (like =, ≡ etc.) are formed and the identities of context-dependent symbols are resolved using spatial relations (a horizontal line, for example, can be MINUS, OVERBAR, FRACTION etc.). Multi-line MEs like matrices and enumerated functions are also handled in this stage. A rule-based approach is proposed for the purpose, where heuristics based on spatial relations are represented in the form of rules (knowledge) and the rules are fired depending on the input data (labelled ccs). As knowledge is isolated from data, as in an expert system, our approach allows for easy adaptability and extensibility of the process. The proposed approach also handles both single-line and multi-line MEs in a unified manner. Our approach has been tested on around 800 MEs collected from various mathematical documents and experimental results are reported on them. © 2011 Springer-Verlag.
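One spatial rule of the kind described above, forming the multi-cc symbol '=' from two horizontal-line connected components, might look roughly as follows. The bounding-box representation, the HLINE label and the gap threshold are assumptions for illustration, not the paper's actual rule base.

```python
# Toy example of a single spatial rule: two horizontal-line connected
# components that overlap horizontally and are stacked closely are merged
# into one '=' symbol. Representation and threshold are assumptions.
from typing import NamedTuple, Optional

class CC(NamedTuple):
    label: str    # classifier label of the connected component
    x0: int       # bounding box: left
    y0: int       # bounding box: top
    x1: int       # bounding box: right
    y1: int       # bounding box: bottom

def merge_equals(a: CC, b: CC, max_gap: int = 10) -> Optional[str]:
    """Fire the '=' rule if both ccs are horizontal lines, stacked closely."""
    if a.label == b.label == "HLINE":
        x_overlap = min(a.x1, b.x1) - max(a.x0, b.x0)
        v_gap = abs(a.y0 - b.y0)
        if x_overlap > 0 and 0 < v_gap <= max_gap:
            return "="
    return None

top = CC("HLINE", 10, 20, 40, 22)
bottom = CC("HLINE", 10, 28, 40, 30)
print(merge_equals(top, bottom))  # '='
```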
-
Item: A scalable spatial anisotropic interpolation approach for object removal from images using elastic net regularization (2016-01-01)
Raghava, M.; Agarwal, Arun; Rao, C. Raghavendra
Object removal from an image is a novel problem with many applications in the area of computer vision. The ill-posed nature of the problem and the non-stationary content present in the image render it a complicated task. The diffusion-based and self-similarity-based algorithms available in the literature explicitly model either the structures or the textures, but not both. They are good at solving small instances of the problem; however, they tend to produce low-fidelity results and become intractable as the size of the object relative to the input image increases. The moving-average-based Spatial Anisotropic Interpolation (SAI) for text removal, proposed in our previous work, also fails because of its poor extrapolation capability. Thus, it is imperative to develop a sampling scheme which retains the interpolation feature while showing appropriate concern for the non-stationary features present in the image. The proposed Design of Computer Experiments (DACE) driven Scalable SAI (SSAI) is a natural extension of SAI in three aspects: it extends Systematic Sampling to 'Not only Symmetric Hierarchical Sampling' (NoSHS), intelligently selects a basis based on the Hurst Exponent, and employs Elastic Net regularization of the Gaussian regression error for determining the order of the polynomial. These adaptive features increase the fidelity of the results. This paper elaborates the proposed framework, SSAI, and demonstrates its capabilities by comparing the results with the latest hybrid approaches using the PSNR metric.
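The elastic-net ingredient named in the abstract can be illustrated with a toy inpainting experiment: fit a low-order polynomial surface to the known pixels around a masked region and predict intensities inside it. The synthetic image, mask, polynomial degree and penalty weights are assumptions; this is not the SSAI algorithm, only a sketch of the regularized-regression idea it builds on.

```python
# Toy elastic-net inpainting sketch (assumptions only, not SSAI): regress
# pixel intensity on polynomial features of (x, y) over the known pixels,
# then predict the masked hole.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
ys, xs = np.mgrid[0:32, 0:32]
image = 0.5 * xs + 0.3 * ys + rng.normal(scale=2.0, size=(32, 32))

mask = np.zeros_like(image, dtype=bool)
mask[12:20, 12:20] = True                          # the "object" to remove

coords = np.column_stack([xs[~mask], ys[~mask]])   # known pixel positions
values = image[~mask]                              # known intensities

poly = PolynomialFeatures(degree=3)
model = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=5000)
model.fit(poly.fit_transform(coords), values)

hole = np.column_stack([xs[mask], ys[mask]])
image[mask] = model.predict(poly.transform(hole))  # fill the hole
print(image[15, 15])
```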
-
Item: A soft approach for feature selection and recognition of outdoor natural images (2013-11-22)
Dhariwal, Tarun; Agarwal, Arun; Rao, C. R.
For applications like scene classification, CBIR, automated tagging and the motion of a robot in the real world using a vision system, understanding the images captured by visual sensors is quite important. For such high-level processing tasks it is essential to have a set of good features. This paper discusses the application of fuzzy rough set theory for selecting the best features from the feature set of regions that are segmented out of outdoor natural images using a segmentation algorithm. The paper explains a complete system, from feature selection to region labelling, based on fuzzy rough set theory. The images used in the experiments are color-texture natural images taken from the campus of the University of Hyderabad. © 2013 IEEE.
-
Item: A string matching based algorithm for performance evaluation of mathematical expression recognition (2014-01-01)
Pavan Kumar, P.; Agarwal, Arun; Bhagvati, Chakravarthy
In this paper, we address the problem of automated performance evaluation of Mathematical Expression (ME) recognition. Automated evaluation requires that the recognition output and the ground truth, in some editable format like LaTeX, MathML, etc., be matched. But standard forms can contain extraneous symbols or tags: for example, the <mo> tag is added for an operator in MathML and \begin{array} is used to encode matrices in LaTeX. These extraneous symbols are also involved in matching, which is not intuitive. For that reason, we propose a novel structure encoded string representation that is independent of any editable format. Structure encoded strings retain the structure (spatial relationships like superscript, subscript, etc.) and do not contain any extraneous symbols. As structure encoded strings give a linear representation of MEs, the Levenshtein edit distance is used as the measure for performance evaluation. Therefore, in our approach, the recognition output and the ground truth in LaTeX form are converted to their corresponding structure encoded strings and the Levenshtein edit distance is computed between them. © 2014 Indian Academy of Sciences.
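The Levenshtein edit distance used as the evaluation measure is standard dynamic programming; a minimal implementation is shown below. The example strings are invented stand-ins for structure-encoded strings; only the distance computation itself is standard.

```python
# Minimal Levenshtein edit distance between two strings (insert/delete/
# substitute). The example strings are hypothetical structure-encoded strings.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

ground_truth = "x^{2}+y"    # hypothetical structure-encoded ground truth
recognized = "x^{2}+v"      # hypothetical recognition output
print(levenshtein(ground_truth, recognized))  # 1
```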