Rotation-Invariant Features Based on Steerable Filter Banks for the Distributed Image Classification Problem


Baltasar Beferull-Lozano


Feature extraction and matching are key components of classification and retrieval applications. Most well-known texture feature extraction methods measure the energies of the subbands of a wavelet transform and use them as texture-discriminating features. One drawback of using critically sampled transforms for this purpose is that the resulting features are neither rotation nor shift invariant. In this work, we address the problem of designing efficient rotation-invariant texture features and demonstrate their use in the context of a decision tree classifier. Our goal is to locate similar images in a database even when the captured image is rotated with respect to the images most similar to it in the database. This requires defining the concept of angular alignment in the feature space.
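As a minimal illustration of energy-based oriented texture features (not the paper's exact steerable pyramid; the filter design and function name below are hypothetical), one can filter an image with K oriented derivative-of-Gaussian filters in the frequency domain and take the mean energy of each oriented subband as a feature:

```python
import numpy as np

def oriented_energy_features(img, K=4, sigma=2.0):
    """Energy of K oriented subbands (illustrative sketch).

    Each subband is obtained by filtering with a first derivative of a
    Gaussian steered to angle k*pi/K, applied in the frequency domain.
    Feature k is the mean energy of subband k.
    """
    H, W = img.shape
    wy = np.fft.fftfreq(H)[:, None] * 2 * np.pi   # vertical frequencies
    wx = np.fft.fftfreq(W)[None, :] * 2 * np.pi   # horizontal frequencies
    F = np.fft.fft2(img)
    gauss = np.exp(-(wx**2 + wy**2) * sigma**2 / 2)
    feats = []
    for k in range(K):
        t = k * np.pi / K
        # Frequency response of a derivative-of-Gaussian at orientation t
        Hk = 1j * (wx * np.cos(t) + wy * np.sin(t)) * gauss
        sub = np.fft.ifft2(F * Hk).real
        feats.append(np.mean(sub**2))
    return np.array(feats)
```

For a texture with vertical stripes, the energy concentrates in the horizontally oriented subband; rotating the image permutes (steers) the feature vector rather than leaving it unchanged, which is exactly why alignment in angle is needed before comparing features.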

Main contributions

We propose a new rotation-invariant image retrieval system based on steerable pyramids and the concept of angular alignment across scales. First, we define energy-based texture features that are steerable under rotation, i.e., the features of a rotated version of an image can be computed directly from the features of the original (non-rotated) image. We also propose a similarity measure between images that is robust to rotation: images are compared after being aligned in angle. Retrieval is performed by means of a decision tree classifier in which angular alignment is carried out at each node of the tree. To demonstrate the effectiveness of our system, we consider a distributed image classification setting, where the feature encoder and the classifier are physically apart, so that features must be compressed before being transmitted. Our results on retrieval performance versus rate show a clear gain with respect to a wavelet transform (for example, at the same rate, retrieval precision increases from 40% to 65%).

Current and Future Research

Our current work is focusing on introducing further dependencies of features across scales as well as analyzing the problem of Multiple Description Coding for distributed classification through a network with losses.


Hua Xie (University of Southern California), Antonio Ortega (University of Southern California).


U.S. National Science Foundation (grant MIP-9804959), NASA (grant AIST-0122-0005), Swiss National Competence Center in Research for Mobile Information and Communication Systems.

Major publications


B. Beferull-Lozano, H. Xie and A. Ortega, "Rotation-Invariant Features Based on Steerable Transforms With an Application to Distributed Image Classification", Proc. IEEE International Conference on Image Processing, Vol. 3, pp. 521-524, 2003.