Comparing classification methods for the building plan components

  • Daniel Wilfing 
  • Oliver Krauss 
  • VIEW Promotion GmbH, Frauscherberg 8, 5211 Friedburg, Austria
  • Research Group for Advanced Information Systems and Technology (AIST), Research and Development Department, University of Applied Sciences Upper Austria, Softwarepark 11, 4232 Hagenberg, Austria
Cite as
Wilfing D., Krauss O. (2020). Comparing classification methods for the building plan components. Proceedings of the 32nd European Modeling & Simulation Symposium (EMSS 2020), pp. 154-160. DOI: https://doi.org/10.46354/i3m.2020.emss.021

Abstract

The classification methods Histogram of Oriented Gradients, Bag of Features, Support Vector Machines, and Neural Networks are evaluated to find a fitting solution for the automatic classification of building plan components. These components feature shapes with few features and only minor differences between classes. After preprocessing the building plans for classification, feature analysis methods as well as machine-learning-based approaches are tested. The results of the classification methods are compared and their behaviors analyzed. First results show that neural network classification using line data extracted via Hough transformation, together with additional derived measures, surpasses the other classification methods tested in this work. It was found that the basic structure of building plan components can be detected with neural networks, but further improvements are needed if a single classification process is to be relied on. In the future, this work will be used to create 3D building models from 2D plans and to enable agent-based simulation in those models.
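The line-extraction step mentioned in the abstract can be illustrated with a minimal pure-NumPy Hough transform run on a toy "wall segment". This is a hedged sketch only: the function name `hough_lines`, its parameters, and the toy image are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Vote in (rho, theta) space for each set pixel of a binary edge image.

    Illustrative implementation; real pipelines would use an optimized
    library routine instead of this per-pixel loop.
    """
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    n_rho = 2 * diag + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_img)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        r = x * cos_t + y * sin_t                # rho = x*cos(theta) + y*sin(theta)
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1        # one vote per (pixel, theta)
    return acc, rhos, thetas

# Toy plan component: one horizontal "wall" segment drawn at row 25.
img = np.zeros((50, 50), dtype=np.uint8)
img[25, 10:40] = 1
acc, rhos, thetas = hough_lines(img)
rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[rho_i], np.degrees(thetas[theta_i]))  # dominant line: rho ~ 25, theta ~ 90 deg
```

The peak of the accumulator yields the (rho, theta) parameters of the dominant line; such line parameters, plus derived measures (lengths, angles, counts), are the kind of data that can then be fed to a neural network classifier as described above.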
