Towards an Automated Biodiversity Modelling Process for Forest Animals using Uncrewed Aerial Vehicles

  • Christoph Praschl 
  • David Schedl
  • Research and Development Department, University of Applied Sciences Upper Austria
  • Institute of Computational Perception, Johannes Kepler University Linz, Austria
Cite as
Praschl, C. and Schedl, D. (2023). Towards an Automated Biodiversity Modelling Process for Forest Animals using Uncrewed Aerial Vehicles. Proceedings of the 11th International Workshop on Simulation for Energy, Sustainable Development & Environment (SESDE 2023). DOI: https://doi.org/10.46354/i3m.2023.sesde.002

Abstract

Climate change poses a grave threat to habitats such as forests, endangering the integrity and biodiversity of the global flora and fauna. Accurate surveying techniques are crucial to model populations, detect over- and underpopulation, and address them accordingly. This work proposes a process for creating a biodiversity model of a forest’s fauna using uncrewed aerial vehicles equipped with RGB and thermal cameras. Real-world data, combined with computer-generated imagery and artificial intelligence models, will allow training suitable computer vision models. These models will serve as a reliable and objective data source, enabling the creation of statistical models that describe the monitored forests’ conditions and the biodiversity of their fauna. The proposed methodology is expected to have significant implications for conservation efforts. It should represent a reliable and efficient way to monitor and evaluate forest ecosystems, identifying areas of concern and prioritizing conservation efforts. By providing a comprehensive understanding of the biodiversity within a forest, it could help policymakers make informed decisions and develop effective conservation strategies. Ultimately, this work aims to contribute to the preservation of our planet’s biodiversity and the protection of its habitats.

Biodiversity | Model | Forest | Drone | Animal | Fauna
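To illustrate the statistical layer of the proposed process, the following minimal Python sketch (not the paper's implementation) aggregates per-species detection counts, as they might be produced by the RGB/thermal computer vision models, into two standard biodiversity statistics: the Shannon diversity index H' and Pielou's evenness J'. All species names and counts below are hypothetical placeholders.

    # Minimal sketch, assuming detections have already been aggregated
    # into per-species counts; not the authors' method.
    import math
    from collections import Counter

    def shannon_index(counts):
        """Shannon diversity index H' = -sum(p_i * ln(p_i)) over species proportions."""
        total = sum(counts.values())
        return -sum((n / total) * math.log(n / total)
                    for n in counts.values() if n > 0)

    # Hypothetical detections from RGB/thermal UAV surveys of one forest plot.
    detections = Counter({"red_deer": 14, "roe_deer": 9, "wild_boar": 6, "fox": 2})

    h = shannon_index(detections)          # diversity
    richness = len(detections)            # number of species S
    evenness = h / math.log(richness)     # Pielou's evenness J' = H' / ln(S)
    print(f"H' = {h:.3f}, S = {richness}, J' = {evenness:.3f}")

Indices like these are one simple way such detection counts could be condensed into a per-plot biodiversity model; the paper's actual statistical modelling may differ.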
