
CycleGAN-Based Image Translation for Near-Infrared Camera-Trap Image Recognition

  • Conference paper
Pattern Recognition and Artificial Intelligence (ICPRAI 2020)

Abstract

Because its flash is invisible to animals, NIR (near-infrared) illumination is widely used to capture images of wild animals at night. Although the animals can be photographed without being disturbed, the gray NIR images lack color and texture information and are therefore difficult to analyze, for both humans and machines. In this paper, we propose to use CycleGAN, a cycle-consistent generative adversarial network, to translate NIR images to the incandescent domain for visual quality enhancement. Example translations show that both color and texture can be well recovered by the proposed CycleGAN model. The recognition performance of an SSD-based detector on the translated incandescent images is also significantly better than on the original NIR images. Taking wildebeest and zebra as examples, increases of \(16\%\) and \(8\%\) in recognition accuracy have been observed.
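The pipeline the abstract describes follows the standard unpaired CycleGAN formulation: two generators map between the NIR and incandescent domains, and a cycle-consistency term makes training possible without aligned image pairs. As a rough illustration only, the PyTorch sketch below wires up that objective; the tiny layer sizes, the loss weight, and the random tensors standing in for camera-trap frames are illustrative assumptions, not the authors' architecture (their actual generator and the SSD detector are not reproduced here).

```python
# A minimal sketch of the unpaired CycleGAN objective described in the
# abstract, in PyTorch. All sizes and values here are illustrative
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class TinyGenerator(nn.Module):
    """Toy stand-in for CycleGAN's ResNet generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32), conv_block(32, 32),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy stand-in for the PatchGAN discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G, F = TinyGenerator(), TinyGenerator()   # G: NIR -> incandescent, F: the reverse
D_inc, D_nir = TinyDiscriminator(), TinyDiscriminator()
l1, mse = nn.L1Loss(), nn.MSELoss()       # LSGAN-style adversarial loss
lam = 10.0                                # cycle weight; 10 is the common default

nir = torch.rand(1, 3, 64, 64)            # unpaired gray NIR frame (replicated to 3 channels)
inc = torch.rand(1, 3, 64, 64)            # unpaired incandescent frame

fake_inc, fake_nir = G(nir), F(inc)       # translate in both directions
pred_inc, pred_nir = D_inc(fake_inc), D_nir(fake_nir)
adv = mse(pred_inc, torch.ones_like(pred_inc)) + mse(pred_nir, torch.ones_like(pred_nir))
cyc = l1(F(fake_inc), nir) + l1(G(fake_nir), inc)   # cycle-consistency: x -> G -> F -> x
loss_G = adv + lam * cyc                  # generator objective (discriminator
loss_G.backward()                         # updates omitted for brevity)
```

The cycle term is what makes training feasible without paired data: a camera trap never captures the same scene under both NIR and incandescent flash, so only two unpaired pools of images are available.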



Author information


Corresponding author

Correspondence to Renwu Gao.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Gao, R., Zheng, S., He, J., Shen, L. (2020). CycleGAN-Based Image Translation for Near-Infrared Camera-Trap Image Recognition. In: Lu, Y., Vincent, N., Yuen, P.C., Zheng, W.S., Cheriet, F., Suen, C.Y. (eds) Pattern Recognition and Artificial Intelligence. ICPRAI 2020. Lecture Notes in Computer Science, vol 12068. Springer, Cham. https://doi.org/10.1007/978-3-030-59830-3_39


  • DOI: https://doi.org/10.1007/978-3-030-59830-3_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59829-7

  • Online ISBN: 978-3-030-59830-3

  • eBook Packages: Computer Science, Computer Science (R0)
