
Disciplines of AI: An Overview of Approaches and Techniques

  • Chapter

Law and Artificial Intelligence

Part of the book series: Information Technology and Law Series (ITLS, volume 35)

Abstract

This chapter provides an introduction to AI for people without a background in technology. After examining different definitions of AI and discussing the scope of the concept, five disciplines of AI are introduced: Machine Learning, Automated Reasoning, Computer Vision, Affective Computing and Natural Language Processing. For each discipline, the main approaches and techniques are discussed.


Notes

  1. See for example https://www.independent.co.uk/topic/artificial-intelligence, https://www.youtube.com/results?search_query=Artificial+intelligence, https://www.reddit.com/r/artificial/, accessed 10 September 2021.
  2. Minsky 1968.
  3. Bellman 1978, p. 3.
  4. Nilsson 2010, preface xiii.
  5. Russell and Norvig 2016, p. 2.
  6. Munakata 2008, p. 2.
  7. https://www.livescience.com/59065-deep-blue-garry-kasparov-chess-match-anniversary.html, accessed 10 September 2021.
  8. Warwick 2012, p. 65.
  9. Kline 2011, pp. 4–5; see also Chap. 2.
  10. Shi 2011, p. 18.
  11. In the second decade of the twenty-first century, AI has morphed into a full-fledged summer with significant growth; see Franklin 2014, p. 28.
  12. Turing 1950, pp. 433–460.
  13. Bernhardt 2016, p. 157.
  14. Franklin 2014, p. 17.
  15. Turing 1950, pp. 433–460.
  16. Bernhardt 2016, p. 157.
  17. Turing 1950, pp. 433–460.
  18. Bernhardt 2016, p. 157.
  19. See Sect. 3.3.2 below.
  20. Franklin 2014, pp. 17, 18.
  21. Turing 1950, p. 442.
  22. Hernández-Orallo 2017, p. 405; for more objections and criticism of the Turing test, see https://plato.stanford.edu/entries/turing-test/, accessed 10 September 2021.
  23. Hernández-Orallo 2017, p. 405.
  24. Alonso 2014, pp. 235, 236.
  25. Ibid., p. 235.
  26. Russell and Norvig 2016, p. 39.
  27. Strauß 2018, p. 7.
  28. This figure should not be considered a complete overview of all disciplines of AI; it serves as an illustrative overview for this chapter.
  29. Russell and Norvig 2016, p. 2.
  30. E.g. the issue of whether or not to represent knowledge, Franklin 2014, pp. 24, 25.
  31. Russell and Norvig 2016, pp. 2, 3.
  32. Kotu and Bala 2019, p. 2.
  33. Mitchell 2006, p. 1.
  34. Russell and Norvig 2016, p. 2.
  35. Murphy 2012, p. 1.
  36. Mohri et al. 2012, p. 1.
  37. Goodfellow et al. 2016, p. 97.
  38. Mohri et al. 2012, p. 1.
  39. Marsland 2015, Chapter 1.2.1.
  40. Mohri et al. 2012, p. 1.
  41. Murphy 2012, pp. 1, 4.
  42. Mohri et al. 2012, p. 2.
  43. Russell and Norvig 2016, p. 695.
  44. Murphy 2012, p. 3.
  45. Bishop 2006, p. 2.
  46. Mohri et al. 2012, p. 7.
  47. Alpaydin 2016, p. 39.
  48. Munakata 2008, p. 38.
  49. Russell and Norvig 2016, p. 695.
  50. Alpaydin 2016, p. 39.
  51. Usuelli 2014, p. 155.
  52. Usuelli 2014, p. 154.
  53. Calders and Custers 2013, p. 32.
  54. However, note that decision tree regression would not be considered traditional statistics.
  55. Murphy 2012, p. 9.
  56. Mohri et al. 2012, p. 7.
  57. Murphy 2012, p. 9.
  58. Munakata 2008, p. 38.
  59. Russell and Norvig 2016, p. 694.
  60. Hastie et al. 2008, p. xi.
  61. Marsland 2015, Chapter 1.3.
  62. Mohri et al. 2012, p. 7.
  63. Usuelli 2014, p. 164.
  64. Munakata 2008, p. 72.
  65. Mohri et al. 2012, p. 2.
  66. Mohri et al. 2012, p.
  67. Murphy 2012, p. 11.
  68. Alpaydin 2016, pp. 115, 116.
  69. Murphy 2012, p. 11.
  70. Ibid., p. 12.
  71. Murphy 2012, p. 11.
  72. Mohri et al. 2012, p. 2.
  73. Goodfellow et al. 2016, p. 104.
  74. François-Lavet et al. 2018, pp. 2, 15.
  75. Alpaydin 2016, p. 127.
  76. Engelbrecht 2007, p. 83.
  77. Shi 2011, p. 365.
  78. Engelbrecht 2007, p. 83.
  79. Mohri et al. 2012, p. 8.
  80. Alpaydin 2016, p. 126.
  81. Shi 2011, p. 362.
  82. Das et al. 2015, pp. 31, 32.
  83. Serban et al. 2017, p. 1.
  84. François-Lavet et al. 2018, p. 3.
  85. Deng and Liu 2018a, b, p. 316.
  86. Alpaydin 2016, p. 86.
  87. Chow and Cho 2007, p. 2.
  88. Singh Gill 2019, Overview of Artificial Neural Networks and its application, https://www.xenonstack.com/blog/artificial-neural-network-applications/, accessed 10 September 2021.
  89. Munakata 2008, p. 7.
  90. Alpaydin 2016, p. 86.
  91. Chow and Cho 2007, p. 2.
  92. Rumelhart et al. 1986, p. 533.
  93. Munakata 2008, pp. 3, 7.
  94. Goodfellow et al. 2016, p. 13.
  95. Munakata 2008, p. 9.
  96. Nielsen 2015, chapter 5.
  97. Munakata 2008, p. 10.
  98. Goodfellow et al. 2016, p. 6.
  99. Ibid., p. 8.
  100. Murphy 2012, p. 95.
  101. Goodfellow et al. 2016, p. 8.
  102. Murphy 2012, p. 995.
  103. Alpaydin 2016, p. 104.
  104. Goldberg 2017, p. 2.
  105. Goodfellow et al. 2016, pp. 1, 5.
  106. Goodfellow et al. 2016, pp. 16, 21, 25.
  107. Deng and Liu 2018a, b, pp. 11, 12.
  108. Munakata 2008, p. 44.
  109. Goodfellow et al. 2016, p. 16.
  110. Alpaydin 2016, p. 155.
  111. Ibid.
  112. Munakata 2008, p. 44.
  113. Ibid., pp. 12, 25, 35.
  114. A production model has fixed weights after training. Continuously updating the weights is possible, but by no means necessary.
  115. De Laat 2017, p. 14.
  116. Chow and Cho 2007, pp. 1–2; Mitchell 2006, p. 95.
  117. Chow and Cho 2007, p. 2.
  118. Moses et al. 2019, p. 10.
  119. Yang et al. 2015, p. 1124.
  120. Weiler N (2019) Breakthrough device translates brain activity into speech, https://www.universityofcalifornia.edu/news/synthetic-speech-generated-brain-recordings, accessed 10 September 2021.
  121. Tech@Facebook (2020) Imagining a new interface: Hands-free communication without saying a word, https://tech.fb.com/imagining-a-new-interface-hands-free-communication-without-saying-a-word/, accessed 10 September 2021.
  122. Jurafsky and Martin 2014, p. 1.
  123. Franklin 2014, p. 26.
  124. Deng and Liu 2018a, b, p. 1.
  125. Deng and Liu 2018a, b, p. 316.
  126. Abhang et al. 2016, p. 13.
  127. Tur et al. 2018, p. 24.
  128. Ibid., p. 24.
  129. Alpaydin 2016, p. 67.
  130. Alpaydin 2016, p. 67.
  131. Mary 2019, pp. 1, 8.
  132. Jurafsky and Martin 2014, p. 238.
  133. Hourri and Kharroubi 2020, p. 123.
  134. Heigold et al. 2015, p. 1.
  135. Mary 2019, p. 7.
  136. Abhang et al. 2016, pp. 14, 105.
  137. See the services of the company audeering: https://www.audeering.com/, accessed 10 September 2021.
  138. Russell and Norvig 2016, p. 3.
  139. Franklin 2014, p. 26.
  140. Amit et al. 2021, p. 875.
  141. Yoshida 2011, p. vii.
  142. Jampani 2017, p. 1.
  143. Szeliski 2011, pp. 3, 5.
  144. Sokolova and Konushin 2019, p. 213.
  145. Kovač et al. 2019, pp. 5621, 5622.
  146. See https://www.amazon.com/b?ie=UTF8&node=16008589011 and Mavroudis and Veale 2018, p. 6.
  147. For example, cameras offer a high level of precision, but might be too expensive to cover the whole shop. Beacons are not self-sufficient to provide tracking data for customer analysis, but can cover a wider operational range. Combined by means of sensor fusion, the sensors allow precise consumer path tracking. See Sturari et al. 2016, pp. 30, 31, 40.
  148. Carey and Macaulay (2018) Amazon Go looks convenient, but raises huge questions over privacy, https://www.techworld.com/business/amazon-go-looks-amazing-but-at-what-cost-3651434/, accessed 10 September 2021.
  149. Kumar et al. 2019, Detecting item interaction and movement, US Patent Number US 10268983 (Assignee: Amazon Technologies, Inc.), https://patentimages.storage.googleapis.com/01/0b/6e/de57009f5670ae/US20150019391A1.pdf, accessed 10 September 2021.
  150. Trigueros et al. 2018, p. 1.
  151. Li and Jain 2011, p. 1.
  152. Ibid.; Trigueros et al. 2018, p. 1.
  153. Trigueros et al. 2018, p. 1.
  154. Li and Jain 2011, p. 3.
  155. Ibid., p. 4; Trigueros et al. 2018, p. 1.
  156. Tome et al. 2015, pp. 271, 273.
  157. Tome et al. 2015, pp. 271, 273.
  158. Trigueros et al. 2018, p. 1.
  159. Goodfellow et al. 2016, p. 326.
  160. Li and Jain 2011, p. 3.
  161. Welinder and Palmer 2018, p. 104.
  162. Zuiderveen Borgesius 2019, p. 17.
  163. Regan J (2016) New Zealand passport robot tells applicant of Asian descent to open eyes, https://www.reuters.com/article/us-newzealand-passport-error/new-zealand-passport-robot-tells-applicant-of-asian-descent-to-open-eyes-idUSKBN13W0RL, accessed 10 September 2021.
  164. Calvo et al. 2015, p. 2.
  165. Picard 1997, p. 47.
  166. Calvo 2015, p. 4.
  167. Marechal et al. 2019, pp. 314, 315.
  168. Kanjo et al. 2015, pp. 1197, 1204.
  169. Barrett et al. 2019, pp. 1, 19.
  170. Cohn and De La Torre 2015, p. 132.
  171. Rosenberg 2005, pp. 14, 16.
  172. Keltner et al. 2019, pp. 133, 142.
  173. Valstar 2015, p. 144.
  174. Cohn and De La Torre 2015, p. 137.
  175. Tzirakis et al. 2015, p. 1.
  176. Bartlett et al. 2005, p. 395.
  177. A set of functions and procedures allowing the creation of applications that access features or data of an operating system, application or other service; see https://www.lexico.com/definition/api, accessed 3 August 2020.
  178. See https://www.kairos.com/docs/api/#get-v2media, accessed 10 September 2021.
  179. Pascu L (2019) New Kairos Facial Recognition Camera Offers Customer Insights, https://www.biometricupdate.com/201909/new-kairos-facial-recognition-camera-offers-customer-insights, accessed 10 September 2021.
  180. See https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rekognition-improves-face-analysis/, accessed 10 September 2021.
  181. Note that the system is only a research project funded by the EU under the H2020 programme, and it remains to be seen whether it will be used at the border in the future. European Commission (2018) Smart lie-detection system to tighten EU's busy borders, https://ec.europa.eu/research/infocentre/article_en.cfm?artid=49726, accessed 10 September 2021.
  182. Chen A and Hao K (2020) Emotion AI researchers say overblown claims give their work a bad name, https://www.technologyreview.com/2020/02/14/844765/ai-emotion-recognition-affective-computing-hirevue-regulation-ethics/, accessed 10 September 2021.
  183. Mondragon N et al. (2019) The Next Generation of Assessments, http://hrlens.org/wp-content/uploads/2019/11/The-Next-Generation-of-Assessments-HireVue-White-Paper.pdf, accessed 10 September 2021.
  184. Cohn and De La Torre 2015, p. 132.
  185. Lee et al. 2015, p. 171.
  186. For prosody, see Sect. 3.3.2 above.
  187. Calix et al. 2012, pp. 530, 531.
  188. Sobin and Alpert 1999, p. 347.
  189. Jurafsky and Martin 2014, p. 238.
  190. Chuang and Wu 2004, pp. 45, 62.
  191. Picard 1997, pp. 179, 180.
  192. Lee et al. 2015, pp. 173, 177.
  193. Zbancioc and Feraru 2015, p. 1.
  194. Tomba et al. 2018, p. 560.
  195. Fayek et al. 2017, p. 60.
  196. Russell and Norvig 2016, p. 2.
  197. Jebelean 2009, p. 63.
  198. Eyal 2014, p. 191.
  199. Ibid.
  200. Jebelean 2009, p. 63.
  201. Eyal 2014, p. 193.
  202. Harrison 2009, p. 1.
  203. Eyal 2014, p. 201.
  204. Gavanelli and Mancini 2013, p. 113.
  205. Davis and Morgenstern 2004, p. 1.
  206. Shoham et al. 2018, p. 64.
  207. Metz P (2018) Paul Allen Wants to Teach Machines Common Sense, https://www.nytimes.com/2018/02/28/technology/paul-allen-ai-common-sense.html, accessed 10 September 2021.
  208. Tandon et al. 2017, p. 49.

References

  • Abhang P et al (2016) Introduction to EEG- and speech-based emotion recognition. Elsevier Inc, London.
  • Alonso E (2014) Actions and agents. In: Frankish K, Ramsey W (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge.
  • Alpaydin E (2016) Machine Learning: The New AI. MIT Press, Cambridge.
  • Amit Y et al (2021) Object Detection. In: Ikeuchi K (ed) Computer Vision—A Reference Guide. Springer, Boston.
  • Barrett LF et al (2019) Emotional Expressions Reconsidered. Psychological Science in the Public Interest 20(1):1–68.
  • Bartlett M et al (2005) Toward Automatic Recognition of Spontaneous Facial Actions. In: Ekman P, Rosenberg E (eds) What the Face Reveals. OUP, Oxford.
  • Bellman R (1978) An Introduction to Artificial Intelligence: Can Computers Think? Boyd & Fraser, San Francisco.
  • Bernhardt C (2016) Turing’s Vision: The Birth of Computer Science. MIT Press, Cambridge.
  • Bishop C (2006) Pattern Recognition and Machine Learning. Springer, New York.
  • Calders T, Custers B (2013) What is Data Mining and How Does it Work? In: Custers B et al (eds) Discrimination and Privacy in the Information Society. Springer, Berlin.
  • Calix R et al (2012) Detection of Affective States from Text and Speech For Real-Time Human-Computer Interaction. Human Factors and Ergonomics Society 54(4):530–545.
  • Calvo R et al (2015) Introduction to Affective Computing. In: Calvo R et al (eds) The Oxford Handbook of Affective Computing. OUP, Oxford.
  • Chen A, Hao K (2020) Emotion AI researchers say overblown claims give their work a bad name. https://www.technologyreview.com/2020/02/14/844765/ai-emotion-recognition-affective-computing-hirevue-regulation-ethics/. Accessed 10 September 2021.
  • Chow T, Cho SY (2007) Neural Networks and Computing: Learning Algorithms and Applications. Imperial College Press, London.
  • Chuang Z, Wu C (2004) Multi-Modal Emotion Recognition from Speech and Text. Computational Linguistics and Chinese Language Processing 9(2):45–62.
  • Cohn J, De La Torre F (2015) Automated Face Analysis for Affective Computing. In: Calvo R et al (eds) The Oxford Handbook of Affective Computing. OUP, Oxford.
  • Das S et al (2015) Applications of Artificial Intelligence in Machine Learning: Review and Prospect. IJCA 115(9):31–41.
  • Davis E, Morgenstern L (2004) Introduction: Progress in formal commonsense reasoning. Artificial Intelligence 153:1–12.
  • De Laat P (2017) Algorithmic Decision-Making based on Machine Learning from Big Data: Can Transparency restore Accountability? Philos. Technol 31(4):525–541.
  • Deng L, Liu Y (2018a) A Joint Introduction to Natural Language Processing and Deep Learning. In: Deng L, Liu Y (eds) Deep learning in natural language processing. Springer, Singapore.
  • Deng L, Liu Y (2018b) Epilogue: Frontiers of NLP in the Deep Learning Era. In: Deng L, Liu Y (eds) Deep learning in natural language processing. Springer, Singapore.
  • Engelbrecht A (2007) Computational Intelligence – An Introduction. John Wiley & Sons, Hoboken.
  • Eyal A (2014) Reasoning and decision making. In: Frankish K, Ramsey W (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge.
  • Fayek H et al (2017) Evaluating deep learning architectures for Speech Emotion Recognition. Neural Networks 92:60–68.
  • François-Lavet V et al (2018) An Introduction to Deep Reinforcement Learning. Foundations and Trends in Machine Learning 11(3–4):2–140.
  • Franklin S (2014) History, motivations, and core themes. In: Frankish K, Ramsey W (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge.
  • Gavanelli M, Mancini T (2013) Automated Reasoning. Intelligenza Artificiale 7(2):113–124.
  • Goldberg Y (2017) Neural Network Methods in Natural Language Processing. Morgan & Claypool Publishers, San Rafael.
  • Goodfellow I et al (2016) Deep Learning. MIT Press, Cambridge. www.deeplearningbook.org. Accessed 10 September 2021.
  • Harrison J (2009) Handbook of Practical Logic and Automated Reasoning. Cambridge University Press, Cambridge.
  • Hastie T et al (2008) The Elements of Statistical Learning. Springer, New York.
  • Heigold G et al (2015) End-to-End Text-Dependent Speaker Verification. https://arXiv.org/pdf/1509.08062.pdf. Accessed 10 September 2021.
  • Hernández-Orallo J (2017) Evaluation in Artificial Intelligence: from task-oriented to ability-oriented measurement. Artificial Intelligence Review 48:397–447.
  • Hourri S, Kharroubi J (2020) A deep learning approach for speaker recognition. International Journal of Speech Technology 23(1):123–131.
  • Jampani V (2017) Learning Inference Models for Computer Vision. https://publikationen.uni-tuebingen.de/xmlui/handle/10900/76486. Accessed 10 September 2021.
  • Jebelean T et al (2009) Automated Reasoning. In: Buchberger B et al (eds) Hagenberg Research. Springer, Berlin.
  • Jurafsky D, Martin J (2014) Speech and Language Processing. Pearson Education Limited, New Jersey.
  • Kanjo E et al (2015) Emotions in context: examining pervasive affective sensing systems, applications, and analyses. Pers Ubiquit Comput 19:1197–1212.
  • Keltner D et al (2019) Emotional Expression: Advances in Basic Emotion Theory. Journal of Nonverbal Behaviour 43(2):133–160.
  • Kline R (2011) Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence. IEEE Annals of the History of Computing 33(4):5–16.
  • Kotu V, Bala D (2019) Data Science. Elsevier, Cambridge.
  • Kovač V et al (2019) Frame-based classification for cross-speed gait recognition. Multimedia Tools and Applications 78:5621–5643.
  • Kumar D et al (2019) Detecting item interaction and movement. US Patent Number US 10268983. https://patentimages.storage.googleapis.com/01/0b/6e/de57009f5670ae/US20150019391A1.pdf. Accessed 10 September 2021.
  • Lee C et al (2015) Speech in Affective Computing. In: Calvo R et al (eds) The Oxford Handbook of Affective Computing. OUP, Oxford.
  • Li S, Jain A (2011) Introduction. In: Li S, Jain A (eds) Handbook of Face Recognition. Springer, London.
  • Marechal C et al (2019) Survey on AI-Based Multimodal Methods for Emotion Detection. In: Kołodziej J, González-Vélez H (eds) High-Performance Modelling and Simulation for Big Data Applications. Springer, Cham.
  • Marsland S (2015) Machine Learning: An Algorithmic Perspective. Chapman & Hall, Boca Raton.
  • Mary L (2019) Extraction of Prosody for Automatic Speaker, Language, Emotion and Speech Recognition. Springer International, Cham.
  • Mavroudis V, Veale M (2018) Eavesdropping Whilst You’re Shopping: Balancing Personalisation and Privacy in Connected Retail Spaces. https://doi.org/10.1049/cp.2018.0018.
  • Metz P (2018) Paul Allen Wants to Teach Machines Common Sense. https://www.nytimes.com/2018/02/28/technology/paul-allen-ai-common-sense.html. Accessed 10 September 2021.
  • Minsky M (1968) Semantic Information Processing. MIT Press, Cambridge.
  • Mitchell T (2006) The discipline of Machine Learning. http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf. Accessed 10 September 2021.
  • Mohri M et al (2012) Foundations of Machine Learning. MIT Press, Cambridge.
  • Moses D et al (2019) Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nature Communications 10:1–14.
  • Munakata T (2008) Fundamentals of the New Artificial Intelligence. Springer, London.
  • Murphy K (2012) Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge.
  • Nielsen M (2015) Neural Networks and Deep Learning. Determination Press. http://neuralnetworksanddeeplearning.com/chap5.html. Accessed 10 September 2021.
  • Nilsson N (2010) The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press, Cambridge.
  • Pascu L (2019) New Kairos Facial Recognition Camera Offers Customer Insights. https://www.biometricupdate.com/201909/new-kairos-facial-recognition-camera-offers-customer-insights. Accessed 10 September 2021.
  • Picard R (1997) Affective Computing. MIT Press, Cambridge.
  • Regan J (2016) New Zealand passport robot tells applicant of Asian descent to open eyes. https://www.reuters.com/article/us-newzealand-passport-error/new-zealand-passport-robot-tells-applicant-of-asian-descent-to-open-eyes-idUSKBN13W0RL. Accessed 10 September 2021.
  • Rosenberg E (2005) Introduction: The Study of Spontaneous Facial Expressions in Psychology. In: Ekman P, Rosenberg E (eds) What the Face Reveals. OUP, Oxford.
  • Rumelhart D et al (1986) Learning representations by back-propagating errors. Nature 323:533–536.
  • Russell S, Norvig P (2016) Artificial Intelligence: A Modern Approach. Pearson Education Limited, Essex.
  • Serban I et al (2017) A Deep Reinforcement Learning Chatbot. https://arXiv.org/pdf/1709.02349.pdf. Accessed 10 September 2021.
  • Shi Z (2011) Advanced Artificial Intelligence. World Scientific, Singapore.
  • Shoham Y et al (2018) The AI Index 2018 Annual Report. https://hai.stanford.edu/sites/default/files/2020-10/AI_Index_2018_Annual_Report.pdf. Accessed 10 September 2021.
  • Sobin C, Alpert M (1999) Emotion in Speech: The Acoustic Attributes of Fear, Anger, Sadness, and Joy. Journal of Psycholinguistic Research 28(4):347–365.
  • Sokolova A, Konushin A (2019) Methods of Gait Recognition in Video. Programming and Computer Software 45(4):213–220.
  • Strauß S (2018) From Big Data to Deep Learning: A Leap Towards Strong AI or Intelligentia Obscura? BDCC 2(3):2–16.
  • Sturari M et al (2016) Robust and affordable retail customer profiling by vision and radio beacon sensor fusion. Pattern Recognition Letters 81:30–40.
  • Szeliski R (2011) Computer Vision: Algorithms and Applications. Springer, London.
  • Tandon N et al (2017) Commonsense Knowledge in Machine Intelligence. SIGMOD Record 46(4):49–52.
  • Tomba K et al (2018) Stress Detection Through Speech Analysis. ICETE 1:394–398.
  • Tome P et al (2015) Facial soft biometric features for forensic face recognition. Forensic Science International 257:271–284.
  • Trigueros D et al (2018) Face Recognition: From Traditional to Deep Learning Methods. https://arXiv.org/pdf/1811.00116.pdf. Accessed 10 September 2021.
  • Tur G et al (2018) Deep Learning in Conversational Language Understanding. In: Deng L, Liu Y (eds) Deep learning in natural language processing. Springer, Singapore.
  • Turing A (1950) Computing Machinery and Intelligence. Mind LIX(236):433–460.
  • Tzirakis P et al (2015) End-to-End Multimodal Emotion Recognition using Deep Neural Networks. Journal of Latex Class Files 14(8):1–12.
  • Usuelli M (2014) R Machine Learning Essentials. Packt Publishing, Birmingham.
  • Valstar M (2015) Automatic Facial Expression Analysis. In: Mandal M, Awasthi A (eds) Understanding Facial Expressions in Communication. Springer, New Delhi.
  • Warwick K (2012) Artificial Intelligence: The Basics. Routledge, New York.
  • Welinder Y, Palmer A (2018) Face Recognition, Real-Time Identification, and Beyond. In: Selinger E et al (eds) The Cambridge Handbook of Consumer Privacy. Cambridge University Press, Cambridge.
  • Yang M et al (2015) Speech Reconstruction from Human Auditory Cortex with Deep Neural Networks. https://www.isca-speech.org/archive_v0/interspeech_2015/papers/i15_1121.pdf. Accessed 10 September 2021.
  • Yoshida S (2011) Computer Vision. Nova Science Publishers Inc, Lancaster.
  • Zbancioc M, Feraru S (2015) A study about the automatic recognition of the anxiety emotional state using Emo-DB. https://doi.org/10.1109/EHB.2015.7391506.
  • Zuiderveen Borgesius F (2019) Discrimination, artificial intelligence, and algorithmic decision-making. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73. Accessed 10 September 2021.


Author information


Correspondence to Andreas Häuselmann.


Copyright information

© 2022 T.M.C. Asser Press and the authors

About this chapter


Cite this chapter

Häuselmann, A. (2022). Disciplines of AI: An Overview of Approaches and Techniques. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-523-2_3


  • DOI: https://doi.org/10.1007/978-94-6265-523-2_3


  • Publisher Name: T.M.C. Asser Press, The Hague

  • Print ISBN: 978-94-6265-522-5

  • Online ISBN: 978-94-6265-523-2

  • eBook Packages: Law and Criminology
