Abstract
This chapter provides an introduction to AI for people without a background in technology. After examining different definitions of AI and discussing the scope of the concept, five disciplines of AI are presented: Machine Learning, Automated Reasoning, Computer Vision, Affective Computing and Natural Language Processing. For each discipline, the main approaches and techniques are discussed.
Notes
- 1.
See for example https://www.independent.co.uk/topic/artificial-intelligence, https://www.youtube.com/results?search_query=Artificial+intelligence, https://www.reddit.com/r/artificial/, accessed 10 September 2021.
- 2.
Minsky 1968.
- 3.
Bellman 1978, p. 3.
- 4.
Nilsson 2010, preface xiii.
- 5.
Russell and Norvig 2016, p. 2.
- 6.
Munakata 2008, p. 2.
- 7.
https://www.livescience.com/59065-deep-blue-garry-kasparov-chess-match-anniversary.html, accessed 10 September 2021.
- 8.
Warwick 2012, p. 65.
- 9.
- 10.
Shi 2011, p. 18.
- 11.
In the second decade of the twenty-first century, AI has morphed into a full-fledged summer with significant growth; see Franklin 2014, p. 28.
- 12.
Turing 1950, pp. 433–460.
- 13.
Bernhardt 2016, p. 157.
- 14.
Franklin 2014, p. 17.
- 15.
Turing 1950, pp. 433–460.
- 16.
Bernhardt 2016, p. 157.
- 17.
Turing 1950, pp. 433–460.
- 18.
Bernhardt 2016, p. 157.
- 19.
See Sect. 3.3.2 below.
- 20.
Franklin 2014, pp. 17, 18.
- 21.
Turing 1950, p. 442.
- 22.
Hernández-Orallo 2017, p. 405; for more objections and criticism on the Turing test, see <https://plato.stanford.edu/entries/turing-test/> accessed 10 September 2021.
- 23.
Hernández-Orallo 2017, p. 405.
- 24.
Alonso 2014, pp. 235, 236.
- 25.
Ibid., 235.
- 26.
Russell and Norvig 2016, p. 39.
- 27.
Strauß 2018, p. 7.
- 28.
This figure should not be considered a complete overview of all disciplines of AI; it serves as an illustrative overview for this chapter.
- 29.
Russell and Norvig 2016, p. 2.
- 30.
E.g. the issue of whether or not to represent knowledge, Franklin 2014, pp. 24, 25.
- 31.
Russell and Norvig 2016, pp. 2, 3.
- 32.
Kotu and Bala 2019, p. 2.
- 33.
Mitchell 2006, p. 1.
- 34.
Russell and Norvig 2016, p. 2.
- 35.
Murphy 2012, p. 1.
- 36.
Mohri et al. 2012, p. 1.
- 37.
Goodfellow et al. 2016, p. 97.
- 38.
Mohri et al. 2012, p. 1.
- 39.
Marsland 2015, Chapter 1.2.1.
- 40.
Mohri et al. 2012, p. 1.
- 41.
Murphy 2012, pp. 1, 4.
- 42.
Mohri et al. 2012, p. 2.
- 43.
Russell and Norvig 2016, p. 695.
- 44.
Murphy 2012, p. 3.
- 45.
Bishop 2006, p. 2.
- 46.
Mohri et al. 2012, p. 7.
- 47.
Alpaydin 2016, p. 39.
- 48.
Munakata 2008, p. 38.
- 49.
Russell and Norvig 2016, p. 695.
- 50.
Alpaydin 2016, p. 39.
- 51.
Usuelli 2014, p. 155.
- 52.
Usuelli 2014, p. 154.
- 53.
Calders and Custers 2013, p. 32.
- 54.
However, note that decision tree regression would not be considered as traditional statistics.
- 55.
Murphy 2012, p. 9.
- 56.
Mohri et al. 2012, p. 7.
- 57.
Murphy 2012, p. 9.
- 58.
Munakata 2008, p. 38.
- 59.
Russell and Norvig 2016, p. 694.
- 60.
Hastie et al. 2008, p. xi.
- 61.
Marsland 2015, Chapter 1.3.
- 62.
Mohri et al. 2012, p. 7.
- 63.
Usuelli 2014, p. 164.
- 64.
Munakata 2008, p. 72.
- 65.
Mohri et al. 2012, p. 2.
- 66.
Mohri et al. 2012, p.
- 67.
Murphy 2012, p. 11.
- 68.
Alpaydin 2016, pp. 115, 116.
- 69.
Murphy 2012, p. 11.
- 70.
Ibid., 12.
- 71.
Murphy 2012, p. 11.
- 72.
Mohri et al. 2012, p. 2.
- 73.
Goodfellow et al. 2016, p. 104.
- 74.
François-Lavet et al. 2018, pp. 2, 15.
- 75.
Alpaydin 2016, p. 127.
- 76.
Engelbrecht 2007, p. 83.
- 77.
Shi 2011, p. 365.
- 78.
Engelbrecht 2007, p. 83.
- 79.
Mohri et al. 2012, p. 8.
- 80.
Alpaydin 2016, p. 126.
- 81.
Shi 2011, p. 362.
- 82.
Das et al. 2015, pp. 31, 32.
- 83.
Serban et al. 2017, p. 1.
- 84.
François-Lavet et al. 2018, p. 3.
- 85.
- 86.
Alpaydin 2016, p. 86.
- 87.
Chow and Cho 2007, p. 2.
- 88.
Singh Gill 2019, Overview of Artificial Neural Networks and its application <https://www.xenonstack.com/blog/artificial-neural-network-applications/> accessed 10 September 2021.
- 89.
Munakata 2008, p. 7.
- 90.
Alpaydin 2016, p. 86.
- 91.
Chow and Cho 2007, p. 2.
- 92.
Rumelhart et al. 1986, p. 533.
- 93.
Munakata 2008, pp. 3, 7.
- 94.
Goodfellow et al. 2016, p. 13.
- 95.
Munakata 2008, p. 9.
- 96.
Nielsen 2015, chapter 5.
- 97.
Munakata 2008, p. 10.
- 98.
Goodfellow et al. 2016, p. 6.
- 99.
Ibid., p. 8.
- 100.
Murphy 2012, p. 95.
- 101.
Goodfellow et al. 2016, p. 8.
- 102.
Murphy 2012, p. 995.
- 103.
Alpaydin 2016, p. 104.
- 104.
Goldberg 2017, p. 2.
- 105.
Goodfellow et al. 2016, pp. 1, 5.
- 106.
Goodfellow et al. 2016, pp. 16, 21, 25.
- 107.
- 108.
Munakata 2008, p. 44.
- 109.
Goodfellow et al. 2016, p. 16.
- 110.
Alpaydin 2016, p. 155.
- 111.
Ibid.
- 112.
Munakata 2008, p. 44.
- 113.
Ibid., pp. 12, 25, 35.
- 114.
A production model has fixed weights after training. Continuously updating the weights is possible, but by no means necessary.
- 115.
De Laat 2017, p. 14.
- 116.
- 117.
Chow and Cho 2007, p. 2.
- 118.
Moses et al. 2019, p. 10.
- 119.
Yang et al. 2015, p. 1124.
- 120.
Weiler N (2019) Breakthrough device translates brain activity into speech, https://www.universityofcalifornia.edu/news/synthetic-speech-generated-brain-recordings. Accessed 10 September 2021.
- 121.
Tech@Facebook (2020) Imagining a new interface: Hands-free communication without saying a word https://tech.fb.com/imagining-a-new-interface-hands-free-communication-without-saying-a-word/. Accessed 10 September 2021.
- 122.
Jurafsky and Martin 2014, p. 1.
- 123.
Franklin 2014, p. 26.
- 124.
- 125.
- 126.
Abhang et al. 2016, p. 13.
- 127.
Tur et al. 2018, p. 24.
- 128.
Ibid., p. 24.
- 129.
Alpaydin 2016, p. 67.
- 130.
Alpaydin 2016, p. 67.
- 131.
Mary 2019, pp. 1, 8.
- 132.
Jurafsky and Martin 2014, p. 238.
- 133.
Hourri and Kharroubi 2020, p. 123.
- 134.
Heigold et al. 2015, p. 1.
- 135.
Mary 2019, p. 7.
- 136.
Abhang et al. 2016, pp. 14, 105.
- 137.
See the services of the company audeering: https://www.audeering.com/ Accessed 10 September 2021.
- 138.
Russell and Norvig 2016, p. 3.
- 139.
Franklin 2014, p. 26.
- 140.
Amit et al. 2021, p. 875.
- 141.
Yoshida 2011, p. vii.
- 142.
Jampani 2017, p. 1.
- 143.
Szeliski 2011, pp. 3, 5.
- 144.
Sokolova and Konushin 2019, p. 213.
- 145.
Kovač et al. 2019, pp. 5621, 5622.
- 146.
See https://www.amazon.com/b?ie=UTF8&node=16008589011 and Mavroudis and Veale 2018, p. 6.
- 147.
For example, cameras offer a high level of precision, but might be too expensive to cover the whole shop. Beacons are not self-sufficient to provide tracking data for customer analysis, but can cover a wider operational range. Combined by means of sensor fusion, the sensors allow precise consumer path tracking. See Sturari et al. 2016, pp. 30, 31, 40.
- 148.
Carey and Macaulay (2018) Amazon Go looks convenient, but raises huge questions over privacy, https://www.techworld.com/business/amazon-go-looks-amazing-but-at-what-cost-3651434/. Accessed 10 September 2021.
- 149.
Kumar et al. 2019, Detecting item interaction and movement, US Patent Number US 10268983 (Assignee: Amazon Technologies, Inc.) https://patentimages.storage.googleapis.com/01/0b/6e/de57009f5670ae/US20150019391A1.pdf Accessed 10 September 2021.
- 150.
Trigueros et al. 2018, p. 1.
- 151.
Li and Jain 2011, p. 1.
- 152.
Ibid.; Trigueros et al. 2018, p. 1.
- 153.
Trigueros et al. 2018, p. 1.
- 154.
Li and Jain 2011, p. 3.
- 155.
Ibid., 4; Trigueros et al. 2018, p. 1.
- 156.
Tome et al. 2015, pp. 271, 273.
- 157.
Tome et al. 2015, pp. 271, 273.
- 158.
Trigueros et al. 2018, p. 1.
- 159.
Goodfellow et al. 2016, p. 326.
- 160.
Li and Jain 2011, p. 3.
- 161.
Welinder and Palmer 2018, p. 104.
- 162.
Zuiderveen Borgesius 2019, p. 17.
- 163.
Regan J (2016) New Zealand passport robot tells applicant of Asian descent to open eyes, https://www.reuters.com/article/us-newzealand-passport-error/new-zealand-passport-robot-tells-applicant-of-asian-descent-to-open-eyes-idUSKBN13W0RL. Accessed 10 September 2021.
- 164.
Calvo et al. 2015, p. 2.
- 165.
Picard 1997, p. 47.
- 166.
Calvo 2015, p. 4.
- 167.
Marechal et al. 2019, pp. 314, 315.
- 168.
Kanjo et al. 2015, pp. 1197, 1204.
- 169.
Barrett et al. 2019, pp. 1, 19.
- 170.
Cohn and De La Torre 2015, p. 132.
- 171.
Rosenberg 2005, pp. 14, 16.
- 172.
Keltner et al. 2019, pp. 133, 142.
- 173.
Valstar 2015, p. 144.
- 174.
Cohn and De La Torre 2015, p. 137.
- 175.
Tzirakis et al. 2015, p. 1.
- 176.
Bartlett et al. 2005, p. 395.
- 177.
A set of functions and procedures allowing the creation of applications that access features or data of an operating system, application or other service, see https://www.lexico.com/definition/api. Accessed 3 August 2020.
- 178.
See https://www.kairos.com/docs/api/#get-v2media. Accessed 10 September 2021.
- 179.
Pascu L (2019) New Kairos Facial Recognition Camera Offers Customer Insights, https://www.biometricupdate.com/201909/new-kairos-facial-recognition-camera-offers-customer-insights. Accessed 10 September 2021.
- 180.
See https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rekognition-improves-face-analysis/. Accessed 10 September 2021.
- 181.
Note that the system is only a research project funded by the EU under the H2020 programme and it remains to be seen whether it will be used at the border in the future. European Commission (2018), Smart lie-detection system to tighten EU's busy borders, https://ec.europa.eu/research/infocentre/article_en.cfm?artid=49726 Accessed 10 September 2021.
- 182.
Chen A and Hao K (2020) Emotion AI researchers say overblown claims give their work a bad name, https://www.technologyreview.com/2020/02/14/844765/ai-emotion-recognition-affective-computing-hirevue-regulation-ethics/ Accessed 10 September 2021.
- 183.
Mondragon N et al. (2019) The Next Generation of Assessments, http://hrlens.org/wp-content/uploads/2019/11/The-Next-Generation-of-Assessments-HireVue-White-Paper.pdf. Accessed 10 September 2021.
- 184.
Cohn and De La Torre 2015, p. 132.
- 185.
Lee et al. 2015, p. 171.
- 186.
For prosody, see Sect. 3.3.2 above.
- 187.
Calix et al. 2012, pp. 530, 531.
- 188.
Sobin and Alpert 1999, p. 347.
- 189.
Jurafsky and Martin 2014, p. 238.
- 190.
Chuang and Wu 2004, pp. 45, 62.
- 191.
Picard 1997, pp. 179, 180.
- 192.
Lee et al. 2015, pp. 173, 177.
- 193.
Zbancioc and Feraru 2015, p. 1.
- 194.
Tomba et al. 2018, p. 560.
- 195.
Fayek et al. 2017, p. 60.
- 196.
Russell and Norvig 2016, p. 2.
- 197.
Jebelean 2009, p. 63.
- 198.
Eyal 2014, p. 191.
- 199.
Ibid.
- 200.
Jebelean 2009, p. 63.
- 201.
Eyal 2014, p. 193.
- 202.
Harrison 2009, p. 1.
- 203.
Eyal 2014, p. 201.
- 204.
Gavanelli and Mancini 2013, p. 113.
- 205.
Davis and Morgenstern 2004, p. 1.
- 206.
Shoham et al. 2018, p. 64.
- 207.
Metz P (2018) Paul Allen Wants to Teach Machines Common Sense, https://www.nytimes.com/2018/02/28/technology/paul-allen-ai-common-sense.html. Accessed 10 September 2021.
- 208.
Tandon et al. 2017, p. 49.
References
Abhang P et al (2016) Introduction to EEG- and speech-based emotion recognition. Elsevier Inc, London.
Alonso E (2014) Actions and agents. In: Frankish K, Ramsey W (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge.
Alpaydin E (2016) Machine Learning: The New AI. MIT Press, Cambridge.
Amit Y et al (2021) Object Detection. In: Ikeuchi K (ed) Computer Vision—A Reference Guide. Springer, Boston.
Barrett LF et al (2019) Emotional Expressions Reconsidered. Psychological Science in the Public Interest 20(1):1–68.
Bartlett M et al (2005) Toward Automatic Recognition of Spontaneous Facial Actions. In: Ekman P, Rosenberg E (eds) What the Face Reveals. OUP, Oxford.
Bellman R (1978) An Introduction to Artificial Intelligence: Can computers think? Boyd & Faser, San Francisco.
Bernhardt C (2016) Turing’s Vision: The Birth of Computer Science. MIT Press, Cambridge.
Bishop C (2006) Pattern Recognition and Machine Learning. Springer, New York.
Calders T, Custers B (2013) What is Data Mining and How Does it Work? In: Custers B et al (eds) Discrimination and Privacy in the Information Society. Springer, Berlin.
Calix R et al (2012) Detection of Affective States from Text and Speech For Real-Time Human-Computer Interaction. Human Factors and Ergonomics Society 54(4):530–545.
Calvo R et al (2015) Introduction to Affective Computing. In: Calvo R et al (eds) The Oxford Handbook of Affective Computing. OUP, Oxford.
Chen A, Hao K (2020) Emotion AI researchers say overblown claims give their work a bad name. https://www.technologyreview.com/2020/02/14/844765/ai-emotion-recognition-affective-computing-hirevue-regulation-ethics/. Accessed 10 September 2021
Chow T, Cho SY (2007) Neural Networks and Computing: Learning Algorithms and Applications. Imperial College Press, London
Chuang Z, Wu C (2004) Multi-Modal Emotion Recognition from Speech and Text. Computational Linguistics and Chinese Language Processing 9(2):45–62.
Cohn J, De La Torre F (2015) Automated Face Analysis for Affective Computing. In: Calvo R et al (eds) The Oxford Handbook of Affective Computing. OUP, Oxford.
Das S et al (2015) Applications of Artificial Intelligence in Machine Learning: Review and Prospect. IJCA 115(9):31–41.
Davis E, Morgenstern L (2004) Introduction: Progress in formal commonsense reasoning. Artificial Intelligence 153:1–12.
De Laat P (2017) Algorithmic Decision-Making based on Machine Learning from Big Data: Can Transparency Restore Accountability? Philos Technol 31(4):525–541.
Deng L, Liu Y (2018) A Joint Introduction to Natural Language Processing and Deep Learning. In: Deng L, Liu Y (eds) Deep learning in natural language processing. Springer, Singapore.
Deng L, Liu Y (2018) Epilogue: Frontiers of NLP in the Deep Learning Era. In: Deng L, Liu Y (eds) Deep learning in natural language processing. Springer, Singapore.
Engelbrecht A (2007) Computational Intelligence – An Introduction. John Wiley & Sons, Hoboken.
Eyal A (2014) Reasoning and decision making. In: Frankish K, Ramsey W (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge.
Fayek H et al (2017) Evaluating deep learning architectures for Speech Emotion Recognition. Neural Networks 92:60–68.
François-Lavet V et al (2018) An Introduction to Deep Reinforcement Learning. Foundations and Trends in Machine Learning 11 (3-4):2–140.
Franklin S (2014) History, motivations, and core themes. In: Frankish K, Ramsey W (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge.
Gavanelli M, Mancini T (2013) Automated Reasoning. Intelligenza Artificiale 7(2):113–124.
Goldberg Y (2017) Neural Network Methods in Natural Language Processing. Morgan & Claypool Publishers, San Rafael.
Goodfellow I et al (2016) Deep Learning. MIT Press, Cambridge www.deeplearningbook.org Accessed 10 September 2021.
Harrison J (2009) Handbook of Practical Logic and Automated Reasoning. Cambridge University Press, Cambridge.
Hastie T et al (2008) The Elements of Statistical Learning. Springer, New York.
Heigold G et al (2015) End-to-End Text-Dependent Speaker Verification. https://arXiv.org/pdf/1509.08062.pdf Accessed 10 September 2021.
Hernández-Orallo J (2017) Evaluation in Artificial Intelligence: from task-oriented to ability-oriented measurement. Artificial Intelligence Review 48:397–447.
Hourri S, Kharroubi J (2020) A deep learning approach for speaker recognition. International Journal of Speech and Technology 23(1):123–131.
Jampani V (2017) Learning Inference Models for Computer Vision. https://publikationen.uni-tuebingen.de/xmlui/handle/10900/76486 Accessed 10 September 2021.
Jebelean T et al (2009) Automated Reasoning. In: Buchberger B et al (eds) Hagenberg Research. Springer, Berlin.
Jurafsky D, Martin J (2014) Speech and Language Processing. Pearson Education Limited, New Jersey.
Kanjo E et al (2015) Emotions in context: examining pervasive affective sensing systems, applications, and analyses. Pers Ubiquit Comput 19:1197–1212.
Keltner D et al (2019) Emotional Expression: Advances in Basic Emotion Theory. Journal of Nonverbal Behaviour 43(2):133–160.
Kline R (2011) Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence. IEEE 33(4):5–16.
Kotu V, Bala D (2019) Data Science. Elsevier, Cambridge.
Kovač V et al (2019) Frame-based classification for cross-speed gait recognition. Multimedia Tools and Applications 78:5621–5643.
Kumar D et al (2019) Detecting item interaction and movement US Patent Number US 10268983. https://patentimages.storage.googleapis.com/01/0b/6e/de57009f5670ae/US20150019391A1.pdf Accessed 10 September 2021.
Lee C et al (2015) Speech in Affective Computing. In: Calvo R et al (eds) The Oxford Handbook of Affective Computing. OUP, Oxford.
Li St, Jain A (2011) Introduction. In: Li S, Jain A (eds) Handbook of Face Recognition. Springer, London.
Marechal C et al (2019) Survey on AI-Based Multimodal Methods for Emotion Detection. In: Kołodziej J, González-Vélez H (eds) High-Performance Modelling and Simulation for Big Data Applications. Springer, Cham.
Marsland S (2015) Machine Learning: An Algorithmic Perspective. Chapman & Hall, Boca Raton.
Mary L (2019) Extraction of Prosody for Automatic Speaker, Language, Emotion and Speech Recognition. Springer International, Cham.
Mavroudis V, Veale M (2018) Eavesdropping Whilst You’re Shopping: Balancing Personalisation and Privacy in Connected Retail Spaces. https://doi.org/10.1049/cp.2018.0018.
Metz P (2018) Paul Allen Wants to Teach Machines Common Sense. https://www.nytimes.com/2018/02/28/technology/paul-allen-ai-common-sense.html. Accessed 10 September 2021
Minsky M (1968) Semantic Information Processing. MIT Press, Cambridge.
Mitchell T (2006) The discipline of Machine Learning. http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf Accessed 10 September 2021.
Mohri M et al (2012) Foundations of Machine Learning. MIT Press, Cambridge.
Moses D et al (2019) Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nature Communications 10:1–14.
Munakata T (2008) Fundamentals of the New Artificial Intelligence. Springer, London.
Murphy K (2012) Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge.
Nielsen M (2015) Neural Networks and Deep Learning. Determination Press, http://neuralnetworksanddeeplearning.com/chap5.html Accessed 10 September 2021.
Nilsson N (2010) The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press, Cambridge.
Pascu L (2019) New Kairos Facial Recognition Camera Offers Customer Insights. https://www.biometricupdate.com/201909/new-kairos-facial-recognition-camera-offers-customer-insights. Accessed 10 September 2021
Picard R (1997) Affective Computing. MIT Press, Cambridge.
Regan J (2016) New Zealand passport robot tells applicant of Asian descent to open eyes. https://www.reuters.com/article/us-newzealand-passport-error/new-zealand-passport-robot-tells-applicant-of-asian-descent-to-open-eyes-idUSKBN13W0RL. Accessed 10 September 2021
Rosenberg E (2005) Introduction: The Study of Spontaneous Facial Expressions in Psychology. In: Ekman P, Rosenberg E (eds) What the Face Reveals. OUP, Oxford.
Rumelhart D et al (1986) Learning representations by backpropagating errors. Nature 323:533–536.
Russell S, Norvig P (2016) Artificial Intelligence: A Modern Approach. Pearson Education Limited, Essex.
Serban I et al (2017) A Deep Reinforcement Learning Chatbot, https://arXiv.org/pdf/1709.02349.pdf Accessed 10 September 2021.
Shi Z (2011) Advanced Artificial Intelligence. World Scientific, Singapore.
Shoham Y et al (2018) The AI Index 2018 Annual Report. https://hai.stanford.edu/sites/default/files/2020-10/AI_Index_2018_Annual_Report.pdf Accessed 10 September 2021.
Sobin C, Alpert M (1999) Emotion in Speech: The Acoustic Attributes of Fear, Anger, Sadness, and Joy. Journal of Psycholinguistic Research 28(4):347–365.
Sokolova A, Konushin A (2019) Methods of Gait Recognition in Video. Programming and Computer Software 45(4):213–220.
Strauß S (2018) From Big Data to Deep Learning: A Leap Towards Strong AI or Intelligentia Obscura. BDCC 2(3):2–16.
Sturari M et al (2016) Robust and affordable retail customer profiling by vision and radio beacon sensor fusion. Pattern Recognition Letters 81:30–40.
Szeliski R (2011) Computer Vision: Algorithms and Applications. Springer, London.
Tandon N et al (2017) Commonsense Knowledge in Machine Intelligence. SIGMOD Record 46(4):49–52.
Tomba K et al (2018) Stress Detection Through Speech Analysis. ICETE 1:394–398.
Tome P et al (2015) Facial soft biometric features for forensic face recognition. Forensic Science International 257:271–284.
Trigueros D et al (2018) Face recognition: From Traditional to Deep Learning Methods. https://arXiv.org/pdf/1811.00116.pdf Accessed 10 September 2021.
Tur G et al (2018) Deep Learning in Conversational Language Understanding. In: Deng L, Liu Y (eds) Deep learning in natural language processing. Springer, Singapore.
Turing A (1950) Computing Machinery and Intelligence. Mind LIX(236):433–460.
Tzirakis P et al (2015) End-to-End Multimodal Emotion Recognition using Deep Neural Networks. Journal of Latex Class Files 14(8):1–12.
Usuelli M (2014) R machine learning essentials. Packt Publishing, Birmingham.
Valstar M (2015) Automatic Facial Expression Analysis. In: Mandal M, Awasthi A (eds) Understanding Facial Expressions in Communication. Springer, New Delhi.
Warwick K (2012) Artificial Intelligence: The basics. Routledge, New York.
Welinder Y, Palmer A (2018) Face Recognition, Real-Time Identification, and Beyond. In: Selinger E et al (eds) The Cambridge Handbook of Consumer Privacy. Cambridge University Press, Cambridge.
Yang M et al (2015) Speech Reconstruction from Human Auditory Cortex with Deep Neural Networks. https://www.isca-speech.org/archive_v0/interspeech_2015/papers/i15_1121.pdf Accessed 10 September 2021.
Yoshida S (2011) Computer Vision. Nova Science Publisher Inc, Lancaster.
Zbancioc M, Feraru S (2015) A study about the automatic recognition of the anxiety emotional state using Emo-DB. https://doi.org/10.1109/EHB.2015.7391506.
Zuiderveen Borgesius F (2019) Discrimination, artificial intelligence, and algorithmic decision-making. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73 Accessed 10 September 2021.
Copyright information
© 2022 T.M.C. Asser Press and the authors
Cite this chapter
Häuselmann, A. (2022). Disciplines of AI: An Overview of Approaches and Techniques. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-523-2_3
DOI: https://doi.org/10.1007/978-94-6265-523-2_3
Publisher Name: T.M.C. Asser Press, The Hague
Print ISBN: 978-94-6265-522-5
Online ISBN: 978-94-6265-523-2
eBook Packages: Law and Criminology (R0)