Revision as of 20:58, 24 August 2021

Introduction to Computational Intelligence 2-IKV-115a

The course aims to familiarize students with the basic principles of various computational methods of data processing that are collectively called computational intelligence (CI). These are mainly bottom-up approaches that solve (hard) problems with various heuristics (soft computing), rather than the exact, logic-based approaches of traditional artificial intelligence (hard computing). Examples of CI are nature-inspired methods (artificial neural networks, evolutionary algorithms, fuzzy systems), as well as probabilistic methods and reinforcement learning. After the course, students will be able to understand the important terms and algorithms of CI at a conceptual level and to choose the appropriate method(s) for a given task. The theoretical lectures are combined with a seminar in which the important concepts are discussed and practical examples are shown.



Course schedule

Type    | Day          | Time          | Room   | Lecturer
Lecture | Tuesday (?)  | 8:10 - 9:40   | online | Igor Farkaš
Seminar | Thursday (?) | 14:00 - 15:30 | online | Kristína Malinovská & Igor Farkaš

Syllabus

#   | Date   | Topic                                                                                                             | References
1.  | 22.09. | What is computational intelligence, basic concepts, relation to artificial intelligence.                          | Craenen & Eiben (2003); wikipedia; R&N (2010), chap.1; Sloman (2002)
2.  | 28.09. | Taxonomy of artificial agents, nature of environments.                                                            | R&N (2010), chap.2
3.  | 05.10. | Inductive learning via observations, decision trees. Model selection.                                             | R&N (2010), ch.18.1-3,18.6; Marsland (2015), ch.12
4.  | 12.10. | Supervised learning in feedforward neural networks (perceptrons), pattern classification, function approximation. | R&N (2010), ch.18.2; Marsland (2015), ch.3-4; Engelbrecht (2007), ch.2-3
5.  | 19.10. | Unsupervised (self-organizing) neural networks: feature extraction, data visualization.                           | Marsland (2015), ch.14; Engelbrecht (2007), ch.4
6.  | 26.10. | Statistical learning, probabilistic models.                                                                       | R&N (2010), ch.13,20.1-2
    | 02.11. | Q&A - preparation for midterm.                                                                                    | Thursday: mid-term test
7.  | 09.11. | Reinforcement learning I: basic principles and learning methods (TD-learning). Prediction problem.                | R&N (2010), ch.21.1-2
8.  | 16.11. | Reinforcement learning II (Q, SARSA), actor-critic, control problem, RL for continuous domains.                   | R&N (2010), ch.21.3-5; Woergoetter & Porr (2008)
9.  | 23.11. | Evolutionary computation: basic concepts, genetic algorithms.                                                     | Engelbrecht (2007), ch.8
10. | 30.11. | Fuzzy systems, fuzzy logic and reasoning.                                                                         | Engelbrecht (2007), ch.20-21; Zadeh (2007)
11. | 07.12. | Explainable artificial intelligence (XAI).                                                                        | Barreto Arrieta A. et al. (2020)
    | 15.12. | Summary, recap of main concepts, synergies.                                                                       |

Note: Dates refer to lectures; seminars take place three days later each week.

References

Course grading

  • Active participation during the lectures/exercises (35%): 15% for lectures, 20% for exercises.
  • Written mid-term test (30%).
  • Final oral exam (30%): we will discuss 3 questions, randomly chosen by a computer, that correspond to the weekly topics of the semester; a minimum of 1/3 of all points is required.
  • Small final project (10%) = implementation of a small neural network (using an existing Python library) and a short written report. Note: even without the project, a student who has participated very actively can still earn the maximum number of points. Deadline: 17th January, 2021.
  • Overall grading: A (>90%), B (>80%), C (>70%), D (>60%), E (>50%), Fx (otherwise).
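
The grading arithmetic above can be illustrated with a short sketch (the function name and structure are illustrative, not part of the course materials). Note that the nominal component maxima (35 + 30 + 30 + 10) sum to 105, which is why the project can be skipped without forfeiting the maximum:

```python
def overall_grade(participation, midterm, oral_exam, project=0.0):
    """Map component scores (in percent of the course total) to a letter grade.

    Components per the scheme above: participation up to 35, midterm up to 30,
    oral exam up to 30, optional project up to 10 (a bonus, since the
    nominal maximum exceeds 100).
    """
    total = participation + midterm + oral_exam + project
    # Thresholds are strict lower bounds, as written: A (>90%), ..., E (>50%).
    for letter, threshold in [("A", 90), ("B", 80), ("C", 70),
                              ("D", 60), ("E", 50)]:
        if total > threshold:
            return letter
    return "Fx"
```

For example, full marks on all mandatory components (35 + 30 + 30 = 95) already yield an A, while a total of exactly 90 falls to B because the thresholds are strict.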