Course Plan

Theoretical Lectures

Introduction to Artificial Intelligence

Course information: structure, evaluation, bibliography. What Artificial Intelligence (AI) means. Approaches to defining AI. Foundations of AI. The dawn of AI.

Introduction to AI

Overview of state-of-the-art applications. Concepts of intelligent agent, rational agent, autonomy, learning, with examples.

Introduction to AI

Nature of environments; examples. Properties of task environments. Structure of agents. Learning agents.

Problem solving

Problem and goal formulation. Well-defined problems and solutions. State space. Level of abstraction. Examples of toy and real-world problems. Search trees.

Search methods

Search tree. General search algorithm. Measuring performance. Uninformed search strategies: breadth-first, uniform cost, depth-first, depth-limited, iterative deepening.
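
As a point of reference, a minimal Python sketch of breadth-first graph search; the helper names goal_test and successors are illustrative assumptions, not part of the course material:

    from collections import deque

    def breadth_first_search(start, goal_test, successors):
        """Breadth-first graph search; returns a path (list of states) to a goal, or None."""
        frontier = deque([[start]])      # FIFO queue of paths
        reached = {start}                # states already generated
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if goal_test(state):
                return path
            for nxt in successors(state):
                if nxt not in reached:   # repeated-state check
                    reached.add(nxt)
                    frontier.append(path + [nxt])
        return None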

Search methods

Short demo and review of some uninformed search methods. Bidirectional search. Dealing with repeated states. Partial information: sensorless, contingency, and exploration problems. Informed search methods. Evaluation and heuristic functions. Best-first: greedy and A*.

Heuristic search

A* search. Admissible heuristics and optimality. Proof of optimality. Dealing with repeated nodes. Consistent heuristics. Memory-bounded heuristic search. Effective branching factor.
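
To make the A* discussion concrete, a minimal sketch assuming successors(s) yields (next_state, step_cost) pairs and h is an admissible heuristic; these names are illustrative only:

    import heapq
    import itertools

    def a_star_search(start, goal_test, successors, h):
        """A* graph search ordered by f(n) = g(n) + h(n)."""
        counter = itertools.count()                       # tie-breaker for the heap
        frontier = [(h(start), next(counter), 0, start, [start])]
        best_g = {start: 0}                               # cheapest known cost to each state
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if goal_test(state):
                return path, g
            for nxt, cost in successors(state):
                g2 = g + cost
                if g2 < best_g.get(nxt, float("inf")):    # keep only the cheapest path found
                    best_g[nxt] = g2
                    heapq.heappush(frontier, (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
        return None, float("inf")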

Game playing

Heuristic design: relaxed problems, portfolios, subproblems, pattern databases. Adversarial search: games. Introduction to game theory. Minimax algorithm. Alpha-beta pruning. Transposition tables. Computer game playing: evaluation function and cut-off test. Deep Blue overview.
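
A minimal sketch of depth-limited minimax with alpha-beta pruning; moves, apply_move, and evaluate are assumed game-specific helpers, with scores taken from MAX's point of view:

    import math

    def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
        """Depth-limited minimax with alpha-beta pruning."""
        legal = moves(state)
        if depth == 0 or not legal:                 # cut-off test
            return evaluate(state)                  # evaluation function
        if maximizing:
            value = -math.inf
            for m in legal:
                value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                             alpha, beta, False, moves, apply_move, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:                   # beta cut-off: MIN avoids this branch
                    break
            return value
        value = math.inf
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:                       # alpha cut-off: MAX avoids this branch
                break
        return value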

Constraint satisfaction problems

Representing CSPs. Solving CSPs using search methods. Recursive backtracking. Heuristics for variable and value selection. Forward checking and constraint propagation. Local search methods for CSPs.
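
A minimal sketch of recursive backtracking for a CSP, with naive variable and value ordering; domains maps each variable to its candidate values, and consistent(var, value, assignment) is an assumed constraint-checking helper:

    def backtracking_search(variables, domains, consistent, assignment=None):
        """Recursive backtracking: extend a partial assignment one variable at a time."""
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):
            return assignment                                     # complete, consistent assignment
        var = next(v for v in variables if v not in assignment)   # naive variable selection
        for value in domains[var]:                                # naive value ordering
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtracking_search(variables, domains, consistent, assignment)
                if result is not None:
                    return result
                del assignment[var]                                # undo and backtrack
        return None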

Knowledge and representation

Representing knowledge and reasoning. The wumpus world. The concepts of syntax and semantics in logic. Entailment and logical inference. Soundness and completeness in logic. The grounding problem.

Propositional logic

Syntax and semantics. Inference. Deduction theorem. Reasoning patterns. Examples.

First-order logic

Resolution in propositional logic: conjunctive normal form, proof by contradiction. Horn clauses and logic programming. Introduction to first-order logic: objects, relations, and functions.

First-order logic

Ontological and epistemological commitment. First-order logic: models, syntax. Examples. Converting sentences to conjunctive normal form (CNF).

Resolution in first-order logic

Resolution inference rule. Unification. Most general unifier. Factoring. Completeness of resolution. Using resolution to answer questions. Dealing with equality: demodulation and paramodulation.
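
A minimal sketch of unification over terms encoded as Python values: variables are strings starting with "?", compound terms are tuples; the occurs check is omitted to keep the sketch short:

    def is_var(t):
        return isinstance(t, str) and t.startswith("?")

    def unify(x, y, subst):
        """Return a substitution (dict) making x and y identical, or None on failure."""
        if subst is None:
            return None
        if x == y:
            return subst
        if is_var(x):
            return unify_var(x, y, subst)
        if is_var(y):
            return unify_var(y, x, subst)
        if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
            for xi, yi in zip(x, y):
                subst = unify(xi, yi, subst)
            return subst
        return None

    def unify_var(var, term, subst):
        if var in subst:
            return unify(subst[var], term, subst)
        if is_var(term) and term in subst:
            return unify(var, subst[term], subst)
        return {**subst, var: term}        # no occurs check in this sketch

    # Example: unify(("Knows", "John", "?x"), ("Knows", "John", "Jane"), {}) -> {"?x": "Jane"}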

Situation calculus

Resolution algorithms. Logic-based agent. Situations as objects. Change axioms. Frame problem. Frame axioms. Successor-state axioms.

Planning

Successor-state axioms in situation calculus. Introduction to planning and STRIPS.

STRIPS language and search methods for planning

STRIPS planning language. Example of the blocks world. Forward and backward search for STRIPS. Heuristic design for planning problems.
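
For illustration, one way the blocks-world Move action could be written down as a STRIPS-style schema (a plain Python dictionary; the field names are an assumption, not a standard format):

    # Move block b from x to y: b must sit on x, and both b and y must be clear.
    move_schema = {
        "action": "Move(b, x, y)",
        "preconditions": ["On(b, x)", "Clear(b)", "Clear(y)"],
        "add_list": ["On(b, y)", "Clear(x)"],
        "delete_list": ["On(b, x)", "Clear(y)"],
    }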

Partial order planning

Partial versus total order plans. Definition of a partial order plan. Formulating partial order planning as a state-space search problem.

Planning and acting in non-deterministic domains

Sensorless planning. Conditional planning. Execution monitoring and replanning. Continuous planning. Notions of multiagent environments.

Uncertain knowledge and reasoning

Probabilistic framework. Notions of utility and decision theory. Maximum expected utility. Review of probability theory. Bayes' rule. Notions of Bayesian classification.
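
For reference, Bayes' rule written in LaTeX, with H a hypothesis and e the observed evidence:

    P(H \mid e) = \frac{P(e \mid H)\, P(H)}{P(e)}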

Bayesian networks

Representing causality in Bayesian networks. Computing the joint probability distribution. Exact inference in Bayesian networks.
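
The factorization of the joint distribution encoded by a Bayesian network, written in LaTeX:

    P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P\big(x_i \mid \mathrm{parents}(X_i)\big)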

Fuzzy logic and decision theoretic agents

Notions of fuzzy logic. Utility. Maximum expected utility principle. Decision networks.

Sequential decision problems

Finite state automata. Markov decision processes. State utility. Policy. Bellman equation.
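
The Bellman equation for the utility of a state, in LaTeX, in the formulation with state-based rewards R(s), discount factor \gamma, and transition model P(s' \mid s, a):

    U(s) = R(s) + \gamma \, \max_{a} \sum_{s'} P(s' \mid s, a) \, U(s')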

Sequential decision making

Value iteration. Policy iteration. Notions of POMDP (Partially Observable MDP) and PSR (Predictive State Representation).
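
A minimal Python sketch of value iteration under assumed data structures: states is an iterable, actions(s) lists the actions available in s, P[s][a] is a list of (probability, next_state) pairs, and R[s] is the reward for being in s:

    def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
        """Iterate the Bellman update until utilities change by less than eps."""
        U = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            U_new = {}
            for s in states:
                acts = actions(s)
                best = max(sum(p * U[s2] for p, s2 in P[s][a]) for a in acts) if acts else 0.0
                U_new[s] = R[s] + gamma * best
                delta = max(delta, abs(U_new[s] - U[s]))
            U = U_new
            if delta < eps:
                return U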

Decision trees

Inducing decision trees from examples. Information theory. Information gain applied to attribute choice in decision trees. Training and test sets. Overfitting. Notion of cross-validation and tree pruning.
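
A minimal sketch of the entropy and information-gain computations used for attribute choice; examples is assumed to be a list of dictionaries whose class label is stored under the key given by label:

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (in bits) of a sequence of class labels."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(examples, attribute, label):
        """Expected reduction in entropy from splitting the examples on attribute."""
        before = entropy([e[label] for e in examples])
        remainder = 0.0
        for v in {e[attribute] for e in examples}:
            subset = [e[label] for e in examples if e[attribute] == v]
            remainder += (len(subset) / len(examples)) * entropy(subset)
        return before - remainder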

Introduction to learning

Learning agent architecture. Supervised, unsupervised, and reinforcement learning. Inductive learning. Ockham's razor.

Reinforcement learning

Passive reinforcement learner: ADP and TD. Active reinforcement learning: exploration vs. exploitation and Q-learning.
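
A minimal sketch of the tabular Q-learning update and an epsilon-greedy policy for the exploration/exploitation trade-off; Q is a table with default value 0 and actions is the list of actions available in a state (names are illustrative assumptions):

    import random
    from collections import defaultdict

    Q = defaultdict(float)       # Q-table: maps (state, action) to an estimated value

    def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
        """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max((Q[(s_next, a2)] for a2 in actions), default=0.0)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

    def epsilon_greedy(Q, s, actions, epsilon=0.1):
        """Explore with probability epsilon, otherwise exploit the current estimates."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])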

Seminar: Some questions the history of AI asks us

Some examples of 20th-century efforts to produce machine intelligence will be briefly presented, and some general, rather philosophical questions will be raised about the meaning of these experiences. We will suggest that some branches of New Robotics offer promising responses to such questions.