Projects

  • Project statement v1
  • Essay and exercise v2
    • Mooshak is configured: the platform will soon be available; C, C++, Python and Java are supported
    • Please check input-output cases v1.2 with local corrections in red (older version here)
    • There is no longer a need to implement mixture strategies for agent interaction with more than 2 tasks (version 2 was uploaded on March 17th, where we removed this requirement and redistributed its score; the old version can still be accessed)

Exercise submission

Principles for submitting the exercise:
  • NEW: due to an obscure way of accessing the inputs in Mooshak via exceptions, we are now providing all of the inputs used by Mooshak (note that in Mooshak T24 has "T1=[A=(100%,1)]").
    • we apologize if, over the previous days, you made a big effort to correct the behavior of the agent without access to the inputs.
  • NEW: for the final assessment, we will soon provide another Mooshak contest with very similar inputs (48 tests with slight variations) but a limited number of submissions, so we kindly ask you to upload your final version to that contest once it is released.
  • please consider using Firefox (instead of Chrome) to see the succeeded/failed tasks in Mooshak without size restrictions

  • Mooshak available at https://acp.tecnico.ulisboa.pt/~mooshak
  • Login: please go to https://fenix.tecnico.ulisboa.pt/disciplinas/AASMA26/2018-2019/2-semestre/grupos (Essay) to check your number and then go to https://acp.tecnico.ulisboa.pt/~mooshak/cgi-bin/aasmagetpass to access your username and password
  • you are unable to see the input-output of each test; to guide your debugging, consider the following:
  • updating behavior: the rational agent sees the utility of her decision, thus the task ID is not given: "(1,T1.A)" -> "(1,A)"
  • guarantee your agent puts a newline after each answer to stdout (see the examples or the submission template, and the sketch after this list)
  • guarantee the tasks in the output of the decide-risk and decide-mixed agents are presented either in alphabetical or input order
  • remember that libraries with game theory facilities are explicitly forbidden
  • to use other external libraries: add the code to dedicated file(s) in your project and insert the comment "INTERNET CODE"
  • submission instructions according to the accepted languages:
    • JAVA
      • create a .JAR file with your .java classes
      • guarantee that the .JAR filename is the same as the .java file where your main method is located
      • compiler: javac 1.8.0
      • example: Agent.jar
    • Python
      • create a .ZIP file with your .py files and a Makefile
      • guarantee that the executable generated by your Makefile is named "exercise"
      • compilers: python 2.7.13 / python 3.5.3
      • example: exercise.zip
    • C and C++
      • create a .ZIP file with your .c or .cpp files and a Makefile
      • guarantee that the executable generated by your Makefile is named "exercise"
      • examples: exercise in C and exercise in C++
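
A minimal Python sketch, only to illustrate the newline requirement mentioned in the list above. The I/O convention it assumes (one command per line on stdin, the answer written to stdout) and the decide helper are hypothetical, so follow the official submission template for the real interface.

    import sys

    def decide(command, arguments):
        # Hypothetical placeholder for the real decision logic of the agent.
        return "(1.00,T1)"

    for line in sys.stdin:
        parts = line.strip().split(" ", 2)
        if not parts or not parts[0]:
            continue
        answer = decide(parts[0], parts[1:])
        sys.stdout.write(answer + "\n")   # every answer must end with a newline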

FAQ

Please always consult the FAQ before posting questions to the faculty.

Exercise
1. Aren't "(1,T2.A)" and "T2=[A1=(1, -2)]" (at the end of page 1 of version 1) incorrect?
Yes. They should be respectively replaced by "(1,T1.A)" and "T2=[C=(1, -2)]".

2. I am having a hard time understanding the example "decide-rational (T1=[A=(1,1)],T2=[A=(60%,-1),B=(40%,1)]) 4". Could you expand?
The agent faces two options: pursue task T1 or T2. The probabilities of receiving a specific utility per task depend either on: a) the initial beliefs (for instance, when pursuing T2, the agent believes she will receive a utility of -1 with 60% probability and a utility of 1 with 40% probability); or b) the gathered evidence (for instance, the agent received a utility of 1 once in the context of T1, i.e. from evidence the agent believes she will receive a utility of 1 with 100% probability). The agent opts for T1 since it is the task maximizing expected utility. As a result of executing this task, the agent observes a utility of -3 and updates its internal state to include this new evidence: (T1=[A=(1,1),B=(1,-3)],T2=[A=(60%,-1),B=(40%,1)]). The agent always prefers evidence over the initial/preliminary beliefs. If the agent then opted for T2 and received "(-3,T2.C)", its internal state would be updated towards "(T1=[A=(1,1),B=(1,-3)],T2=[C=(1,-3)])".
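
A minimal sketch of the expected-utility computation described above. The representation of an option as a (weight, utility, is_percentage) tuple and the rule that observed counts completely replace the percentage beliefs (turning counts into relative frequencies) are assumptions made here for illustration; check the statement for the authoritative rule.

    # Sketch only: expected utility of a task whose options are either
    # evidence counts "(n,u)" or initial percentage beliefs "(p%,u)".
    def expected_utility(options):
        evidence = [(n, u) for n, u, is_pct in options if not is_pct]
        if evidence:
            # Evidence overrides the initial beliefs: use relative frequencies.
            total = sum(n for n, _ in evidence)
            return sum(n * u for n, u in evidence) / float(total)
        # No evidence yet: fall back on the initial percentage beliefs.
        return sum(p / 100.0 * u for p, u, _ in options)

    # T1=[A=(1,1)] -> 1.0 and T2=[A=(60%,-1),B=(40%,1)] -> -0.2, so T1 is chosen.
    print(expected_utility([(1, 1, False)]))
    print(expected_utility([(60, -1, True), (40, 1, True)]))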

3. Do we need to worry about the identifiers of the options and suboptions?
The identifiers aim to facilitate matching and the agent should trust them: there is no need to worry about their generation or consistency.

4. Aren't the answers "(1.00,T1)" for the 1st risk-averse case and "(0.00,T1)" for the 6th case of the input-output examples (version 1) incorrect?
Yes. The correct answers are respectively "(1.00,T2)" and "(1.00,T1)".

5. Is the third argument (specifying the number of interactions of the agent with the environment) required for decide-risk, decide-nash and decide-mixed?
No, it is not mandatory (no such tests will be executed). Nevertheless, this argument will be considered for decide-rational and decide-conditional; when absent, the default value is 1.

6. Shouldn't the third argument of the example provided at the end of page 1 of the exercise statement be 3 instead of 4?
Yes. This example only shows 3 interactions with the environment.

7. Could you expand on the behavior of the risk-averse agent?
Consider "decide-risk (T1=[A=(1,2)],T2=[A=(1,-1),B=(1,6)])". T2 has a higher expected utility than T1 but can lead to a negative outcome, something the agent wants to avoid. In this context, the agent distributes effort in a way that maximizes the effort given to T2 while guaranteeing no negative outcomes (in accordance with the current evidence). Let us consider the following scenarios:
- if the agent allocates 0.2 to T1 and 0.8 to T2, then the minimum utility is 0.2*2+0.8*(-1)=-0.4 => undesirable;
- if the agent allocates 0.5 to both T1 and T2, then the minimum utility is positive (0.5) yet the maximum expected utility is capped at 2.25;
- the best allocation is 0.33 to T1 and 0.67 to T2, because it guarantees a non-negative minimum utility (0.0) and the maximum expected utility is 2.335.
Now consider a different example: "decide-risk (T1=[A=(1,1)],T2=[A=(9,5),B=(1,-10)],T3=[A=(1,5),B=(1,-1)])".
Expected utilities are T1=1, T2=3.5 and T3=2, and minimum utilities are T1=1, T2=-10 and T3=-1.
Accommodating T2 is not worthwhile given the need to allocate a high weight to T1 to compensate for the possible -10 outcome of T2.
To successfully implement the behavior of the risk-averse agent for such hard cases, one needs to find a proper problem formulation based on expected and minimum utilities (see the sketch after this answer).
Do not forget that a risk-averse agent wants to diversify its portfolio of tasks: the weight given to a task should be distributed among the tasks with the same utility profile.
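
One possible formulation (an illustration, not necessarily the official one) that is consistent with the worked numbers above is a linear program: maximize the weighted sum of expected utilities subject to a non-negative weighted sum of minimum utilities, with the weights summing to 1. The sketch below uses scipy.optimize.linprog for brevity; whether scipy is available in Mooshak is not guaranteed, so treat it as a local prototyping aid, and note that it does not implement the diversification rule mentioned above. The lightweight solvers suggested in the last FAQ entry work equally well.

    # Sketch only: maximize sum(p_i * EU_i) s.t. sum(p_i * minU_i) >= 0, sum(p_i) = 1, p_i >= 0.
    from scipy.optimize import linprog

    def risk_averse_allocation(expected, minimum):
        """expected[i]/minimum[i]: expected and minimum utility of task i."""
        n = len(expected)
        c = [-e for e in expected]          # linprog minimizes, so negate the objective
        A_ub = [[-m for m in minimum]]      # -(sum p_i * minU_i) <= 0
        b_ub = [0.0]
        A_eq = [[1.0] * n]                  # weights sum to 1
        b_eq = [1.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
        return res.x

    # First example above: expected utilities [2, 2.5], minimum utilities [2, -1].
    print(risk_averse_allocation([2.0, 2.5], [2.0, -1.0]))   # ~[0.33, 0.67]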

8. In the 3rd example of the updating behavior in the input-output examples, shouldn't "B1=(3,1)" be "B1=(2,1)"? Yes.
9. Shouldn't the result of example 3 of the Nash equilibrium (decide-nash), mine=T0,peer=T1 (25,25), be mine=T1,peer=T0 (40,40)? Yes, the sum is higher.
10. Please consult version 1.2 of the input-output examples, with local changes in red.

11. Is the example of decide-conditional given in the project statement correct?
There is a wrong value there: please look at the input-output examples or check the Mooshak examples instead.

12. Could you translate one of the given input-output examples into a bi-matrix? Yes, considering example 3 of section 3.1:
Payoff matrix (Player 1 chooses the row, Player 2 chooses the column):

             T0        T1        T2
    T0      0, 0     25, 40     5, 10
    T1     40, 25     0, 0      5, 15
    T2     10, 5     15, 5     10, 10
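
Not part of the statement, but possibly helpful for debugging decide-nash: the sketch below finds the pure-strategy Nash equilibria of the bi-matrix above with a plain best-response check (no game-theory library, which would be forbidden anyway). How decide-nash breaks ties among several equilibria is defined by the statement (question 9 hints at preferring the higher sum of payoffs).

    # Sketch only: pure Nash equilibria of the bi-matrix above via best responses.
    payoff = {  # (my task, peer task) -> (my utility, peer utility)
        ('T0', 'T0'): (0, 0),   ('T0', 'T1'): (25, 40), ('T0', 'T2'): (5, 10),
        ('T1', 'T0'): (40, 25), ('T1', 'T1'): (0, 0),   ('T1', 'T2'): (5, 15),
        ('T2', 'T0'): (10, 5),  ('T2', 'T1'): (15, 5),  ('T2', 'T2'): (10, 10),
    }
    tasks = ['T0', 'T1', 'T2']

    def is_pure_nash(mine, peer):
        # Neither player gains by unilaterally switching to another task.
        best_mine = all(payoff[(mine, peer)][0] >= payoff[(m, peer)][0] for m in tasks)
        best_peer = all(payoff[(mine, peer)][1] >= payoff[(mine, p)][1] for p in tasks)
        return best_mine and best_peer

    print([(m, p) for m in tasks for p in tasks if is_pure_nash(m, p)])
    # [('T0', 'T1'), ('T1', 'T0'), ('T2', 'T2')]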

13. I want to use a light LP solver, any suggestions?
In Java: https://algs4.cs.princeton.edu/code/edu/princeton/cs/algs4/LinearProgramming.java.html
In Python: https://github.com/dmishin/pylinprog