
Commit a3db70a

0.7.2

2 parents 45e4d59 + c2b8dc6 commit a3db70a

File tree

13 files changed: +42 −56 lines changed

Project.toml

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 name = "ActionModels"
 uuid = "320cf53b-cc3b-4b34-9a10-0ecb113566a3"
 authors = ["Peter Thestrup Waade [email protected]", "Luke Ring [email protected]", "Malte Lau Møller", "Christoph Mathys [email protected]"]
-version = "0.7.1"
+version = "0.7.2"

 [deps]
 ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"

docs/julia_files/B_user_guide/4_model_fitting.jl

Lines changed: 0 additions & 19 deletions

@@ -115,25 +115,6 @@ chns = sample_posterior!(
     resample = true,
 );

-# ActionModels also provides functionality for saving segments of a chain and then resuming during sampling, so that long sampling runs can be interrupted and resumed later.
-# This is done by passing a `SampleSaveResume` object to the `save_resume` keyword argument.
-# The `save_every` keyword argument can be used to specify how often the chains should be saved to disk, and the `path` keyword argument specifies where the chains should be saved.
-# Chains are saved with a prefix (by default `ActionModels_chain_segment`) and a suffix that contains the chain and segment number.
-# NOTE: this feature is currently experimental, and may change in the future. Use with care.
-
-ActionModels_path = dirname(dirname(pathof(ActionModels))) #hide
-docs_path = joinpath(ActionModels_path, "docs") #hide
-
-chns = sample_posterior!(
-    model,
-    save_resume = SampleSaveResume(
-        path = joinpath(docs_path, ".samplingstate"),
-        save_every = 200,
-    ),
-    n_samples = 600,
-    resample = true,
-);
-
 # Finally, some users may wish to use Turing's own interface for sampling from the posterior instead.
 # The Turing interface is more flexible in general, but requires more boilerplate code to set up.
 # For this case, the `ActionModels.ModelFit` object contains the Turing model that is used under the hood. Users can extract and use it as any other Turing model, if they wish.
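The deleted comments describe segment-based checkpointing: the chain is written to disk every `save_every` draws under a path prefix, and an interrupted run can later resume from the saved segments. As a language-agnostic illustration of that general pattern (a Python sketch, not the ActionModels `SampleSaveResume` implementation; the function and file names here are hypothetical):

```python
import json
import os

def sample_with_checkpoints(n_samples, save_every, path, draw):
    """Draw samples, persisting a segment file every `save_every` draws,
    and resume from any previously saved segments in `path`."""
    os.makedirs(path, exist_ok=True)
    # Resume: reload previously saved segments in order (zero-padded names sort correctly)
    segments = sorted(f for f in os.listdir(path) if f.startswith("segment_"))
    samples = []
    for seg in segments:
        with open(os.path.join(path, seg)) as f:
            samples.extend(json.load(f))
    # Continue sampling from where the saved segments left off
    while len(samples) < n_samples:
        samples.append(draw(len(samples)))
        if len(samples) % save_every == 0:
            seg_idx = len(samples) // save_every
            segment = samples[(seg_idx - 1) * save_every : seg_idx * save_every]
            with open(os.path.join(path, f"segment_{seg_idx:04d}.json"), "w") as f:
                json.dump(segment, f)
    return samples
```

With `n_samples = 600` and `save_every = 200` (the values in the deleted example), this writes three segment files, and a second call on the same path returns the stored draws without re-sampling.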

docs/julia_files/C_premade_models/rescorla_wagner.jl

Lines changed: 6 additions & 6 deletions

@@ -364,7 +364,7 @@ model_function = function rescorla_wagner_bernoulli_report(
     β = parameters.action_noise

     #Transform the action noise into a precision (or inverse noise)
-    β = 1 / β
+    β⁻¹ = 1 / β

     #Extract Rescorla-Wagner submodel
     rescorla_wagner = attributes.submodel
@@ -375,8 +375,8 @@ model_function = function rescorla_wagner_bernoulli_report(
     #Extract the expected value from the Rescorla-Wagner submodel
     Vₜ = rescorla_wagner.expected_value

-    #Transform the expected value with a logistic function to get the action probability, weighted by the action precision β
-    action_probability = logistic(Vₜ * β)
+    #Transform the expected value with a logistic function to get the action probability, weighted by the action precision β⁻¹
+    action_probability = logistic(Vₜ * β⁻¹)

     #Return the action distribution, which is a Bernoulli distribution with the action probability as parameter
     action_distribution = Bernoulli(action_probability)
@@ -446,7 +446,7 @@ model_function = function rescorla_wagner_categorical_report(
     β = parameters.action_noise

     #Transform the action noise into a precision (or inverse noise)
-    β = 1 / β
+    β⁻¹ = 1 / β

     #Extract Rescorla-Wagner submodel
     rescorla_wagner = attributes.submodel
@@ -457,8 +457,8 @@ model_function = function rescorla_wagner_categorical_report(
     #Extract the vector of expected values from the Rescorla-Wagner submodel
     Vₜ = rescorla_wagner.expected_value

-    #Transform the expected values with a softmax function to get the action probabilities, weighted by the action precision β
-    action_probabilities = softmax(Vₜ .* β)
+    #Transform the expected values with a softmax function to get the action probabilities, weighted by the action precision β⁻¹
+    action_probabilities = softmax(Vₜ .* β⁻¹)

     #Return the action distribution, which is a Categorical distribution with the action probabilities as parameter
     action_distribution = Categorical(action_probabilities)
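The rename above replaces a reused `β` with an explicit `β⁻¹` for the precision (inverse action noise), without changing the computation. The effect of that transform can be sketched in Python (plain re-implementations of `logistic` and `softmax` for illustration only; the documentation itself is written in Julia):

```python
import math

def logistic(x):
    # Standard logistic function: maps any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Higher action noise β means lower precision β⁻¹, flattening the probabilities
beta = 2.0
inv_beta = 1.0 / beta

# Bernoulli report: a single expected value V_t
V = 1.0
p_action = logistic(V * inv_beta)

# Categorical report: a vector of expected values
Vs = [1.0, 0.0, -1.0]
p_actions = softmax([v * inv_beta for v in Vs])
```

With `beta = 2.0`, `p_action` is about 0.62; as `beta` grows, `inv_beta` shrinks and the outputs approach the maximally noisy baselines of 0.5 and 1/3.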

docs/src/images/percact_loop.png

59.1 KB

docs/src/images/percact_loop_3.svg renamed to docs/src/images/percact_loop.svg

Lines changed: 13 additions & 13 deletions

docs/src/images/population_model.png

71.6 KB

docs/src/images/population_model.svg

Lines changed: 4 additions & 4 deletions

docs/src/markdowns/theory.md

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ There are various modelling approaches or paradigms within cognitive modelling,
 ## Terminology: the action model
 ActionModels is built conceptually on the action-perception loop, a ubiquitous metaphor in the cognitive and psychological sciences. Here, an agent (for example a human participant in an experiment) is conceptualised as receiving observations $o$ from an environment, and producing actions $a$. The environmental state $\varepsilon$ changes over time, partially dependent on the produced actions, so that the observations $o$ it produces can vary in complex and action-dependent ways. Environments can vary from a pre-defined set of observations (often used in cognitive and neuroscientific experiments) to more complex fully reactive environments. The agent's cognitive state $\vartheta$ also changes over time, partially dependent on observations $o$, and in turn produces actions $a$. Finally, the relation between $o$, $\vartheta$ and $a$ is governed by some cognitive parameters $\Theta$, which differ from the states $\vartheta$ in that parameters $\Theta$ do not change within a given behavioural session. Note that $o$, $a$, $\vartheta$, and $\Theta$ can each be a set of multiple observations, actions, states and parameters, respectively, as well as single values. Finally, it is important to note that technically speaking, in this framework, observations $o$ and actions $a$ can denote *any* causal exchanges with the environment. In behavioural experiments, they often denote sensory inputs and motor actions, respectively. But observations $o$ can in principle include any variables that shape state updates and action selections, such as variables containing information about the experimental and broader context or the subject itself, and actions $a$ can in principle denote any measurable outcome, including reaction times or (neuro)physiological measurements.

-![im_action_loop](../images/percact_loop_3.svg)
+![im_action_loop](../images/percact_loop.svg)

 An *action model* $M_a$ is then a formal and computational model of how, at a single timestep $t$, cognitive states $\vartheta_t$ are updated and actions $a_t$ are generated, given observations $o_t$, some previous cognitive states $\vartheta_{t-1}$ and the cognitive parameters $\Theta$. A classic example of an action model, which will be used throughout this tutorial, is the Rescorla-Wagner reinforcement learning model (Wagner & Rescorla, 1972), with a Gaussian noise report action. Here, expectations $V$ about environmental outcomes are updated at each timestep based on observations $o$ and a learning rate $\alpha$: