[Stable documentation](https://ComputationalPsychiatry.github.io/ActionModels.jl/stable/)
[Dev documentation](https://ComputationalPsychiatry.github.io/ActionModels.jl/dev/)
[CI](https://github.com/ComputationalPsychiatry/ActionModels.jl/actions/workflows/CI_full.yml?query=branch%3Amain)
[Coverage](https://codecov.io/gh/ComputationalPsychiatry/ActionModels.jl)
[License](https://www.gnu.org/licenses/)
[Aqua.jl](https://github.com/JuliaTesting/Aqua.jl)

# Welcome to ActionModels!

ActionModels is a Julia package for computational modelling of cognition and behaviour.
It can be used to fit cognitive models to data, as well as to simulate behaviour.
ActionModels allows for easy specification of hierarchical models, as well as generalized linear regressions of model parameters, using standard LMER syntax.
ActionModels makes it easy to specify new models, but also contains a growing library of premade models, and can easily be extended to include more complex model families.
The package is designed to have functions for every step of a classic cognitive modelling workflow, such as parameter recovery, predictive checks, and extracting trajectories of cognitive states for further analysis.
Inspired by packages like brms and HBayesDM, ActionModels is designed to provide a flexible but intuitive and easy-to-use interface for a field that is otherwise only accessible to technical experts. It also aims to facilitate thorough and fast development, testing and application of models.
Under the hood, ActionModels relies on Turing.jl, Julia's powerful framework for probabilistic modelling. However, because it builds on Julia's native differentiability, users can easily specify custom models without having to engage directly with Turing's API.
ActionModels is continuously being developed and optimised within the constraints of cognitive modelling. It allows for parallelizing models across experimental sessions, and can use Turing's composite samplers to estimate both continuous and discrete parameters.
The documentation covers all three main components of ActionModels: defining cognitive models, fitting them to data, and simulating behaviour, as well as suggestions for debugging models. It also includes a theory section with an introduction to computational cognitive modelling and the conceptual framework that ActionModels is built on.
Finally, the CONTRIBUTING.md file describes how to extend or contribute to ActionModels to include new models.

# Getting Started

First we load the ActionModels package:

````julia
using ActionModels
````

We can now quickly define a cognitive model. We write a function that describes the action selection process in a single timestep.
Here we create the classic Rescorla-Wagner model, with a Gaussian-noise report as action:

````julia
function rescorla_wagner(attributes::ModelAttributes, observation::Float64)
    #Read in parameters and states
    parameters = load_parameters(attributes)
    states = load_states(attributes)

    α = parameters.learning_rate
    β = parameters.action_noise
    Vₜ₋₁ = states.expected_value

    #The Rescorla-Wagner update rule updates the expected value V
    #based on the observation and the learning rate α
    Vₜ = Vₜ₋₁ + α * (observation - Vₜ₋₁)

    #The updated expected value is stored to be accessed on the next timestep
    update_state!(attributes, :expected_value, Vₜ)

    #The probability distribution for the action on this timestep
    #is a Gaussian with the expected value V as mean, and the noise parameter β as standard deviation
    action_distribution = Normal(Vₜ, β)

    return action_distribution
end;
````
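Stripped of the ActionModels bookkeeping, the update rule above is simple exponential smoothing: the expected value moves a fraction α towards each new observation. A minimal standalone sketch in plain Julia (the helper names here are illustrative, not part of the ActionModels API) shows its convergence behaviour:

````julia
#The Rescorla-Wagner update rule in isolation
rw_update(V, observation, α) = V + α * (observation - V)

#With a constant observation, repeated updates converge towards it
function simulate_updates(V0, observation, α, n)
    V = V0
    for _ in 1:n
        V = rw_update(V, observation, α)
    end
    return V
end

simulate_updates(0.0, 1.0, 0.5, 20)  # ≈ 1.0
````

A higher learning rate means faster convergence but more sensitivity to noise in the observations.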

We now create the model object.
We first define the attributes of the Rescorla-Wagner model. This includes its three parameters, the expected value state, the observation and the action.
Then we use the `ActionModel` constructor to create the model object.

````julia
parameters = (
    #The learning rate, with a default value of 0.1
    learning_rate = Parameter(0.1),
    #The action noise, with a default value of 1
    action_noise = Parameter(1),
    #And the initial expected value V₀, with a default value of 0
    initial_value = InitialStateParameter(0, :expected_value),
)
states = (;
    #The expected value V, which is updated on each timestep
    expected_value = State(),
)
observations = (;
    #The observation, which is passed to the model on each timestep and used to update V
    observation = Observation()
)
actions = (;
    #The report action, which reports the expected value with Gaussian noise
    report = Action(Normal)
)

action_model = ActionModel(
    rescorla_wagner,
    parameters = parameters,
    states = states,
    observations = observations,
    actions = actions,
)
````

````
-- ActionModel --
Action model function: rescorla_wagner
Number of parameters: 3
Number of states: 1
Number of observations: 1
Number of actions: 1
````
| 101 | + |
| 102 | +We can now read in a dataset. In this example, we will use a simple hand-created dataset, where three participants each have stated predictions after each of 6 observations, under some treatment condition as well as in a control condition. |
| 103 | + |
| 104 | +````julia |
| 105 | +using DataFrames |
| 106 | + |
| 107 | +data = DataFrame( |
| 108 | + observations = repeat([1.0, 1, 1, 2, 2, 2], 6), |
| 109 | + actions = vcat( |
| 110 | + [0, 0.2, 0.3, 0.4, 0.5, 0.6], |
| 111 | + [0, 0.5, 0.8, 1, 1.5, 1.8], |
| 112 | + [0, 2, 0.5, 4, 5, 3], |
| 113 | + [0, 0.1, 0.15, 0.2, 0.25, 0.3], |
| 114 | + [0, 0.2, 0.4, 0.7, 1.0, 1.1], |
| 115 | + [0, 2, 0.5, 4, 5, 3], |
| 116 | + ), |
| 117 | + id = vcat( |
| 118 | + repeat(["A"], 6), |
| 119 | + repeat(["B"], 6), |
| 120 | + repeat(["C"], 6), |
| 121 | + repeat(["A"], 6), |
| 122 | + repeat(["B"], 6), |
| 123 | + repeat(["C"], 6), |
| 124 | + ), |
| 125 | + treatment = vcat(repeat(["control"], 18), repeat(["treatment"], 18)), |
| 126 | +) |
| 127 | + |
| 128 | +show(data) |
| 129 | +```` |
| 130 | + |
| 131 | +```` |
| 132 | +36×4 DataFrame |
| 133 | + Row │ observations actions id treatment |
| 134 | + │ Float64 Float64 String String |
| 135 | +─────┼────────────────────────────────────────── |
| 136 | + 1 │ 1.0 0.0 A control |
| 137 | + 2 │ 1.0 0.2 A control |
| 138 | + 3 │ 1.0 0.3 A control |
| 139 | + 4 │ 2.0 0.4 A control |
| 140 | + 5 │ 2.0 0.5 A control |
| 141 | + 6 │ 2.0 0.6 A control |
| 142 | + 7 │ 1.0 0.0 B control |
| 143 | + 8 │ 1.0 0.5 B control |
| 144 | + ⋮ │ ⋮ ⋮ ⋮ ⋮ |
| 145 | + 30 │ 2.0 1.1 B treatment |
| 146 | + 31 │ 1.0 0.0 C treatment |
| 147 | + 32 │ 1.0 2.0 C treatment |
| 148 | + 33 │ 1.0 0.5 C treatment |
| 149 | + 34 │ 2.0 4.0 C treatment |
| 150 | + 35 │ 2.0 5.0 C treatment |
| 151 | + 36 │ 2.0 3.0 C treatment |
| 152 | + 21 rows omitted |
| 153 | +```` |
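As an aside, the repeated `id` and `treatment` columns can be built more compactly with the `inner` and `outer` keywords of Base's `repeat`; this is an equivalent construction, shown purely for illustration:

````julia
#Each of A, B, C repeated 6 times in a row, and that whole block twice
id = repeat(["A", "B", "C"], inner = 6, outer = 2)
#The first 18 rows are control, the last 18 treatment
treatment = repeat(["control", "treatment"], inner = 18)
````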
| 154 | + |
| 155 | +We can now create a model for estimating parameters hierarchically for each participant. |
| 156 | +We make a regression model where we estimate how much the learning rate and action noise differ between treatment conditions. |
| 157 | +We include a random intercept for each participant, making this a hierarchical model. |
| 158 | +The initial value parameter is not estimated, and is fixed to it's default: 0. |
| 159 | + |
| 160 | +````julia |
| 161 | +population_model = [ |
| 162 | + Regression(@formula(learning_rate ~ treatment + (1 | id)), logistic), #use a logistic link function to ensure that the learning rate is between 0 and 1 |
| 163 | + Regression(@formula(action_noise ~ treatment + (1 | id)), exp), #use an exponential link function to ensure that the action noise is positive |
| 164 | +] |
| 165 | + |
| 166 | +model = create_model( |
| 167 | + action_model, |
| 168 | + population_model, |
| 169 | + data; |
| 170 | + action_cols = :actions, |
| 171 | + observation_cols = :observations, |
| 172 | + session_cols = [:id, :treatment], |
| 173 | +) |
| 174 | +```` |
| 175 | + |
| 176 | +```` |
| 177 | +-- ModelFit object -- |
| 178 | +Action model: rescorla_wagner |
| 179 | +Linear regression population model |
| 180 | +2 estimated action model parameters, 6 sessions |
| 181 | +Posterior not sampled |
| 182 | +Prior not sampled |
| 183 | +
|
| 184 | +```` |
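The link functions deserve a brief note: the regression's linear predictor (intercept + treatment effect + random intercept) can take any real value, so it is passed through an inverse-link to land in the parameter's valid range. A plain-Julia sketch of the idea (the `logistic` here is a local definition for illustration, not the ActionModels export):

````julia
#Inverse-link functions map an unconstrained linear predictor into parameter space
logistic(x) = 1 / (1 + exp(-x))

η = -1.2      #a hypothetical linear predictor value
logistic(η)   #learning rate, constrained to (0, 1)
exp(η)        #action noise, constrained to (0, ∞)
````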
| 185 | + |
| 186 | +We can now fit the model to the data: |
| 187 | + |
| 188 | +````julia |
| 189 | +#Load statsplots for plotting results |
| 190 | +using StatsPlots |
| 191 | + |
| 192 | +#Fit the model to the data |
| 193 | +chns = sample_posterior!(model) |
| 194 | +#We can plot the estimated parameters |
| 195 | +plot(chns[[Symbol("learning_rate.β[1]"), Symbol("learning_rate.β[2]"), Symbol("action_noise.β[1]"), Symbol("action_noise.β[2]")]]) |
| 196 | +```` |
| 197 | + |
| 198 | + |
| 199 | +We can extract the estimated parameters for each participant, and summarize it as a dataframe for further analysis: |
| 200 | + |
| 201 | +````julia |
| 202 | +#Extract the full distribution of parameters for each participant |
| 203 | +parameters_per_session = get_session_parameters!(model) |
| 204 | +#Populate a dataframe with the median of each posterior distribution |
| 205 | +summarized_parameters = summarize(parameters_per_session, median) |
| 206 | + |
| 207 | +show(summarized_parameters) |
| 208 | +#TODO: plot |
| 209 | +```` |
| 210 | + |
| 211 | +```` |
| 212 | +6×4 DataFrame |
| 213 | + Row │ id treatment action_noise learning_rate |
| 214 | + │ String String Float64 Float64 |
| 215 | +─────┼──────────────────────────────────────────────── |
| 216 | + 1 │ A control 0.0540497 0.0830818 |
| 217 | + 2 │ B control 0.1761 0.322529 |
| 218 | + 3 │ C control 2.23036 0.842322 |
| 219 | + 4 │ A treatment 0.0370277 0.0381213 |
| 220 | + 5 │ B treatment 0.122187 0.17241 |
| 221 | + 6 │ C treatment 1.53131 0.701624 |
| 222 | +```` |
| 223 | + |
| 224 | +We can also extract the estimated value of V at each timestep, for each participant: |
| 225 | + |
| 226 | +````julia |
| 227 | +#Extract the estimated trajectory of V |
| 228 | +state_trajectories = get_state_trajectories!(model, :expected_value) |
| 229 | +#Summarize the trajectories |
| 230 | +summarized_trajectories = summarize(state_trajectories, median) |
| 231 | + |
| 232 | +show(summarized_trajectories) |
| 233 | +#TODO: plot |
| 234 | +```` |
| 235 | + |
| 236 | +```` |
| 237 | +42×4 DataFrame |
| 238 | + Row │ id treatment timestep expected_value |
| 239 | + │ String String Int64 Float64? |
| 240 | +─────┼───────────────────────────────────────────── |
| 241 | + 1 │ A control 0 0.0 |
| 242 | + 2 │ A control 1 0.0830818 |
| 243 | + 3 │ A control 2 0.159261 |
| 244 | + 4 │ A control 3 0.229111 |
| 245 | + 5 │ A control 4 0.37624 |
| 246 | + 6 │ A control 5 0.511145 |
| 247 | + 7 │ A control 6 0.634841 |
| 248 | + 8 │ B control 0 0.0 |
| 249 | + ⋮ │ ⋮ ⋮ ⋮ ⋮ |
| 250 | + 36 │ C treatment 0 0.0 |
| 251 | + 37 │ C treatment 1 0.701624 |
| 252 | + 38 │ C treatment 2 0.910971 |
| 253 | + 39 │ C treatment 3 0.973436 |
| 254 | + 40 │ C treatment 4 1.6937 |
| 255 | + 41 │ C treatment 5 1.90861 |
| 256 | + 42 │ C treatment 6 1.97273 |
| 257 | + 27 rows omitted |
| 258 | +```` |
| 259 | + |
| 260 | +Finally, we can also simulate behaviour using the model. |
| 261 | +First we instantiate an Agent object, which produces actions according to the action model. |
| 262 | +Additionally, we can specify which states to save in the history of the agent. |
| 263 | + |
| 264 | +````julia |
| 265 | +#Create an agent object |
| 266 | +agent = init_agent(action_model, save_history = [:expected_value]) |
| 267 | +```` |
| 268 | + |
| 269 | +```` |
| 270 | +-- ActionModels Agent -- |
| 271 | +Action model: rescorla_wagner |
| 272 | +This agent has received 0 observations |
| 273 | +
|
| 274 | +```` |
| 275 | + |
| 276 | +We can set parameter values for the agent, and simulate behaviour for some set of observations |
| 277 | + |
| 278 | +````julia |
| 279 | +#Set the parameters of the agent |
| 280 | +set_parameters!(agent, (learning_rate = 0.8, action_noise = 0.1)) |
| 281 | + |
| 282 | +#Create an increasing set of observations with some noise |
| 283 | +observations = collect(0:0.1:2) .+ randn(21) * 0.1 |
| 284 | + |
| 285 | +#Simulate behaviour |
| 286 | +simulated_actions = simulate!(agent, observations) |
| 287 | + |
| 288 | +#Plot the change in expected value over time |
| 289 | +plot(agent, :expected_value, label = "expected value", ylabel = "Value") |
| 290 | +plot!(observations, linetype = :scatter, label = "observation") |
| 291 | +plot!(simulated_actions, linetype = :scatter, label = "action") |
| 292 | +title!("Change in expected value over time") |
| 293 | +```` |
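Conceptually, `simulate!` just iterates the action model over the observations: update the expected value, then draw a noisy report from the action distribution. A hypothetical standalone re-implementation for this particular model, using only the standard library (`V + β * randn()` in place of sampling from a `Normal`):

````julia
using Random

#A standalone sketch of what simulate! does for the Rescorla-Wagner model
function simulate_rescorla_wagner(observations; α = 0.8, β = 0.1, V0 = 0.0)
    V = V0
    actions = Float64[]
    for o in observations
        V = V + α * (o - V)              #Rescorla-Wagner update
        push!(actions, V + β * randn())  #noisy report of the expected value
    end
    return actions
end

Random.seed!(1)
observations = collect(0:0.1:2) .+ randn(21) * 0.1
simulated = simulate_rescorla_wagner(observations)
````

With a high learning rate and low action noise, the simulated actions should closely track the observations, as in the plot above.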

---

*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*