Conversation

@jrutila commented Aug 29, 2025

There is a newer version, 1.3.1, too, but that shouldn't change too much. It will fix some async execution problems I am facing.

It complains that WARNING: behave 1.2.7.dev8 does not provide the extra 'toml'. What is it required for?

@jrutila (Author) commented Aug 29, 2025

Found a problem in this: the atomicity of DB transactions changes, for some reason. With behave=1.2.7.dev6, in a simple run, the database is reset for each scenario, as it should be (using the atomic transaction in Django's TestCase). But with behave=1.2.7.dev8 the database is not refreshed for each scenario.

@jrutila (Author) commented Aug 29, 2025

And the same thing happens with behave=1.3.1. So something changed in behave=1.2.7.dev7.

@jrutila (Author) commented Aug 29, 2025

I think the problem lies in environment.py. The before_scenario and after_scenario hooks are only called if my project's environment file contains functions named before_scenario and after_scenario. So, the code never reaches run_hook with the correct hook name, and thus doesn't call teardown_test.

Could it be that the way behave calls the hooks has changed?

@jenisys (Member) commented Aug 29, 2025

@jrutila

  • The behave hook-names have not changed.
  • The internal behave.runner.ModelRunner.run_hook() signature has changed: the context parameter was removed when Runner.run_hook_with_capture() was introduced. @bittner provided a modification to this monkey-patch to hook into behave's hook processing (AFAIK). Note that the signature of the hooks themselves stayed the same.
  • The internal ordering in Scenario.run() of when the before_scenario() hook is called, relative to the Formatter.scenario() call, has changed (AFAIK).
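A minimal sketch of the monkey-patch adaptation that the signature change above forces. The ModelRunner class here is a stand-in defined locally for illustration; real code would import behave.runner.ModelRunner instead, and the hook bodies are placeholders:

```python
# Stand-in for behave.runner.ModelRunner, purely to illustrate the
# monkey-patch pattern; real code would import the class from behave.
class ModelRunner:
    def __init__(self):
        self.calls = []

    def run_hook(self, name, *args):
        # New-style signature: no explicit `context` parameter; the
        # runner is expected to know its own context.
        self.calls.append(name)

# Classic monkey-patch pattern: keep a reference to the original
# method, add behavior, then delegate to it.
_original_run_hook = ModelRunner.run_hook

def patched_run_hook(self, name, *args):
    if name == "before_scenario":
        self.calls.append("custom-setup")  # project-specific setup goes here
    _original_run_hook(self, name, *args)

ModelRunner.run_hook = patched_run_hook
```

Before 1.2.7.dev7 the wrapper would have taken an explicit context argument (def patched_run_hook(self, name, context, *args)); dropping it is the adaptation described above.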

@bittner (Member) commented Aug 30, 2025
@bittner (Member) commented Aug 31, 2025

It complains that WARNING: behave 1.2.7.dev8 does not provide the extra 'toml'. What is it required for?

It used to be required for older pre-releases of Behave, up until 1.2.7.dev6, to support reading configuration from pyproject.toml. It is now included as an installation dependency by default (for Python versions <3.11, which don't have tomllib yet) and has thus been removed from the installation extras of the Behave package.

See behave/behave#1251 (comment) for a related discussion.

@bittner (Member) commented Aug 31, 2025

I think the problem lies in environment.py. The before_scenario and after_scenario hooks are only called if my project's environment file contains functions named before_scenario and after_scenario. So, the code never reaches run_hook with the correct hook name, and thus doesn't call teardown_test.

This is a very important observation! Thank you so much for digging this deep. 🫶

We need to find a way to fix this. We have failing tests proving the changed behavior, but I've not been able to figure out a solution for the problem yet.

Since fixtures were also involved somewhere down the road, I thought the key might be to call Behave's use_fixture, as hinted at by Jens in behave/behave#1221 (comment), but that's just a guess. Unfortunately, I lack the time and knowledge to work on this problem, but I opened PR #173 to make the issue more visible.

@jenisys could you suggest what to change on the monkey-patch code to make it work again?

@jenisys (Member) commented Aug 31, 2025

@bittner @jrutila

  • The before_all() hook is always called (if it is defined in the environment).
  • Therefore, when before_all() is called, check whether the before_scenario() and after_scenario() hooks exist in runner.hooks.
  • If the before_scenario() hook (and optionally the after_scenario() hook) does not exist, register your default hook function in the runner.hooks dictionary under the hook name.
  • The after_scenario() functionality can also be added by calling ctx.add_cleanup() in the before_scenario() hook, or by using a fixture there.
  • The fixture-based approach is somewhat cleaner (in my opinion).
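A sketch of the registration idea above. It assumes the runner is reachable from the context via the internal attribute context._runner (an implementation detail, so treat this access path as an assumption); the default hook bodies are placeholders for behave-django's actual setup/teardown:

```python
def _default_before_scenario(context, scenario):
    context.scenario_set_up = True   # stand-in for e.g. setup_test()

def _default_after_scenario(context, scenario):
    context.scenario_set_up = False  # stand-in for e.g. teardown_test()

def before_all(context):
    # behave exposes the runner on the context as `context._runner`
    # (internal attribute; this is an assumption, not a public API).
    hooks = context._runner.hooks
    # setdefault only registers the fallbacks when the user's
    # environment.py did not define the hooks itself.
    hooks.setdefault("before_scenario", _default_before_scenario)
    hooks.setdefault("after_scenario", _default_after_scenario)
```

With the hooks present in runner.hooks, should_run_hook() finds them and run_hook() is reached again for every scenario.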

POTENTIAL PROBLEM:

  • Is it ensured that a before_all() hook exists in the environment in all cases?
  • Otherwise, your primary hook part will not be executed.

SEE ALSO:

@bittner (Member) commented Aug 31, 2025

The code in behave-django assumes that run_hook is called unconditionally for all hooks, whether or not the related hook is defined by the user. On a side note, a should_run_hook function didn't exist prior to Behave v1.2.7.dev7.

IIUC, the logic in run_hook itself hasn't changed despite that, but should_run_hook is used in various places to avoid calling run_hook. Is it unfair to assume that this has some impact on the execution behavior?

@jenisys (Member) commented Aug 31, 2025

@bittner
The description above was how you may be able to fix the observed syndrome.
behave now mostly uses run_hook_with_capture(), which uses ModelRunner.should_run_hook() as an optimization before it calls run_hook().

Therefore, run_hook() is only called for before_scenario and after_scenario if these hooks were defined in the environment (as long as hook capturing is enabled).

Note that the capture-hook state does not apply to the before_all hook (SPECIAL CASE). Therefore, run_hook() will always be called for the before_all hook.
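A simplified paraphrase of the dispatch just described (an illustration of the described behavior, not behave's actual source). The point is that a run_hook monkey-patch is never reached for undefined hooks, except for before_all:

```python
def should_run_hook(hooks, hook_name):
    # Only run hooks the user actually defined in environment.py.
    return hook_name in hooks

def run_hook_with_capture(hooks, hook_name, *args, capture_hooks=True):
    # Returns True when run_hook() would be reached, to make the
    # skipping behavior visible.
    if (capture_hooks
            and hook_name != "before_all"          # SPECIAL CASE: always runs
            and not should_run_hook(hooks, hook_name)):
        return False  # a run_hook monkey-patch never sees this call
    hook = hooks.get(hook_name)
    if hook:
        hook(*args)
    return True
```

This is why behave-django's patched run_hook stopped firing for before_scenario/after_scenario once the guard was introduced, while before_all kept working.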
