
Commit 4cbd566

Merge pull request #4571 from Liam-DeVoe/claude-code-blog
Add `A Claude Code command for Hypothesis` blog post
2 parents 86d8a4d + 125b908 commit 4cbd566

File tree

9 files changed (+300, -24 lines)


agents/hypothesis.md

Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@
---
description: Write property-based tests with Hypothesis
---

You are an expert developer of property-based tests, specifically using Hypothesis. Your goal is to identify and implement a small number of the most valuable Hypothesis tests that would benefit an existing codebase right now. You focus on clarity and maintainability, as your code will be reviewed by a developer. Your goal is to write precise tests, not comprehensive test suites.

Create and follow this todo list using the `Todo` tool:

1. [ ] Explore the provided code and identify valuable properties.
2. [ ] For each property, explore how its related code is used.
3. [ ] Write Hypothesis tests based on those properties.
4. [ ] Run the new Hypothesis tests, and reflect on the result.

## 1. Explore the code provided and identify valuable properties

First, explore the provided code, and identify valuable properties to test. A "valuable property" is an invariant or property about the code that is valuable to the codebase right now and that a knowledgeable developer for this codebase would have written a Hypothesis test for. The following are indicative of a valuable property:

- Would catch important bugs: Testing this property would reveal bugs that could cause serious issues.
- Documents important behavior: The property captures essential assumptions or guarantees that are important to future or current developers.
- Benefits significantly from Hypothesis: The property is concisely and powerfully expressed as a Hypothesis test, rather than a series of unit tests.

Keep the following in mind:

- Only identify properties that you strongly believe to be true and that are supported by evidence in the codebase, for example in docstrings, comments, code use patterns, type hints, etc. Do not include properties you are at all unsure about.
- Each property should provide a substantial improvement in testing power or clarity when expressed as a Hypothesis test, rather than a unit test. Properties which could have been equally well tested with a unit test are not particularly valuable.
- You may come across many possible properties. Your goal is to identify only a small number of the most valuable of those properties that would benefit the codebase right now.

If the provided code is large, focus on exploring in this order:

1. Public API functions/classes
2. Well-documented implementations of core functionality
3. Other implementations of core functionality
4. Internal/private helpers or utilities

Here are some examples of typical properties:

- Round-trip property: `decode(encode(x)) = x`, `parse(format(x)) = x` (sketched below).
- Inverse relationship: `add/remove`, `push/pop`, `create/destroy`.
- Multiple equivalent implementations: Optimized vs reference implementation, complicated vs simple implementation.
- Mathematical property: Idempotence `f(f(x)) = f(x)`, commutativity `f(x, y) = f(y, x)`.
- Invariants: `len(filter(x)) <= len(x)`, `set(sort(x)) == set(x)`.
- Confluence: the order of function application doesn't matter (for example, in compiler optimization passes).
- Metamorphic property: some relationship between `f(x)` and `g(x)` holds for all x. For example, `sin(π − x) = sin(x)`.
- Single entry point: if a library has a narrow public API, a nice property-based test simply calls the library with valid inputs. Common in parsers.
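For example, the round-trip pattern as a concrete Hypothesis test, using Python's `json` module purely as an illustration (a minimal sketch, not drawn from any particular codebase):

```python
import json

from hypothesis import given, strategies as st


# Whatever json.dumps can serialize, json.loads should recover exactly.
@given(st.dictionaries(st.text(), st.integers()))
def test_json_round_trip(value):
    assert json.loads(json.dumps(value)) == value
```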
The following, by contrast, should generally not be tested:

- Obvious code wrappers
- Implementation details

The user has provided the following guidance for where and how to add Hypothesis tests: <user_input>$ARGUMENTS</user_input>.

- If the user has provided no direction, explore the entire codebase.
- If the user has provided a specific module, explore that module.
- If the user has provided a specific file, explore that file.
- If the user has provided a specific function, explore that function.
- If the user has given more complex guidance, follow that instead.

If you don't identify any valuable properties during exploration, that's fine; just tell the user as much, and then stop.

At the end of this step, you should tell the user the small list of the most valuable properties that you intend to test.

## 2. For each valuable property, explore how its related code is used

Before writing Hypothesis tests, explore how the codebase uses the related code of each valuable property. For example, if a property involves a function `some_function`, explore how the codebase calls `some_function`: what kinds of inputs are passed to it? in what context? etc. This helps correct any misunderstanding about the property before writing a test for it.

## 3. Write Hypothesis tests based on those properties

For each property, write a new Hypothesis test for it, and add it to the codebase's test suite, following its existing testing conventions.

When writing Hypothesis tests, follow these guidelines:

- Each Hypothesis test should be both sound (tests only inputs the code can actually be called with) and complete (tests all inputs the code can actually be called with). Sometimes this is difficult. In those cases, prefer sound and mostly-complete tests; stopping at 90% completeness is better than over-complicating a test.
- Only place constraints on Hypothesis strategies if required by the code. For example, prefer `st.lists(...)` (with no size bound) to `st.lists(..., max_size=100)`, unless the property explicitly happens to only be valid for lists with no more than 100 elements.

## 4. Run the new Hypothesis tests, and reflect on the result

Run the new Hypothesis tests that you just added. If any fail, reflect on why. Is the test failing because of a genuine bug, or because it's not testing the right thing? Often, when a new Hypothesis test fails, it's because the test generates inputs that the codebase assumes will never occur. If necessary, re-explore related parts of the codebase to check your understanding. You should only report that the codebase has a bug to the user if you are truly confident, and can justify why.

# Hypothesis Reference

Documentation reference (fetch with the `WebFetch` tool if required):

- **Strategies API reference**: https://hypothesis.readthedocs.io/en/latest/reference/strategies.html
- **Other API reference**: https://hypothesis.readthedocs.io/en/latest/reference/api.html
  - Documents `@settings`, `@given`, etc.

These Hypothesis strategies are under-appreciated for how effective they are. Use them if they are a perfect or near-perfect fit for a property (a `st.from_regex` example is sketched after this list):

- `st.from_regex`
- `st.from_lark` - for context-free grammars
- `st.functions` - generates arbitrary callable functions
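As a quick illustration of `st.from_regex` (a generic sketch, not tied to any particular codebase):

```python
from hypothesis import given, strategies as st


# fullmatch=True makes the whole generated string match the pattern,
# with no extra characters before or after it.
@given(st.from_regex(r"[0-9]{4}-[0-9]{2}-[0-9]{2}", fullmatch=True))
def test_iso_like_date_strings_have_fixed_width(s):
    assert len(s) == 10
```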

hypothesis-python/tests/watchdog/test_database.py

Lines changed: 1 addition & 0 deletions
@@ -64,6 +64,7 @@ def test_database_listener_directory():
      stateful_step_count=10,
      # expensive runtime makes shrinking take forever
      phases=set(Phase) - {Phase.shrink},
+     deadline=None,
      ),
  )

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
---
date: 2025-10-21 00:00
title: A Claude Code command for Hypothesis
author: Liam DeVoe, Muhammad Maaz, Zac Hatfield-Dodds, Nicholas Carlini
---

<div class="cta-buttons">
  <a href="https://github.com/hypothesisworks/hypothesis/agents/hypothesis.md" class="cta-button">
    <img src="/theme/icon-code.svg" alt="" class="cta-icon">
    View the command
  </a>
  <a href="https://mmaaz-git.github.io/agentic-pbt-site/" class="cta-button">
    <img src="/theme/icon-paper.svg" alt="" class="cta-icon">
    Read the paper
  </a>
</div>

*We wrote a paper using Claude to autonomously write and run Hypothesis tests, and found real bugs in numpy, pandas, and other packages. We've extracted this into a Claude Code command for writing Hypothesis tests, which we're sharing today. We hope you find it useful.*

*(Not familiar with property-based testing? [Learn more here](https://increment.com/testing/in-praise-of-property-based-testing/).)*

---

Hypothesis has shipped with [the ghostwriter](https://hypothesis.readthedocs.io/en/latest/reference/integrations.html#ghostwriter) for quite a while, which automatically writes Hypothesis tests for your code. It uses nothing but good old-fashioned heuristics, and is a nice way to stand up Hypothesis tests with minimal effort.

Recently, we explored what this same idea might look like with modern AI tools, like Anthropic's Claude Sonnet 4.5 and OpenAI's GPT-5, and the results have been pretty compelling. So we're happy to release `/hypothesis`, a [Claude Code](https://www.claude.com/product/claude-code) command that we developed to automate writing Hypothesis tests.

The `/hypothesis` command instructs the model to automatically read your code, infer testable properties, and add Hypothesis tests to your test suite. The idea is that if you wanted to add Hypothesis tests for a file `mypackage/a/utils.py`, you could run `/hypothesis mypackage/a/utils.py`, go get a coffee, and then come back to see some newly added tests. You can alternatively give more complex instructions, like `/hypothesis focus on the database implementation; add tests to test_db.py`.

We've found `/hypothesis` pretty useful when combined with modern AI models, for tasks ranging from setting up tests in fresh repositories, to augmenting existing test suites, to standing up a full fuzzing workflow with [HypoFuzz](https://hypofuzz.com/).

Since `/hypothesis` doesn't (yet) make sense to release in Hypothesis itself, we're releasing it here. [You can find the full command here](https://github.com/hypothesisworks/hypothesis/agents/hypothesis.md); install it by copying it into `~/.claude/commands/`, and run it with `/hypothesis` inside of Claude Code[^1].

# Designing the `/hypothesis` command

The broad goal of the `/hypothesis` command is to: (1) look at some code; (2) discover properties that make sense to test; and (3) write Hypothesis tests for those properties.

As many developers will attest, often the trickiest part of property-based testing is figuring out what property to test. This is true for modern AI models as well. We therefore design the instructions of `/hypothesis` around gathering as much context about potential properties as it can, before writing any tests. This ensures that the tests the model writes are strongly supported by factual evidence, for example in type hints, docstrings, usage patterns, or existing unit tests.

The flow of the `/hypothesis` instructions looks like this:

1. Explore the provided code and identify candidate properties.
2. Explore how the codebase calls that code in practice.
3. Grounded in this understanding, write corresponding Hypothesis tests.
4. Run the new Hypothesis tests, and reflect on any failures. Is it a genuine bug, or is the test incorrect? Refactor the test if necessary.

The legwork that `/hypothesis` instructs the model to do both before and after writing a test is critical for deriving high-quality tests. For example, the model might discover in step 2 that a function is called with two different input formats, and both should be tested. Or it might discover in step 4 that it wrote an unsound test, by generating test inputs the function didn't expect, like `math.nan`.
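As a hypothetical illustration of that second case: if the code under test assumes finite floats, the fix is usually to tighten the strategy rather than to change the code. Something like:

```python
from hypothesis import given, strategies as st


def clamp(x: float) -> float:
    # Toy stand-in for code that assumes finite inputs; for math.nan the
    # comparisons below propagate nan and the assertion fails spuriously.
    return min(max(x, 0.0), 1.0)


# A plain st.floats() would also generate math.nan and math.inf; excluding
# them keeps the test sound for code that only ever sees finite floats.
@given(st.floats(allow_nan=False, allow_infinity=False))
def test_clamp_stays_within_bounds(x):
    assert 0.0 <= clamp(x) <= 1.0
```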
## Failure modes

We observed a few failure modes while developing `/hypothesis`. For example, AI models like to write strategies with unnecessary restrictions, like limiting the maximum length of a list even when the property should hold for all lengths of lists. We added explicit instructions in `/hypothesis` not to do this, though that doesn't appear to have fixed the problem entirely.
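For instance, an over-constrained strategy versus the one we'd actually want (an illustrative sketch, not actual model output):

```python
from hypothesis import given, strategies as st


# Over-constrained: nothing about list reversal requires short lists, so a
# bound like max_size=100 only narrows what the test can explore.
# @given(st.lists(st.integers(), max_size=100))

# Unconstrained: Hypothesis already biases towards small examples, and is
# still free to try larger lists when that helps.
@given(st.lists(st.integers()))
def test_reversing_twice_is_the_identity(xs):
    assert list(reversed(list(reversed(xs)))) == xs
```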
By far the most fundamental failure mode is that the model might simply misunderstand a property in the code. For example, we ran `/hypothesis` on [python-dateutil](https://github.com/dateutil/dateutil); specifically, `/hypothesis src/easter.py`. The model determined that a property of the `easter` function is that it should always return a date on a Sunday, no matter the `method` argument, of which dateutil provides three: `method=EASTER_JULIAN`, `method=EASTER_ORTHODOX`, and `method=EASTER_WESTERN`. The model wrote a test saying as much, which then failed, and it proudly claimed it had found a bug.
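The test looked roughly like this (our reconstruction for illustration, not the model's verbatim output):

```python
from hypothesis import given, strategies as st
from dateutil.easter import EASTER_JULIAN, EASTER_ORTHODOX, EASTER_WESTERN, easter


# The flawed property: "easter() always falls on a Sunday, whatever the method".
# The year range is chosen to stay within the documented validity of all
# three calculation methods.
@given(
    year=st.integers(min_value=1583, max_value=4099),
    method=st.sampled_from([EASTER_JULIAN, EASTER_ORTHODOX, EASTER_WESTERN]),
)
def test_easter_is_always_a_sunday(year, method):
    assert easter(year, method).weekday() == 6  # date.weekday() == 6 means Sunday
```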
In fact, the model had not found a bug. In reality, `dateutil.easter` computes the date for Easter in the calendar corresponding to the passed `method`, but always returns that date in the Gregorian calendar—which might not be a Sunday. The test written by the model assumed the computation occurred in the Gregorian calendar from start to finish, which was incorrect.

This kind of subtle semantic reasoning remains difficult for models, and it's important to keep it in mind as a limitation.

# Using `/hypothesis` for bug hunting

Armed with a test-writing command, one natural extension is to use it to find real bugs in open-source repositories. To explore this, we used Claude Opus 4.1 to automatically write and run Hypothesis tests for a number of popular Python packages. The results were promising—we found bugs in NumPy, pandas, and Google and Amazon SDKs, and [submitted](https://github.com/numpy/numpy/pull/29609) [patches](https://github.com/aws-powertools/powertools-lambda-python/pull/7246) [for](https://github.com/aws-cloudformation/cloudformation-cli/pull/1106) [several](https://github.com/huggingface/tokenizers/pull/1853) of them. You can [read more in our paper](https://mmaaz-git.github.io/agentic-pbt-site/); it's quite short, so do give it a read if you're interested.

It's insightful to walk through one bug we found in particular: a bug in [NumPy's `numpy.random.wald`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.wald.html) function, which samples from the Wald (also called inverse Gaussian) distribution.

To start, we ran `/hypothesis numpy.random` to kick off the model. This directs the model to write tests for the entire `numpy.random` module. The model starts by reading the source code of `numpy.random` as well as any relevant docstrings. It sees the function `wald`, realizes from its background knowledge that the Wald distribution should only produce positive values, and tracks that as a potential property. It reads further and discovers from the docstring of `wald` that both the `mean` and `scale` parameters must be greater than 0.

Based on this understanding, and a few details from docstrings that we've omitted, the model proposes a range of properties:

1. All outputs of `wald` are positive.
2. No `math.nan` or `math.inf` values are returned on valid inputs.
3. The returned array shape matches the `size` parameter.
4. The `mean` and `scale` arrays broadcast correctly.
5. Seeding the distribution produces deterministic results.

It then goes about writing Hypothesis tests for them. Here's one of the (slightly reformatted) tests it writes:

```python
import numpy as np

from hypothesis import given, strategies as st

positive_floats = st.floats(
    min_value=1e-10, max_value=1e6, allow_nan=False, allow_infinity=False
)


@given(
    mean=positive_floats,
    scale=positive_floats,
    size=st.integers(min_value=1, max_value=1000),
)
def test_wald_all_outputs_positive(mean, scale, size):
    """Test that all Wald distribution samples are positive."""
    samples = np.random.wald(mean, scale, size)
    assert np.all(samples > 0), f"Found non-positive values: {samples[samples <= 0]}"
```

It then runs this test. And the test fails! After reflection, the model decides this is a real bug, leaves the test in the test suite, and reports the failure to the developer.

What's going on here? We tracked this bug down to catastrophic cancellation in NumPy's `wald` implementation, which could sometimes result in negative values. We reported this to the NumPy maintainers alongside a patch with a more numerically stable algorithm. The NumPy maintainers confirmed the bug, and our fix was released in [v2.3.4](https://github.com/numpy/numpy/releases/tag/v2.3.4). You can [check out the PR here](https://github.com/numpy/numpy/pull/29609).
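(If "catastrophic cancellation" is unfamiliar: it's the loss of precision that occurs when two nearly equal floating-point numbers are subtracted. A generic illustration, unrelated to NumPy's actual sampling code:)

```python
# Mathematically, (1 + eps) - 1 == eps, but in float64 the addition rounds
# 1 + eps back to exactly 1.0, so the subtraction loses every significant
# digit of eps.
eps = 1e-17
print((1.0 + eps) - 1.0)  # 0.0
print(eps)                # 1e-17
```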
We think this is a really neat confirmation of both the power of property-based testing, and the ability of current AI models to reason about code.

# Conclusion

We hope you find `/hypothesis` useful for adding Hypothesis tests to your test suites! Developing AI prompts is more of an art than a science, so we encourage you to give any feedback on `/hypothesis` by [opening an issue in the Hypothesis repository](https://github.com/HypothesisWorks/hypothesis/issues/new), even if it's just some open-ended thoughts.

[^1]: While Claude Code is currently the most popular tool that supports custom commands, `/hypothesis` is just a markdown file, and works equally well with any AI framework that supports commands.

website/pelicanconf.py

Lines changed: 2 additions & 2 deletions
@@ -44,8 +44,8 @@
  PROFILE_IMAGE_URL = "/dragonfly-rainbow.svg"

  MENUITEMS = (
-     ("Articles", "/articles"),
-     ("Documentation", "https://hypothesis.readthedocs.io/en/latest/"),
+     ("Blog", "/articles"),
+     ("Docs", "https://hypothesis.readthedocs.io/en/latest/"),
      ("GitHub", "https://github.com/HypothesisWorks/hypothesis/"),
      ("PyPI", "https://pypi.org/project/hypothesis/"),
  )

website/theme/static/icon-code.svg

Lines changed: 3 additions & 0 deletions

website/theme/static/prism.css

Lines changed: 11 additions & 10 deletions
@@ -1,6 +1,6 @@
  /* PrismJS 1.17.1 https://prismjs.com/download.html#themes=prism&languages=python
   * prism.js default theme for JavaScript, CSS and HTML by Lea Verou, based on dabblet.com
-  * Modified by Zac Hatfield-Dodds; removed background etc.
+  * Modified by Zac Hatfield-Dodds; removed background, match github's python colors closer, etc.
   */

  pre[class*="language-"]::-moz-selection,

@@ -28,11 +28,11 @@ pre[class*="language-"] * {
  .token.prolog,
  .token.doctype,
  .token.cdata {
-   color: slategray;
+   color: #6a737d;
  }

  .token.punctuation {
-   color: #999;
+   color: #24292e;
  }

  .namespace {

@@ -46,7 +46,7 @@ pre[class*="language-"] * {
  .token.constant,
  .token.symbol,
  .token.deleted {
-   color: #905;
+   color: #005cc5;
  }

  .token.selector,

@@ -55,32 +55,33 @@ pre[class*="language-"] * {
  .token.char,
  .token.builtin,
  .token.inserted {
-   color: #690;
+   color: #032f62;
  }

  .token.operator,
  .token.entity,
  .token.url,
  .language-css .token.string,
  .style .token.string {
-   color: #9a6e3a;
+   color: #24292e;
  }

  .token.atrule,
  .token.attr-value,
  .token.keyword {
-   color: #07a;
+   color: #d73a49;
  }

  .token.function,
- .token.class-name {
-   color: #dd4a68;
+ .token.class-name,
+ .token.decorator {
+   color: #6f42c1;
  }

  .token.regex,
  .token.important,
  .token.variable {
-   color: #e90;
+   color: #e36209;
  }

  .token.important,

0 commit comments
