Conversation

oliverchampion (Collaborator) commented Jul 29, 2025:

Describe the changes you have made in this PR

  • Added parallel computing and testing
  • Added volume fitting and testing
  • Added wrapper code
  • Added SUPER-IVIM-DC fit methods (by accident in this PR)

Link this PR to an issue [optional]

Fixes #ISSUE-NUMBER

Checklist

  • [x] Self-review of changed code
  • [x] Added automated tests where applicable
  • [ ] Update Docs & Guides

oliverchampion (Collaborator, Author):

This is now running and ready for review :)

etpeterson (Contributor) left a comment:

Quick look for now. Just a few comments - looking good.


##########
# code written by Oliver J Gurney-Champion
# code adapted from Matlab code by Eric Schrauben: https://github.com/schrau24/XCAT-ERIC
# This code generates a 4D IVIM phantom as a NIfTI file

def phantom(bvalue, noise, TR=3000, TE=40, motion=False, rician=False, interleaved=False, T1T2=True):
    download_data()
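
For reference, a minimal call could look like this (the b-values and noise level are purely illustrative, and per the header comment the function writes the phantom out as a NIfTI file):

import numpy as np

# Hypothetical acquisition settings, not taken from the repository
bvalues = np.array([0, 10, 50, 100, 500, 1000])
phantom(bvalues, noise=0.005, motion=False, rician=True)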
etpeterson (Contributor):

This downloads data during testing now, correct? Is that slow or burdensome?

oliverchampion (Collaborator, Author):

Yes and no. The volume, parallel, and AI tests simply take a long time, full stop. Downloading the data is relatively quick compared to that (less than a minute vs. hours).

I think there are two ways we can go here.

I think I could accelerate the volume fit by dramatically reducing the number of voxels; there is probably no need for high resolution.

For the parallel fit, the smaller I make the number of voxels/the volume, the less acceleration we get from going parallel and the harder it is to test. For small volumes, the serial fit typically goes way faster. That is why I ended up increasing the volume.

So either we can use a large volume (then the download time is negligible) and run long tests, or we can go for a small volume and ignore the parallel timing. In that case, precalculating the volume and saving it would be faster.

That said, the AI tests also take 5+ minutes per algorithm, and we must decide whether we want them run on every merge...

Either way, all building blocks are here now :)

oliverchampion (Collaborator, Author):

After discussion: this download only occurs once, so I think this is solved?

@@ -152,6 +156,83 @@ def test_bounds(bound_input, eng):
    assert passf, f"Fit still passes when initial guess f is out of fit bounds; potentially initial guesses not respected for: {name}"
    assert passDp, f"Fit still passes when initial guess Ds is out of fit bounds; potentially initial guesses not respected for: {name}" '''

def test_volume(algorithmlist, eng, threeddata):
etpeterson (Contributor):

Similar to the download comment, would it be good to tag these tests as something so people could skip the download during testing? Just a thought - maybe they deserve their own tags or none at all.
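
A minimal sketch of what such tagging could look like with pytest markers (the marker name `slow` is illustrative, not something the repo defines):

# pytest.ini
[pytest]
markers =
    slow: long-running tests that download data or fit whole volumes

# in the test module
import pytest

@pytest.mark.slow
def test_volume(algorithmlist, eng, threeddata):
    ...

Running pytest -m "not slow" would then skip the download-heavy tests while keeping the fast ones.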

oliverchampion (Collaborator, Author):

Let's discuss :)

    assert np.allclose(fit_result['D'], fit_result2['D'], atol=1e-4), "Results differ between parallel and serial"
    assert np.allclose(fit_result['f'], fit_result2['f'], atol=1e-2), "Results differ between parallel and serial"
    assert np.allclose(fit_result['Dp'], fit_result2['Dp'], atol=1e-2), "Results differ between parallel and serial"
    if time_parallel * 1.3 > time_serial:
etpeterson (Contributor):

Time can be tricky because it can vary based on system load. It's probably a good choice that it's not a requirement, but this message probably won't ever get seen. I wonder if there's some report we could write to - perhaps in the future - for algorithm profiling purposes?
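
One rough sketch of such a profiling report (the file name and record layout are invented for illustration):

import json
from pathlib import Path

def record_timing(algorithm, time_parallel, time_serial, report="timing_report.json"):
    # Append one timing record per algorithm so results accumulate across runs
    path = Path(report)
    records = json.loads(path.read_text()) if path.exists() else []
    records.append({"algorithm": algorithm,
                    "parallel_s": time_parallel,
                    "serial_s": time_serial})
    path.write_text(json.dumps(records, indent=2))

A hook like this could be called next to the timing check, instead of or in addition to the warning.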

oliverchampion (Collaborator, Author) commented Aug 8, 2025:

Good idea! Let's discuss. I did give it a personalised tag, so it is findable. However, it does not appear on the final report at the end of testing.

I've now tried to:

  • turn off all warnings
  • turn back on my specific warning
  • summarize warnings with -r w

Let's see what it does. This may be undesirable, as it suppresses all other warnings, but it will highlight the specific warnings we want.
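
Roughly, that configuration could look like this in pytest.ini (the message pattern is a placeholder, not the actual warning text):

[pytest]
addopts = -r w
filterwarnings =
    ignore
    always:.*parallel fit was slower.*

Later entries take precedence, so everything is ignored except warnings matching the pattern, and -r w prints a warnings summary at the end of the run.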

oliverchampion (Collaborator, Author):

I've reduced the data size, and the tests should now go much faster!

etpeterson (Contributor) left a comment:

Looks good to me

@@ -82,11 +83,17 @@ def ivim_fit(self, bvalues, signal):
        # print(Dp_prime)

        if np.any(np.asarray(Dp_prime) < 0) or not np.all(np.isfinite(Dp_prime)):
            print('Perfusion fit failed')
            warnings.warn('Perfusion fit failed',

etpeterson (Contributor):

This applies to all the edited files in src/original. I don't think we should touch these. If there's something wrong, it should be on the authors to fix. Or we just accept it as it is, and if it is flawed, then it should fail the test in question.

I also don't quite get why this was needed here. If the b-value inputs are not correct, they should be checked in the wrapper (something I have wanted to do for a while but never got around to).

oliverchampion (Collaborator, Author):

I think I changed this because the print kept writing output to my prompt. By handling it as a warning, I can suppress the output in my testing reports; they otherwise got very long, and it was hard to find the info I was looking for.

oliverchampion (Collaborator, Author):

Agreed that we should normally not touch them! @etpeterson, can you accept those changes to your source file?


        fit_results = self.IAR_algorithm.fit(signals)
        b0_index = np.where(bvalues == 0)[0][0]
        mask = signals[..., b0_index] > 0.01
Collaborator:

0.01 does in some way assume that the signal has been normalized to 1, right? Would ">0" be enough?

Collaborator:

This applies to src/standardized/OGC_AmsterdamUMC_Bayesian_biexp.py as well

oliverchampion (Collaborator, Author):

Yeah, the mask was added because fitting was faster with it, and I think both approaches were unhappy with empty voxels.
For the Bayesian approach, one would actually also want to remove noisy background voxels, as you don't want them contributing to the data-driven prior... But that is a different story, for which I don't think a general fix is possible.
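
To make the trade-off concrete, a small sketch (`signals` and `bvalues` as in the snippet above):

b0 = signals[..., np.where(bvalues == 0)[0][0]]
mask_normalized = b0 > 0.01  # assumes S(b=0) has been normalized to ~1
mask_nonzero = b0 > 0        # scale-free, but keeps noisy background voxels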

        fit_results = self.IAR_algorithm.fit(signals)
        b0_index = np.where(bvalues == 0)[0][0]
        mask = signals[..., b0_index] > 0.01
        fit_results = self.IAR_algorithm.fit(signals, mask=mask)
Collaborator:

Is mask a valid kwarg?

oliverchampion (Collaborator, Author):

Yes! Otherwise the testing would fail :)

"TCML_TechnionIIT_lsqlm",
"TCML_TechnionIIT_lsqtrf",
"TCML_TechnionIIT_lsq_sls_lm",
"TCML_TechnionIIT_SLS",
Collaborator:

Two algorithms from TCML are missing (BOBYQA and lsq_sls_trf).

oliverchampion (Collaborator, Author):

Yes, as far as I remember, they are not properly implemented in the source code.

    assert np.shape(fit_result['D'])[2] == np.shape(data)[2]
    # check if right variable is in right place
    assert np.nanmean(fit_result['D']) < np.nanmean(fit_result['Dp'])
    assert np.nanmean(fit_result['Dp']) < np.nanmean(fit_result['f'])
Collaborator:

Should this always be true?

oliverchampion (Collaborator, Author):

For the test data I provide, this should be true. This is predominantly testing whether we map the right output index to the right parameter.

Commit: also added some documentation to base
oliverchampion (Collaborator, Author):

@IvanARashid @oscarjalnefjord this should now be fixed.
Ivan, I also added documentation to the main functions in OSIPI_base. (Sorry, I probably should have made a new PR, but I missed the documentation in volume fitting, and from there it propagated to single-voxel fitting and OSIPI_base.)
Possibly we should add this for more functions...

oscarjalnefjord (Collaborator) left a comment:

Looks fine to me

oscarjalnefjord merged commit 57b2aa1 into main on Sep 17, 2025.
9 checks passed