Conversation

JenniferHem
Collaborator

Issue:

Currently a pl.Trainer object can be initialized via `from lightning import pytorch as pl`, but also via `import pytorch_lightning as pl`. In Chemprop we use methods to get and set params. Unfortunately, a Trainer object is newly initialized whenever set or update params is called, and at that point an Accelerator has already been instantiated, which causes a problem because lightning requires a string. The "get device" function detects via `isinstance` whether a CPU or GPU accelerator was chosen and transforms it back into a string. This `isinstance` check, however, only works for the `lightning` import and fails if `pytorch_lightning` is used.

Solution:

To avoid adding `pytorch_lightning` as a dependency, we have enhanced the validation: if a user does not use `lightning` (but used `pytorch_lightning` instead), a corresponding ValueError is now raised.
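A minimal sketch of the described check (the function and error message are illustrative, not Chemprop's actual code). Since the point is to reject accelerators from `pytorch_lightning` without importing that package, this sketch inspects the accelerator class's module name rather than calling `isinstance` against lightning's classes; the actual implementation may differ:

```python
def get_device(accelerator) -> str:
    """Map an accelerator object back to the string lightning expects.

    Hypothetical sketch: only accelerators from the ``lightning`` package
    are recognized; objects from ``pytorch_lightning`` raise a ValueError.
    """
    module = type(accelerator).__module__
    if not module.startswith("lightning."):
        # Covers the pytorch_lightning import path described in the issue.
        raise ValueError(
            f"Accelerator must come from the 'lightning' package, got one "
            f"from '{module}'. Please use 'from lightning import pytorch as pl' "
            f"instead of 'import pytorch_lightning as pl'."
        )
    name = type(accelerator).__name__
    if name == "CPUAccelerator":
        return "cpu"
    if name == "CUDAAccelerator":
        return "gpu"
    raise ValueError(f"Unrecognized accelerator: {name}")
```

With this guard in place, a Trainer re-initialized during a set/update-params call receives a plain string ("cpu" or "gpu") again instead of an already-instantiated accelerator object from the wrong package.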
