Unsafe use of eval
In modeling_transnormer.py and utils.py, eval is used to parse environment variables.
Instead of:
some_option = eval(os.environ.get("some_option", default="False"))
I would recommend using something like:
some_option = os.environ.get("some_option", default="False").lower() in ["true", "yes", "y", "1"]
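If it helps, this could be factored into a small helper; the env_flag name below is purely illustrative, not something from the repo:

import os

def env_flag(name: str, default: str = "false") -> bool:
    # Parse a boolean-like environment variable without eval;
    # anything other than a recognized truthy spelling counts as False.
    return os.environ.get(name, default).strip().lower() in {"true", "yes", "y", "1"}

some_option = env_flag("some_option")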
Also, do_eval in particular is evaluated on every forward call for each NormLinearAttention layer. Is there a particular reason for this, or should it instead be a global?
This is just poor code; there should be a PR to improve it.
The reason for evaluating do_eval in every forward call is probably that the authors had some testing code they wanted to switch easily between evaluation and training for their own convenience.
It shouldn't be this way, though. In practice, the do_eval assignment should be replaced with a check derived from the user's earlier call to PyTorch's model.eval(), i.e. the model.training bool: https://discuss.pytorch.org/t/check-if-model-is-eval-or-train/9395
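A minimal sketch of that change, assuming the rest of the layer stays as-is (the attention paths here are only placeholders, not the repo's actual forward body):

import torch.nn as nn

class NormLinearAttention(nn.Module):
    def forward(self, x):
        # model.eval()/model.train() toggle self.training, so there is no need
        # to re-parse an environment variable on every forward call.
        if self.training:
            ...  # training-time attention path
        else:
            ...  # evaluation-time attention path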
Hello, thank you for your suggestion. We will optimize the code in the future. The do_eval flag is related to the attention calculation, and we will also update it in the future.