todo update
parent 0c9454cdd4
commit 25fd67865d
main.py | 9 +++------
@@ -13,16 +13,13 @@ from gfun.generalizedFunnelling import GeneralizedFunnelling
 """
 TODO:
+- [!] logging
 - add documentations sphinx
-- zero-shot setup
+- [!] zero-shot setup
-- load pre-trained VGFs while retaining ability to train new ones (self.fitted = True in loaded? or smt like that)
-- test split in MultiNews dataset
-- when we load a model and change its config (eg change the agg func, re-train meta), we should store this model as a new one (save it)
 - FFNN posterior-probabilities' dependent
 - re-init langs when loading VGFs?
-- there is a mess about sigmoid in the Attention aggregator + and evaluation function (predict). We were applying sig() 2 times on the outputs (at pred and at eval)...
 - [!] loss of Attention-aggregator seems to be uncorrelated with Macro-F1 on the validation set!
-- aligner layer (suggestion by G.Puccetti)
+- [!] experiment with weight init of Attention-aggregator
 """