Nightly dev to Dev #3
base: dev
Conversation
models/evaluator.py (outdated)
```python
from nltk.translate.bleu_score import sentence_bleu


def evaluation(translations, references):
```
Why not just use `gold` and `prediction` here instead of the `translations` and `references` variables?
`translations` refers to the translated sentences and `references` refers to the source sentences. Can you explain more about `gold` and `prediction`?
So based on your message, `prediction` is `translations` and `gold` is `references` (for the gold naming you can use `truth` instead).
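
A minimal sketch of the agreed renaming (just the signature; the body would stay as it is):

```python
def evaluation(prediction, truth):
    # prediction: the model's translated sentences (formerly `translations`)
    # truth: the reference sentences (formerly `references`)
    ...
```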
models/evaluator.py (outdated)
```python
print('Individual 4-gram: %f' % sentence_bleu(refer_token, trans_token, weights=(0, 0, 0, 1)))


# reference = [['i love cats'], ['i love hats']]
```
You can put this sample input and output inside the evaluator function!
The sample lines you are using, such as `# reference = [['i love cats'], ['i love hats']]` ..., please put them inside the evaluator function's docstring.
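
For instance, a minimal sketch of what moving the sample into the docstring could look like (the exact wording is an assumption):

```python
def evaluation(translations, references):
    """Print BLEU scores for the given translations.

    Example:
        reference = [['i love cats'], ['i love hats']]
        candidate = 'i love cat'
        evaluation(candidate, reference)
    """
```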
models/evaluator.py (outdated)
```python
from nltk.translate.bleu_score import sentence_bleu


def evaluation(translations, references):
```
What is the name of this evaluation? You can use its name here; for example, `bleu_evaluation`, not just the plain `evaluation` naming.
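
A one-line sketch of the suggested rename:

```python
def bleu_evaluation(translations, references):
    ...
```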
models/evaluator.py (outdated)
```python
    Returns:
        nothings for now, just print BLEU scores
    """
    print('translations=[%s], references=%s' % (translations, references))
```
Please add a `verbose` parameter to control whether to print or not! Printing all of those lists could be tricky.
For example:

```python
def evaluation(..., ..., verbose=False):
    if verbose:
        print("..............")
    ...
```
models/evaluator.py (outdated)
```python
for sen in references:
    refer_token.append(sen[0].split())

print('Individual 1-gram: %f' % sentence_bleu(refer_token, trans_token, weights=(1, 0, 0, 0)))
```
"Individual 1-gram" of what score? If this is BLEU, then say so in the print function!
```python
print('BLEU, Individual 1-gram: %f' % sentence_bleu(refer_token, trans_token, weights=(1, 0, 0, 0)))
```
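
The same labeling could be applied to the other n-gram lines as well; a compact sketch, assuming `refer_token` and `trans_token` are defined as in the surrounding code:

```python
# One weight tuple per individual n-gram order, matching the existing prints.
for n, w in enumerate([(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)], start=1):
    print('BLEU, Individual %d-gram: %f' % (n, sentence_bleu(refer_token, trans_token, weights=w)))
```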
models/model_utils.py (outdated)
```python
from preprocessing import create_dataset
from sklearn.model_selection import train_test_split

path_to_file = '/media/moslem/Private/Project/MTP/Machine-Translation/dataset/manythings/pes.txt'
```
Is this a valid path to a file in this repository? And does it need to be hard-coded here?
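
One way to avoid a machine-specific absolute path is to take it as a parameter with a repo-relative default; a sketch under the assumption that `main_proccess` is where the path gets used (the default value shown is hypothetical):

```python
def main_proccess(path_to_file='dataset/manythings/pes.txt'):
    # path_to_file: repo-relative path to the parallel corpus (hypothetical default)
    ...
```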
models/model_utils.py (outdated)
```python
convert(targ_lang_tokenizer, target_tensor_train[0])


main_proccess()
```
Remove this line.

Put documentation into all of your methods too! I saw that a few of them don't have it.
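
If the call is still needed for local runs, one common alternative (an assumption, not something requested above) is to guard it so that importing the module does not trigger it:

```python
# Only run when the file is executed directly, not when imported.
if __name__ == '__main__':
    main_proccess()
```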
models/evaluator.py (outdated)
```python
    candidate = 'i love cat'
    bleu_evaluation(candidate, truth)
    """
    print('translations=[%s], references=%s' % (prediction, truth))
```
I think you should add `verbose` to this print, but not to the other ones: `prediction` and `truth` could hold more than just one sentence each, so it can be annoying to print all of them.
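
A sketch of gating only this print behind `verbose`, matching the line quoted above:

```python
def bleu_evaluation(prediction, truth, verbose=False):
    if verbose:
        # `prediction` and `truth` may hold many sentences; print only on request.
        print('translations=[%s], references=%s' % (prediction, truth))
```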
models/evaluator.py (outdated)
```python
for sen in prediction:
    truth_token.append(sen[0].split())

if verbose:
```
No need for `verbose` on these prints.
Dear @HamedBabaei, I read your comments and pushed the new version. Thanks!
Dear @HamedBabaei, please check this merge request. Sorry for the delay.