Hi Mikel,
Thank you for your appreciation!

I have built a web-based GUI for the basic functionality of the toolkit.
I have added the following features.
1. Users can add new test cases.
2. 'View Records' shows the status of the test sentences: which of them
have been evaluated, and at which hint level.
3. The Results page contains an analysis of the evaluated test cases. For
each case, it shows a table of the masked words and the corresponding user
guesses, the hint level, accuracy, and other details.
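
As an aside, the per-case accuracy on the Results page could be computed
along these lines (a minimal sketch; `cloze_accuracy` is a hypothetical
helper name, not the toolkit's actual code):

```python
def cloze_accuracy(masked_words, guesses):
    """Return the fraction of user guesses that match the masked words.

    masked_words and guesses are parallel lists of strings; matching is
    case-insensitive and ignores surrounding whitespace.
    """
    if not masked_words:
        return 0.0
    hits = sum(1 for word, guess in zip(masked_words, guesses)
               if guess.strip().lower() == word.strip().lower())
    return hits / len(masked_words)

print(cloze_accuracy(["cat", "sat"], ["Cat", "stood"]))  # 0.5
```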

I have hosted it on pythonanywhere.com; you can have a look at the GUI at
https://niks.pythonanywhere.com/guitookitcodingchallenge/default/index

The 'Site Building' option can be accessed with the administrator password
'toolkit' (without the quotes). The main files are index, clozetest,
thankyou, viewresults, addsentences, and viewRecords, with their
corresponding functions in default.py; db.py specifies the databases. I
have built it using the open-source web2py Python framework.

The code is also hosted on github.com:
https://github.com/binayneekhra/guitoolkitcodingchallenge

I have a few questions/observations:

1. I think that some options should be given only to privileged users
   (e.g. adding a new test case, viewing results, modifying database
   entries). Am I correct?

2. "make it possible to randomly offer different hints to a user"
   Doesn't this undermine the purpose of the assimilation evaluation?
   If we give hints to the users, it may be difficult to measure how
   helpful the MT systems are, or to compare them. If we do show hints,
   should they be synonyms of the right answer, or something else?
   Please correct me if I am wrong.

3. What does 'controlling the length of a user session' mean?
   Do we put a time limit on the user to complete the evaluation?

4. For a given test case, there are four levels of hints (or more if we are
comparing different MT systems). Should a test be repeated for another user?
(E.g. for a given sentence pair, the hint level 'No Hint', and other
parameters such as the percentage of masked words: if a test with these
settings has already been performed, should the same test be given to
another user?) My opinion is that once all possible tests have been
performed, we can start repeating them.
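
To make this policy concrete, here is a rough sketch of how the next test
could be chosen (the function and parameter names are my own hypothetical
ones, and I assume completed tests are tracked as (sentence, hint,
mask-percentage) tuples):

```python
import itertools
import random

def next_test(sentences, hint_levels, mask_percents, done):
    """Pick a (sentence, hint level, mask %) combination that has not yet
    been performed; once every combination has been performed, fall back
    to repeating tests, chosen at random."""
    all_combos = list(itertools.product(sentences, hint_levels, mask_percents))
    remaining = [c for c in all_combos if c not in done]
    pool = remaining if remaining else all_combos
    return random.choice(pool)
```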

5. In my opinion, the end users and toolkit admins are among the most
important resources we have, so usability should be given considerable
weight. I am thinking of adding visual results for the tests performed
(e.g. bar charts, pie charts), and side-by-side charts for the MT systems
being compared.

6. If we mask words based on POS tags, would it be a good idea to integrate
the POS tagger component of Apertium for this task? Alternatively, POS tags
for the reference translation could be supplied when adding new test cases,
for perhaps better accuracy and to avoid the load of a POS tagger module.
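
Assuming the second option (tags supplied when the test case is added),
POS-based masking could look roughly like this; `mask_by_pos` is a
hypothetical helper, and the tag names are Apertium-style examples:

```python
def mask_by_pos(tagged_sentence, pos_to_mask, blank="____"):
    """Mask the words whose POS tag is in pos_to_mask.

    tagged_sentence is a list of (word, tag) pairs, e.g. supplied when a
    test case is added. Returns (masked token list, masked-word answers).
    """
    masked_tokens, answers = [], []
    for word, tag in tagged_sentence:
        if tag in pos_to_mask:
            masked_tokens.append(blank)
            answers.append(word)
        else:
            masked_tokens.append(word)
    return masked_tokens, answers

tokens, answers = mask_by_pos(
    [("The", "det"), ("cat", "n"), ("sleeps", "vblex")], {"n", "vblex"})
# tokens -> ["The", "____", "____"], answers -> ["cat", "sleeps"]
```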

I am working on my proposal and will be able to submit it soon.
If possible, your feedback on the toolkit and on how I should proceed
would help me set my goals properly.

I posted a few questions and observations in the first mail of this thread.
If you haven't seen them already, I would appreciate your comments on them;
pardon me if you have already gone through them.

cc: Francis Tyers

Thank you for your time,
Binay Neekhra
irc - niks




On Sat, Mar 15, 2014 at 2:37 PM, Mikel Forcada <[email protected]> wrote:

>  Thanks a million Binay!
>
> First, you can call me just Mikel.
>
> This is a very nice piece of software. It works, and I think the idea
> would to go on (Fran and Jim may give suggestions) and
>
>
>    1. turn it into a GUI (I would go for something like a web-based GUI)
>    2. make it possible for the user to fill the holes inside the sentence
>    and move around with mouse or cursor to correct or change
>    3. time how long it takes for a user to fill the holes
>    4. make it possible to randomly offer different hints to a user
>    5. controlling the length of a user session
>     6. making it possible to compare translations from, e.g. Google
>     7. make the program completely configurable (for instance, the
>    percentage of holes, etc.)
>    8. etc...
>
> You have a bunch of ideas to write a nice proposal!
>
> All the best
>  Mikel
>
> Al 03/15/2014 06:15 AM, En/na Binay Neekhra ha escrit:
>
>   Hi Prof. Forcada,
>  I apologize for inconvenience caused to you. I realized that there are
> better ways.
>
>  Please find the attached .zip file which contains files related to
> coding challenge.
>
> I have also hosted it github. Here is the link
>
> https://github.com/binayneekhra/coding-challege-for-Assimilation-toolkit.git
>
>  Thanks,
> -Binay
>
>
> --
> Mikel L. Forcada (http://www.dlsi.ua.es/~mlf/)
> Departament de Llenguatges i Sistemes InformĂ tics
> Universitat d'Alacant
> E-03071 Alacant, Spain
> Phone: +34 96 590 9776
> Fax: +34 96 590 9326
>
>
>
> ------------------------------------------------------------------------------
> Learn Graph Databases - Download FREE O'Reilly Book
> "Graph Databases" is the definitive new guide to graph databases and their
> applications. Written by three acclaimed leaders in the field,
> this first edition is now available. Download your free book today!
> http://p.sf.net/sfu/13534_NeoTech
> _______________________________________________
> Apertium-stuff mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/apertium-stuff
>
>