On 12/18/22 19:01, Boris Steipe wrote:
Technically not a help question, but crucial to be aware of - especially for 
those of us in academia or otherwise teaching R. I am not aware of a suitable 
alternate forum. If this does not interest you, please simply ignore it; I 
already know that this may be somewhat OT.

Thanks.
------------------------------------------------------

You have very likely heard of ChatGPT, the conversational interface on top of 
the GPT-3 large language model, and that it can generate code. I thought it 
didn't do R - I was wrong. Here is a little experiment:
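
What follows is a minimal sketch with made-up data, not the verbatim 
transcript, of the two strategies involved:

  df <- data.frame(id = 1:4, val = c("a", "b", "a", "c"))

  # What I had in mind: duplicated() keeps the first occurrence of each value.
  df[!duplicated(df$val), ]                 # rows 1, 2, 4

  # An %in%-based strategy of the sort ChatGPT produced: it drops *every*
  # occurrence of a value that appears more than once, not just the repeats.
  dup_vals <- df$val[duplicated(df$val)]
  df[!(df$val %in% dup_vals), ]             # rows 2, 4 only

Both versions are syntactically fine; only running them shows that they answer 
different questions.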
Note that the strategy is quite different (e.g. using %in%, not duplicated()), 
and the interpretation of "last variable" is technically correct but not what 
I had in mind (strictly speaking, ChatGPT got that right).


Changing my prompts slightly resulted in it going for a dplyr solution instead, 
complete with %>% idioms etc. - again syntactically correct, but not giving me 
the fully correct results.
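
For flavour, a sketch of that dplyr-style answer (again hypothetical data, not 
the verbatim transcript):

  library(dplyr)

  df <- data.frame(id = 1:4, val = c("a", "b", "a", "c"))

  # Syntactically fine, but distinct() keeps the *first* row per value -
  # subtly wrong if the prompt asked about the last occurrence.
  df %>%
    distinct(val, .keep_all = TRUE)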

------------------------------------------------------

Bottom line: the AI's ability to translate natural-language instructions into 
code is astounding. The errors it makes are subtle, and probably not easy to 
fix if you don't already know what you are doing. And the way it can be 
"confidently incorrect" yet plausible makes those errors nearly impossible to 
detect unless you actually run the code (you may have noticed that when you 
read the code above).

Will our students use it? Absolutely.

Will they successfully cheat with it? That depends on the assignment. We 
probably need to _encourage_ them to use it rather than sanction it - but 
require them to attribute the AI, document their prompts, and identify their 
own, additional contributions.

Will it help them learn? When you are aware of the issues, it may be quite 
useful. It may be especially useful to teach them to specify their code 
carefully and completely, and to ask questions in the right way. Test cases are 
crucial.
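
For instance, a sketch of the kind of test case I mean, with a hypothetical 
dedup() standing in for the generated code under scrutiny:

  dedup <- function(x) x[!duplicated(x)]    # candidate, AI-generated code

  # Pin down the intended behaviour before trusting it:
  stopifnot(identical(dedup(c(1, 2, 1, 3)), c(1, 2, 3)))   # keeps first occurrence
  stopifnot(identical(dedup(character(0)), character(0)))  # survives empty input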

How will it affect what we do as instructors? I don't know. Really.

And the future? I am not pleased to extrapolate to a job market in which they 
compete with knowledge workers who work 24/7 without benefits, vacation pay, or 
even a salary. They'll need to rethink the value of their investment in an 
academic education. We'll need to rethink what we do to provide value above and 
beyond what AIs can do. (NB: all the arguments I hear about why humans will 
always be better etc. are easily debunked, but that's even more OT :-)

--------------------------------------------------------

If you have thoughts to share how your institution is thinking about academic 
integrity in this situation, or creative ideas how to integrate this into 
teaching, I'd love to hear from you.

*NEVER* let the AI mislead the students! ChatGPT gives you seemingly
sound but actually *wrong* code!

ChatGPT never understands the formal abstraction behind the code; it only
picks up the shallow text patterns (and the syntax rules) in the code. It
often produces code that looks correct but in fact yields wrong output.
Used for code completion it is okay (just like GitHub Copilot), since the
coder has to review and modify the code after getting the completion. But
if you want students to use ChatGPT to query information or to write code,
it is error-prone!
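
One concrete flavour of this (a sketch, not an actual ChatGPT transcript) - 
code that reads fine but fails on an edge case:

  # Looks plausible, but 1:length(x) is c(1, 0) when x is empty, so the loop
  # still runs; the function silently returns a zero-length result instead of
  # NaN (compare mean(numeric(0))). seq_along(x) avoids the trap.
  my_mean <- function(x) {
    total <- 0
    for (i in 1:length(x)) total <- total + x[i]
    total / length(x)
  }
  my_mean(numeric(0))    # numeric(0), not NaN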
