Dear All,
A few weeks ago we briefly spoke about the use of AI in morphometrics.
In passing, we mentioned the environmental problems, which are serious
and will only grow bigger. Of course, there are also serious ethical
issues with the use of these tools.

As I was doing my weekly browsing of the recent 'non-morphometric'
literature, I spotted this in Current Biology:
https://www.cell.com/current-biology/abstract/S0960-9822(26)00364-7
https://doi.org/10.1016/j.cub.2026.03.044
The article raises a number of important issues. I was particularly struck
by a very recent example of the misuse of AI (a brief excerpt from the
article is below; more on that is easy to find in a recent article by
Rutger Bregman in The Guardian).

I am sure many of you are better informed and already knew about this. I
did not.
Personally, I happily canceled my free account with that chatbot and will
carry on with a, hopefully, moderate use of alternatives.
All the best

Andrea

"... As Avner Gvaryahu, a former Director of the Israeli human rights
organisation Breaking the Silence and now a researcher at the University of
Oxford, has explained in a recent essay, the US and Israel were able to
strike 1,000 targets during the first 24 hours of their attack because they
“relied on AI systems to generate, prioritise and rank the target list at a
speed no human team could replicate” (https://tinyurl.com/9f25bs38).
One of those targets was the elementary school at Minab, where around 170
children died. Although the role of AI in this strike has not been
confirmed officially, Gvaryahu writes that the building became a school ten
years ago and that the infrastructure in which the target selection
programmes operate has no reliable way of spotting this kind of error.
... Gvaryahu calls for legislators especially in Europe to regulate these
companies as military contractors, rather than as technology providers for
whom the military is just another customer. This issue came to the fore
very briefly on February 27, when Anthropic asked to limit the use of its
products by the Pentagon in two cases, specifically ruling out mass
surveillance in the US and the deployment of weapons that kill without
human oversight. The Trump administration responded by blacklisting
Anthropic as a national security risk, while the competitor OpenAI, maker
of ChatGPT, took over the Pentagon contract.
In response to that, a movement called QuitGPT has called on users of the
AI programme ChatGPT to cancel their subscriptions in an effort to avoid
funding automated killing sprees in the Middle East."

-- 
You received this message because you are subscribed to the Google Groups 
"Morphmet" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/morphmet2/CAJ__j7PZbAD13ZZeE5KSpN%2Bg6y_uwKmcFwua8UnQYNnOJuBH0w%40mail.gmail.com.