There would be much to say on those topics, but this is not the appropriate
venue. If it were, I'd start by noting that technology has brought wealth and
improvements, but very unequally, and, as a side effect, has become an
exponential multiplier of anthropogenic impacts, with hardly a single
indicator suggesting that it is bringing us solutions to the global
environmental crisis (CO2 keeps rising, ice caps are melting, deforestation
goes on, extinctions are on the rise - but we can still watch lions chasing
zebras on TV! - etc.).

None of those who wrote the papers I referenced is against AI as such.
Yet, to stick to the specific example - a fact, not a hysterical
preoccupation with some unforeseen future - some are not happy that AI
decides who to bomb and that, in the process, a private company makes money
in a way that (I hope) most users of its services would not be happy about,
if they were aware of it. Now that I am aware, I have made my decision.

All the best

Andrea

On Mon, 13 Apr 2026 at 11:32, Mauro Cavalcanti <[email protected]> wrote:

> The current "AI scare" reminds me of legislators afraid of allowing the
> Ford Model T in the streets of New York (and other cities) because that
> "abominable machine" would make chariots obsolete. "Ethical considerations"
> probably entered those discussions.
>
> And who does not remember the time, around the 1970s, when school
> teachers were afraid of pocket calculators because children would lose
> their ability to do simple arithmetic?
>
> And what about television, which was heralded as the destroyer of all
> culture, a "mesmerizing machine", and so on? Some of the same arguments
> made almost a century ago about TV now seem to be resurfacing (obviously
> in a more colorful way) in the current discussions about smartphones.
>
> Of course, many other such examples are available. Humankind seems to have
> a strange primal propensity to, at first, be afraid of its own
> technological creations.
>
> Cheers,
>
> --
> Dr. Mauro J. Cavalcanti
> E-mail: [email protected]
> GitHub: https://github.com/maurobio
> ORCID: https://orcid.org/0000-0003-2389-1902
> "Life is complex. It consists of real and imaginary parts."
>
>
>
> On Mon, 13 Apr 2026 at 04:52, alcardini <[email protected]> wrote:
>
>> Dear All,
>> a few weeks ago we briefly spoke about the use of AI in morphometrics.
>> In passing, we mentioned its environmental costs, which are serious and
>> will grow bigger. Of course, there are also serious ethical issues with
>> the use of these tools.
>>
>> As I was doing my weekly browsing of the recent 'non-morphometric'
>> literature, I spotted this in Current Biology:
>> https://www.cell.com/current-biology/abstract/S0960-9822(26)00364-7
>> https://doi.org/10.1016/j.cub.2026.03.044
>> The article raises a number of important issues. I was particularly
>> struck by a very recent example of the misuse of AI (a brief excerpt
>> from the article is below, and more on the case is easy to find in a
>> recent article by Rutger Bregman in The Guardian).
>>
>> I am sure many of you are better informed and already knew about this;
>> I did not.
>> Personally, I happily canceled my free account with that chat-bot and
>> will go on with a hopefully moderate use of the alternatives.
>> All the best
>>
>> Andrea
>>
>> "... As Avner Gvaryahu, a former Director of the Israeli human rights
>> organisation Breaking the Silence and now a researcher at the University of
>> Oxford, has explained in a recent essay, the US and Israel were able to
>> strike 1,000 targets during the first 24 hours of their attack because they
>> “relied on AI systems to generate, prioritise and rank the target list at a
>> speed no human team could replicate” (https://tinyurl.com/9f25bs38).
>> One of those targets was the elementary school at Minab, where around 170
>> children died. Although the role of AI in this strike has not been
>> confirmed officially, Gvaryahu writes that the building became a school ten
>> years ago and that the infrastructure in which the target selection
>> programmes operate has no reliable way of spotting this kind of error.
>> ... Gvaryahu calls for legislators especially in Europe to regulate these
>> companies as military contractors, rather than as technology providers for
>> whom the military is just another customer. This issue came to the fore
>> very briefly on February 27, when Anthropic asked to limit the use of its
>> products by the Pentagon in two cases, specifically ruling out mass
>> surveillance in the US and the deployment of weapons that kill without
>> human oversight. The Trump administration responded by blacklisting
>> Anthropic as a national security risk, while the competitor OpenAI, maker
>> of ChatGPT, took over the Pentagon contract.
>> In response to that, a movement called QuitGPT has called on users of the
>> AI programme ChatGPT to cancel their subscriptions in an effort to avoid
>> funding automated killing sprees in the Middle East."
>>
>

-- 
E-mail address: [email protected], [email protected]
WEBPAGE: https://sites.google.com/view/alcardini2/
or https://tinyurl.com/andreacardini
