To comment a bit on the article:

>What if a high level programming description language was developed. Note I did not say programming language.
>This description language would allow you to “describe” what you needed to do and not how to do it (as
>discussed before). This draft description would then be presented to an AI based clarifier which would examine
>the description, look for inconsistencies or missing information and work with the programmer to create a formal
>description of the problem. At that point the description is turned over to a really smart compiler that could target
>a particular hardware platform and produce the needed optimized binaries.
>Perhaps a GA could be thrown in to help optimize everything.

The typical PhD way of approaching a problem that needs solving:

Suppose you need to solve a huge problem A on a big parallel machine.
You throw problem A into a black box B, which gives output A'.

Now our fantastic new brilliant method/algorithm, which is what the paper is actually about, gets applied to A',
and the problem is declared solved.

I see that scenario a lot, in all kinds of variations.

A great example of it is:

We invent an algorithm f(x).

Now in our program we have
   if (magicfunction(x) && f(x))
       printf("problem solved, yippee, I got my research done.\n");

They test it on one case C and shout victory.
That is the typical AI way of doing things. There are so few AI guys who do more than fill 50 pages of paper a year with ideas that stay untested; and because they never test, their understanding of the problem never advances, and the next guy shows up with the same untested solution. So the guy above, who actually is *doing* something and testing something, already gets cheered for it (with some reason).

But of course magicfunction(x) only works for their case C.
There is no statistical significance.

The number of AI guys who test their algorithm/method against state-of-the-art software, be it self-written or someone else's, AND test it in a statistically significant way without using some flawed test set where magicfunction(x) is always true: those guys you can
count on two hands for the past 30 years.
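
To give an idea of what I mean by testing beyond one case, here is a minimal sketch (solve_case() and num_cases are made-up names for illustration; solve_case() stands in for whatever "magicfunction(x) && f(x)" actually does): run the thing on a real test set and report a success rate with at least a crude error bar.

    /* Hypothetical sketch: run the method on many independent cases and
       report a success rate with a rough 95% confidence interval,
       instead of shouting victory after one case C. */
    #include <math.h>
    #include <stdio.h>

    /* Stand-in for the method under test: replace with the real
       "magicfunction(x) && f(x)" check.  Here it is just a placeholder. */
    static int solve_case(int case_id)
    {
        return case_id % 2;               /* placeholder result, 1 = solved */
    }

    int main(void)
    {
        const int num_cases = 500;        /* a real test set, not one position */
        int solved = 0;

        for (int i = 0; i < num_cases; i++)
            solved += solve_case(i);

        double p  = (double)solved / num_cases;
        double se = sqrt(p * (1.0 - p) / num_cases);   /* normal approximation */

        printf("solved %d/%d (%.1f%% +/- %.1f%%)\n",
               solved, num_cases, 100.0 * p, 100.0 * 1.96 * se);
        return 0;
    }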

In parallel AI it is even worse, in fact. There are, for example, just two chess programs that scale well on supercomputers (that is, without first losing a factor of 50 to the parallel search framework). There is some buzz now about Go programs using UCT/Monte Carlo combined with a few other algorithms; but these randomized algorithms miss such simple tactics, tactics that with their computing power should be easy not to miss, that they really should think out a better way to parallelize their search.
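
For those who have not seen UCT: at every tree node it picks the child that maximizes win rate plus an exploration bonus. A minimal sketch of just that selection step, assuming an invented node struct (not taken from any particular Go program):

    /* UCT child-selection sketch; only the formula is the standard UCB1 term. */
    #include <math.h>

    struct node {
        int    visits;            /* times this node was simulated through */
        double wins;              /* accumulated wins from those simulations */
        int    num_children;
        struct node *children;
    };

    struct node *select_child(struct node *parent, double c)
    {
        struct node *best = NULL;
        double best_score = -1.0;

        for (int i = 0; i < parent->num_children; i++) {
            struct node *ch = &parent->children[i];
            if (ch->visits == 0)
                return ch;        /* try unvisited children first */
            double score = ch->wins / ch->visits
                         + c * sqrt(log((double)parent->visits) / ch->visits);
            if (score > best_score) {
                best_score = score;
                best = ch;
            }
        }
        return best;
    }

The "random" part those programs rely on sits in the playouts below this selection step; the selection itself is deterministic given the statistics.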

Every AI solution that is not embarrassingly parallel is so difficult to parallelize well, i.e. without losing a factor of 50 (on a 1000-core machine, losing a factor of 50 means you effectively get the work of about 20 cores), that it is just really hard to make generic frameworks that work really efficiently. Such a framework would be a solution for only one program; as soon as you improve the program a year later with a new enhancement, the entire parallel framework might need to be rewritten from scratch again in order not to lose some factors in scaling speed once more.

The commercial way most AI guys look at the parallel problem is therefore really simple: "Can I get faster than I am on a PC? It's a government machine anyway, I didn't pay for efficient usage of it, I just want to be faster than my home PC
    without too much effort."

I'm not going to fight that lemma.

However, a generic framework that works like that is of course not going to get used. A custom
parallel solution simply speeds you up so much that everyone ends up writing their own.

Besides, the hardware is so expensive that it's worth the effort.

>This process sounds like it would take a lot of computing resources. Guess what? We have that.

>Why not throw a cluster at this problem. Maybe it would take a week to create a binary,
>but it would be cluster time and not your time. There would be no edit/make/run cycle
>because the description tells the compiler what the program has to do. The minutia
>(or opportunities for bugs) of programming whether it be serial or parallel would be
>handled by the compiler. Talk about a killer application.




On Oct 2, 2008, at 11:05 AM, John Hearns wrote:

I just read Douglas Eadline's article on Linux Magazine, entitled "What He Said"
http://www.linux-mag.com/id/7087

Very thought provoking article, and took me back to thinking about genetic algorithms, a subject I flirted with 20 years ago. I didn't find it worthwhile on a Sparc1 system with a whopping 2 Mbytes of RAM.

I guess I should encourage responses to be made on the Linux Mag site.

John Hearns
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

