Hello.
> > [...]
>
> >> > I don't think that CM development should be focused on performance
> >> > improvements that are so sensitive to the actual hardware (if it's indeed
> >> > the varying amount of CPU cache that is responsible for this
> >> > discrepancy).
> >> >
> >> That would apparently [...]
2011/10/15 Gilles Sadowski:
> [...]
On 10/15/11 8:46 AM, Phil Steitz wrote:
> [...]
On 10/15/11 5:41 AM, Gilles Sadowski wrote:
> [...]
Hi.
> first of all, I was the author of this very useful statement on
> factories... Very constructive indeed.
Liking something or not is an impression that could well be justified
afterwards. It also pushes to look for arguments that ascertain the
feeling. ;-)
Hello,
first of all, I was the author of this very useful statement on
factories... Very constructive indeed.
>
> However it also shows that the improvement is only ~13% instead of the ~30%
> reported by the benchmark in the paper...
>
Could it be that their "naive" implementation as a 2D array is [...]
> [...]
>
> I think that there was an important remark in the paper referred to in this
> thread (2nd paragraph, page 10) saying (IIUC) that changing the storage
> layout, from 2D to 1D, effectively led to a speed improvement *only* for
> matrices of sizes larger than 1010. Which leads me to think [...]
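For readers following along, the layout change discussed above (2D versus 1D storage) can be sketched as follows. This is illustrative code, not the actual Commons Math API: a flat row-major array keeps all entries contiguous in memory, which tends to be friendlier to the CPU cache than a double[][] whose rows are separate heap objects.

```java
// Sketch of flat (1D, row-major) storage for an n x n matrix.
// Entry (i, j) lives at offset i * n + j in a single contiguous array.
public class FlatMatrix {
    private final int n;
    private final double[] data;

    public FlatMatrix(int n) {
        this.n = n;
        this.data = new double[n * n];
    }

    public double get(int i, int j) {
        return data[i * n + j];
    }

    public void set(int i, int j, double value) {
        data[i * n + j] = value;
    }
}
```

Whether this wins over double[][] in practice depends on matrix size relative to the cache, which is exactly the hardware sensitivity discussed earlier in the thread.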
Hello.
> > [...]
> >
> > I might have missed the goal of your proposal but I think that the main
> > point of the discussion had been about having a separate class for
> > operations. I don't recall that a new implementation ("SymmetricMatrix")
> > with a specifically optimized storage was not approved [...]
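For context, the kind of "specifically optimized storage" a symmetric-matrix class could use is packed triangular storage. The sketch below is hypothetical (there is no such Commons Math class being quoted here): only the lower triangle is stored, roughly halving memory, and (i, j) and (j, i) map to the same slot.

```java
// Sketch of packed lower-triangular storage for a symmetric n x n matrix.
// The packed array has length n * (n + 1) / 2.
public class SymmetricMatrix {
    private final double[] packed;

    public SymmetricMatrix(int n) {
        this.packed = new double[n * (n + 1) / 2];
    }

    private int index(int i, int j) {
        // Normalize so that i >= j, then use the standard packed index.
        if (i < j) { int t = i; i = j; j = t; }
        return i * (i + 1) / 2 + j;
    }

    public double get(int i, int j) {
        return packed[index(i, j)];
    }

    public void set(int i, int j, double value) {
        // Writing (i, j) also "writes" (j, i): same packed slot.
        packed[index(i, j)] = value;
    }
}
```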
On Fri, Oct 14, 2011 at 4:12 PM, Gilles Sadowski <gil...@harfang.homelinux.org> wrote:
> Hello.
>
> [...]
Hello.
>
> I believe that the BlockMatrix approach you have taken is actually quite
> good and might yield a bit more stable optimization improvement than simple
> loop unrolling. Consider operations on non-overlapping blocks. You could
> easily dispatch those in parallel. I agree that it is a bit [...]
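The parallel-dispatch idea above can be sketched as below. This is a hedged illustration of the principle, not BlockRealMatrix's actual internals: because non-overlapping blocks touch disjoint memory, a per-block operation (here, scaling each block) can safely be handed to a thread pool with no locking.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: each inner array stands for one block of a block-structured
// matrix; tasks never share a block, so they can run concurrently.
public class ParallelBlocks {
    public static void scaleBlocks(double[][] blocks, double factor) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Callable<Void>> tasks = new ArrayList<>();
        for (double[] block : blocks) {
            tasks.add(() -> {
                // Each task owns its block exclusively.
                for (int k = 0; k < block.length; k++) {
                    block[k] *= factor;
                }
                return null;
            });
        }
        try {
            pool.invokeAll(tasks); // waits for all per-block tasks
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.shutdown();
        }
    }
}
```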
On Fri, Oct 14, 2011 at 1:28 PM, Luc Maisonobe wrote:
> On 14/10/2011 20:08, Greg Sterijevski wrote:
>
> [...]
On 14/10/2011 20:08, Greg Sterijevski wrote:
I looked more closely at the package and I am impressed with the breadth of
material covered. Moreover, this package will do to finance what Mahout is
doing to companies like SAS and SPSS. Having spent a good part of my career
in finance, this package (and others) will put a lot of small 'analytic [...]
On 10/14/11 7:47 AM, Emmanuel Bourg wrote:
> [...]
Thanks [...]
Interesting that they are making such heavy weather of their 'innovations' like
loop unrolling. I would be interested in how much mileage they will get out
of those tricks in more complicated code. I do not intend to disparage their
approach, but maintaining code with too many cute optimizations costs [...]
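For readers unfamiliar with the trick being discussed, here is a minimal, generic illustration of loop unrolling (a sketch of the general technique, not OpenGamma's or Commons Math's actual code). Processing four elements per iteration with independent accumulators reduces loop overhead and gives the JIT more instruction-level parallelism to exploit.

```java
// Dot product unrolled by 4, with a scalar tail for leftover elements.
public class DotProduct {
    public static double unrolled(double[] a, double[] b) {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        int i = 0;
        int limit = a.length - 3;
        for (; i < limit; i += 4) { // main loop: 4 elements per iteration
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        for (; i < a.length; i++) { // tail: remaining 0-3 elements
            s0 += a[i] * b[i];
        }
        return s0 + s1 + s2 + s3;
    }
}
```

Note the maintenance cost being pointed out above: this is four times the code of the plain loop, and the independent accumulators change floating-point summation order.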
Hi,
I just saw this article, that might be of interest for some of the
[math] devs. They claim to have found an optimization that is 1.6 times
faster than Commons Math:
http://www.opengamma.com/blog/2011/10/14/maths-library-development
Emmanuel Bourg