Here's an obligatory plug for the following two PRs:
https://github.com/numpy/numpy/pull/5457
https://github.com/numpy/numpy/pull/5470
Regards
Antoine.
On Fri, 6 May 2016 15:01:32 +0200
Francesc Alted wrote:
Note that anything larger than 16-byte alignment is unnecessary for
SIMD purposes on current hardware (>= Haswell); 16 bytes is the default
malloc alignment on amd64.
And even on older chips (Sandy Bridge) the penalty for unaligned access is pretty minor.
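A quick way to see the default alignment from Python (a small sketch; the exact guarantee depends on the platform's allocator, but glibc malloc on amd64 returns blocks aligned to at least 16 bytes):

```python
import numpy as np

# A freshly allocated array gets its buffer straight from the system
# allocator, so the data pointer reflects malloc's alignment guarantee.
a = np.empty(1000, dtype=np.float64)
print(a.ctypes.data % 16)  # typically 0 on glibc/amd64
```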
On 05.05.2016 22:32, Charles R Harris wrote:
On Thu, May 5, 2016 at 2:10 PM, Øystein Schønning-Johansen <
oyste...@gmail.com> wrote:
Thanks for your answer, Francesc. Knowing that there is no numpy solution
saves the work of searching for this. I've not tried the solution described
at SO, but it looks like a real performance killer. I'd rather try to
override malloc with glibc's malloc hooks or LD_PRELOAD tricks. Do you think
tha
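For reference, the workaround usually given on Stack Overflow is to over-allocate and slice into the buffer at an aligned offset, roughly like this (a sketch; `aligned_empty` is an illustrative helper, not a numpy API):

```python
import numpy as np

def aligned_empty(shape, dtype=np.float64, align=16):
    """Uninitialized array whose data pointer is `align`-byte aligned.

    Over-allocates by `align` extra bytes, then slices into the buffer
    at the first aligned address (illustrative helper, not a numpy API).
    """
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    buf = np.empty(nbytes + align, dtype=np.uint8)
    # bytes to skip to reach the next align-byte boundary
    offset = (-buf.ctypes.data) % align
    return buf[offset:offset + nbytes].view(dtype).reshape(shape)

a = aligned_empty((100, 100), align=32)
print(a.ctypes.data % 32)  # 0: the view starts at a 32-byte boundary
```

The extra allocation and slicing cost is a one-time constant per array, which is why this trick is usually cheap despite looking wasteful.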
2016-05-05 11:38 GMT+02:00 Øystein Schønning-Johansen :
Hi!
I've written a little bit of numpy code that does a neural network
feedforward calculation:

def feedforward(self, x):
    for activation, w, b in zip(self.activations, self.weights, self.biases):
        x = activation(np.dot(w, x) + b)
    return x

This works fine when my activation funct
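Filled out into a runnable sketch (the layer sizes, random weights, and `tanh` activation are made up for illustration; only the `feedforward` loop comes from the post):

```python
import numpy as np

class Net:
    def __init__(self, sizes):
        rng = np.random.default_rng(0)
        # one weight matrix / bias vector per consecutive pair of layer sizes
        self.weights = [rng.standard_normal((n, m)) for m, n in zip(sizes, sizes[1:])]
        self.biases = [rng.standard_normal(n) for n in sizes[1:]]
        self.activations = [np.tanh] * (len(sizes) - 1)

    def feedforward(self, x):
        # propagate x through each layer: affine transform, then nonlinearity
        for activation, w, b in zip(self.activations, self.weights, self.biases):
            x = activation(np.dot(w, x) + b)
        return x

net = Net([4, 8, 3])
y = net.feedforward(np.ones(4))
print(y.shape)  # (3,)
```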