I have measured it with my client (napalmex from gsmtk) and it was really only 
10 kfrag/s.

I regenerated the kernel to use only 4x32-bit vectors (genkernel32).

Now it computes 2 bursts per second with a 60 s lag (which could probably be 
fine-tuned); that is roughly as expected (we get 3 per second on 3x7970). 
(And now the connection died.)
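For a rough sanity check, the two rates can be compared directly. This is a back-of-envelope sketch only; it assumes the kfrag/s figure counts the same fragments that make up a burst, using the 16320 fragments-per-burst figure quoted later in the thread:

```python
# Back-of-envelope comparison of the two measured rates (a sketch;
# assumes 16320 fragments per burst, as quoted later in this thread).
FRAGS_PER_BURST = 16320

old_rate = 10_000                      # napalmex measurement: 10 kfrag/s
print(old_rate / FRAGS_PER_BURST)      # ~0.61 bursts/s -- clearly too slow

new_bursts_per_s = 2                   # after the genkernel32 regeneration
print(new_bursts_per_s * FRAGS_PER_BURST / 1000)   # ~32.6 kfrag/s
```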

On 23.2.2016 12:16, Milinko Isakovic wrote:
> Jan, thank you for your assistance with the setup.
> 
> 
> Currently we are facing an issue with the deka setup. So far we were able to 
> crack keys with an HD7900-series GPU, but the time required to crack one 
> burst (or 100 bursts) is approximately 3 minutes.
> How can we reduce this time?
> 
> For example, if 1 GPU is used per burst, can't we use 4 GPUs per burst to 
> decrease the lookup time? Also, I currently have 2 GPUs, but the time taken 
> to crack is the same.
> Here is my procedure:
> start paplon.py
> start oclvankus.py
> choose one GPU per instance and run 2 instances
> start one instance of delta_client.py
> start cracking
> 
> Here are some example results from my crack:
> 
> crack #0 took 110463 msec
> crack #5 took 111005 msec
> crack #2 took 113481 msec
> crack #6 took 114020 msec
> crack #18 took 115290 msec
> crack #7 took 116799 msec
> crack #9 took 117100 msec
> crack #3 took 118306 msec
> crack #4 took 118606 msec
> crack #15 took 118906 msec
> crack #13 took 119809 msec
> crack #8 took 120348 msec
> crack #16 took 120649 msec
> crack #10 took 123359 msec
> crack #19 took 124863 msec
> crack #11 took 126129 msec
> crack #14 took 127872 msec
> crack #17 took 130940 msec
> crack #12 took 137195 msec
> 
> As you can see, the time increases with each subsequent burst.
> 
> 
> I also checked bursts.samplesession.tar.gz: there it starts at 45000 ms and 
> ends at 65000 ms.
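The climb from ~110 s to ~137 s across the batch is what one would expect if the bursts are processed together: each burst's reported time is then close to the time the whole batch takes, and times rise with finish order. A toy illustration only, not Deka's actual scheduler; the per-round time and round counts are made-up numbers:

```python
import random

# Toy model: N bursts advance together, one GPU round at a time; each
# burst happens to need a slightly different number of rounds to finish.
random.seed(0)
round_time_s = 5.5   # assumed seconds per GPU round (illustrative only)
rounds_needed = sorted(random.randint(20, 25) for _ in range(20))
finish_times = [r * round_time_s for r in rounds_needed]
# reported per-crack times rise with finish order and all cluster
# near the total batch time, as in the measurements above
print(finish_times[0], finish_times[-1])
```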
> 
> Now the results with one GPU instance:
> 
> crack #5 took 135398 msec
> crack #0 took 135700 msec
> crack #2 took 138767 msec
> crack #15 took 140602 msec
> crack #9 took 140905 msec
> crack #6 took 141208 msec
> crack #3 took 144273 msec
> crack #4 took 144574 msec
> crack #7 took 144874 msec
> crack #16 took 146093 msec
> crack #13 took 146395 msec
> crack #8 took 147616 msec
> crack #21 took 150978 msec
> crack #14 took 152812 msec
> crack #10 took 153114 msec
> crack #19 took 153413 msec
> 
> There is very little difference between them.
> 
> 
> It would be good if someone could share the actual steps to take when using 
> multiple GPUs.
> 
> If we add, for example, 14 GPUs, will the speedup only show when raising the 
> number of bursts cracked in parallel (say from 200 to 1000), or will the 
> cracking time for 200 bursts in parallel also decrease, and by how much?
> 
> Also, I am wondering what happens as the CPU count rises. If I use a 
> multiprocessor motherboard (4 or 8 CPUs), will the time for one burst go 
> down from the current 20 seconds to 5-8 seconds? Or will we only be able to 
> finish 8 bursts in 20 seconds?
> 
> We are trying to get a speed of 1 second per burst... Is this possible at 
> all with DEKA? Which part do we need to focus on?
> 
> Later we will have questions regarding a cluster, but first we would like to 
> get the maximum from one computer...
> 
> 
> 
> 
> On Sun, Feb 21, 2016 at 3:03 PM, Jan Hrach <[email protected] 
> <mailto:[email protected]>> wrote:
> 
>     I think it looks OK now. I have raised QSIZE to 100. Looks like the queue 
> management in Deka sucks :-/
> 
>     PS: your cracker did not find "21D142736909C3A5 @ 35 #1 (table:108)" from 
> https://brmlab.cz/project/gsm/deka/deka-test (crack.txt in the linked 
> tarball), the others are OK.
> 
> 
>     On 21.2.2016 22:51, Milinko Isakovic wrote:
>     > If you are still there, /dev/sdg should now be OK. I think it was a 
> cable issue...
>     >
>     > On Sun, Feb 21, 2016 at 9:29 PM, Jan Hrach <[email protected] 
> <mailto:[email protected]>> wrote:
>     >
>     >     (ssh'ed there, installed htop and iotop)
>     >
>     >     This is because you try to crack only one burst at a time. The 
> kernel is designed so that it loads num_kernels * vector_size fragments at 
> once, i.e., 4096*32 = 131072 fragments (one burst is 16320 fragments), and 
> then the time for one round is almost constant. So you need to crack as many 
> bursts in parallel as it takes to keep it fully saturated. If you are 
> cracking only one burst, the CPU *is* faster. But if you submit for example 
> 20 bursts at once (I would recommend even more, say up to 100), you will 
> take advantage of the 2048 cores of your GPU.
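A quick calculation from the figures quoted above shows why a single burst cannot saturate the kernel (a sketch using only those figures; nothing here is Deka internals beyond what the text states):

```python
import math

# Figures quoted above: the kernel loads num_kernels * vector_size
# fragments per round; one burst is 16320 fragments.
num_kernels = 4096
vector_size = 32
frags_per_round = num_kernels * vector_size    # 131072 fragments per round
frags_per_burst = 16320

print(frags_per_burst / frags_per_round)       # ~0.12: one burst fills ~1/8
min_bursts = math.ceil(frags_per_round / frags_per_burst)
print(min_bursts)                              # 9 bursts to fill one round
```

Queueing 20 to 100 bursts, as recommended above, keeps the next round full while results from the previous one drain.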
>     >
>     >     PS: your /dev/sdg died, so currently it does not crack at all :-(
>     >
>     >
>     >     On 21.2.2016 19:05, Milinko Isakovic wrote:
>     >     > Hi to all,
>     >     >
>     >     > I tried deka, but there seem to be a lot of problems. First of 
> all, I had problems installing the drivers (Debian Jessie, amd64); in the 
> end I finally managed it somehow.
>     >     >
>     >     > Configuration is:
>     >     >
>     >     > 32 GB RAM
>     >     > 2x 7970
>     >     > 8x SAS disks
>     >     > i7 CPU
>     >     >
>     >     > The main problem for me is that GPU cracking seems not to be 
> working, or I do not know how to set it up.
>     >     >
>     >     > The screenshots I am sending show:
>     >     >
>     >     > 1-5.png  - GPU cracking  ---> 100+ sec
>     >     > cpu1-5.png - CPU cracking ---> 18-19 sec
>     >     >
>     >     > Does anybody know where I am making a mistake?
>     >     >
>     >     >
>     >     > I can also give SSH access to somebody who would like to run 
> tests / check where the problem is.
>     >     >
>     >     > Regards
>     >     > Milinko
>     >     >
>     >
>     >     --
>     >     Jan Hrach | http://jenda.hrach.eu/
>     >     GPG CD98 5440 4372 0C6D 164D A24D F019 2F8E 6527 282E
>     >
>     >
>     >
>     >
>     > --
>     >
>     > Milinko Isakovic, CEO
>     > CS Network Solutions Limited
>     >
>     > 72 Blythe Road, West Kensington, London W14 0HB
>     >
>     >
>     > London Switchboard: +442071933539
>     > Belgrade Operations: +381112620152
>     > Personal mobile:        +381666666666
>     >
>     > Web: www.cs-networks.net <http://www.cs-networks.net/>
>     > <http://www.facebook.com/smsanywhere> <https://twitter.com/cs_networks> 
> <http://www.linkedin.com/company/cs-networks>  
> <http://www.youtube.com/csnetworks>
>     > --------------------------
>     >
>     > This message (including any attachments) is confidential and may be 
> privileged. If you have received it by mistake please notify the sender by 
> return e-mail and delete this message from your system. Any unauthorized use 
> or dissemination of this message in whole or in part is strictly prohibited. 
> Please note that e-mails are susceptible to change. CS Networks shall not be 
> liable for the improper or incomplete transmission of the information 
> contained in this communication nor for any delay in its receipt or damage to 
> your system.  CS Networks does not guarantee that the integrity of this 
> communication has been maintained nor that this communication is free of 
> viruses, interceptions or interference.
> 
> 
> 
> 
> 

_______________________________________________
A51 mailing list
[email protected]
https://lists.srlabs.de/cgi-bin/mailman/listinfo/a51
