I actually came across a setting in my device manager called write-cache
buffer flushing. When you disable write-cache buffer flushing, application
software can blaze ahead after writing data to disk without waiting for the
physical write to complete.
http://noel.prodigitalsoftware.com
Hello!

On Sat, Jun 28, 2014 at 12:38:12AM -0400, c0nw0nk wrote:

> Latest picture
> http://s633.photobucket.com/user/C0nw0nk/media/Untitled-7.png.html
>
> Everything utilizing the reads and writes is nginx, and when I set the
> following buffers I get massive spikes like that.
>
> location ~ \.mp4$ {
>     mp4;
>     mp4_buffer_size 9000m;
>     mp4_max_buffer_size 9000m;
> }
I ran a couple more tests, with a 2000 MB and then a 4000 MB test size.
---
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/

After benchmarking, this was my output. I have no idea if it is good or bad;
I am hoping someone with a better understanding of I/O usage can tell me
whether I have hit my disk's maximum or not.

This is the version of CrystalDiskMark I benchmarked with:
http://crystalmark.info/redirect.php?product=Cryst
Latest picture:
http://s633.photobucket.com/user/C0nw0nk/media/Untitled-7.png.html

Everything utilizing the reads and writes is nginx, and when I set the
following buffers I get massive spikes like that.

location ~ \.mp4$ {
    mp4;
    mp4_buffer_size 9000m;
    mp4_max_buffer_size 9000m;
}
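
For contrast, the documented defaults for these two directives are far
smaller (512k and 10m respectively). A minimal sketch of a more conservative
starting point; the values below are illustrative assumptions, not a
recommendation from the thread:

location ~ \.mp4$ {
    mp4;
    mp4_buffer_size     1m;     # initial buffer per request; documented default is 512k
    mp4_max_buffer_size 20m;    # cap for moov-atom processing; documented default is 10m
}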
I don't know if what you're experiencing is related to a problem I'm still
tracking down, specifically that multiple redundant read-streams and
corresponding temp_files are being opened to read the same file from a backend
server for what appears to be a single initial GET request by a client …
So a disk spinning at 15,000 rpm compared to my current hard drive spinning
at 7,200 rpm still does better than an SSD?

This is my current hard drive, which I believe I posted earlier:
http://www.hgst.com/hard-drives/enterprise-hard-drives/enterprise-sata-drives/ultrastar-7k4000
c0nw0nk Wrote:
---
> Perhaps nginx should look at the I/O usage to do with that function
> and see if they can make it better.
It's a disk subsystem issue, which is under the control of the OS, not
nginx; a good 15k SAS drive does wonders.
My new solution did not last very long; everything shot up again. So the mp4
function is needed to drop I/O usage, but what the optimal setting for the
buffers is really does baffle me.
I think I found the solution: rather than buffering or involving
pseudo-streaming, these mp4 videos are already HTML5 compatible, so I just
leave it to the browsers rather than my server.

So to solve my I/O usage issue I dropped "mp4;" from my server config
("#mp4;"), and now my I/O usage is basically back at 0.

Perhaps nginx should look at the I/O usage to do with that function and see
if they can make it better.
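
A minimal sketch of that change, assuming the same location block quoted
earlier in the thread (the root path is a hypothetical placeholder):

location ~ \.mp4$ {
    root /var/www/media;    # hypothetical document root
    # mp4;                  # pseudo-streaming disabled; the file is served as a
                            # plain static asset and HTML5 browsers issue their
                            # own Range requests for seeking
}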
Hmm, well, I have figured out that it is my mp4 buffers that need fixing,
but I reckon my largest video file on the server is maybe 700 MB. As for
figuring out what to set this to, I am currently just playing around with it
to see what works best.
Which shows disk I/O is much better, which to me indicates there were/are
too many small writes to disk. When some parts are slow, tuning is a big
time investment with nginx no matter which OS you're running.
The results got even more fascinating as I increased the buffer sizes to
the following.
client_max_body_size 0;
client_body_buffer_size 1000m;
mp4_buffer_size 700m;
mp4_max_buffer_size 1000m;
http://s633.photobucket.com/user/C0nw0nk/media/Untitled-6.png.html
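
For readers skimming the thread, an annotated restatement of those
directives (same values; the meanings follow the nginx documentation):

client_max_body_size 0;           # 0 disables checking of the client request body size
client_body_buffer_size 1000m;    # buffer for reading a client request body before spooling to disk
mp4_buffer_size 700m;             # initial buffer for processing an mp4 file
mp4_max_buffer_size 1000m;        # hard cap; a moov atom larger than this makes the request fail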
Try via a forum like
http://www.overclock.net/t/1193676/looking-for-hdd-benchmark-utility
Since I have never had to benchmark a hard drive before, this will be a new
experience for me. Are there any tools you recommend using specifically?
It all depends on what you are writing: too small a block size, many seeks,
or the onboard disk cache not working (writeback). Run some disk benchmarks
to see what your storage is capable of and compare that to how much data
you're attempting to write. At the moment your disks are not keeping up with
the amount of data being written.
So the solution could be a different hard drive, possibly a solid-state
drive? This is my current hard drive:
http://www.hgst.com/hard-drives/enterprise-hard-drives/enterprise-sata-drives/ultrastar-7k4000
Looking at the disk activity, access to disk is using all your resources,
not nginx. Here, http://s633.photobucket.com/user/C0nw0nk/media/Untitled-5.png.html,
you can see nginx itself is waiting for disk I/O to complete; all processes
are doing just about nothing other than waiting for the hard disk. The main …
Just to check whether it was my connection limit, I also enabled
nginx_status, and this was my output.
Active connections: 1032
server accepts handled requests
8335 8335 12564
Reading: 0 Writing: 197 Waiting: 835
How can I fix the I/O issue? Why is nginx consuming so much in the first
place?
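
For anyone who wants to reproduce the readout above, a minimal sketch of
enabling the status page via the stub_status module (the location name and
allowed address are assumptions):

location = /nginx_status {
    stub_status on;     # exposes the active/accepted/handled/waiting counters
    allow 127.0.0.1;    # assumption: restrict the page to localhost
    deny all;
}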
This is a disk I/O issue, not running out of connections. Setting 190 is
pointless; 16k is more than enough. No more than 2 workers per CPU; I see 12
workers, so do you have enough CPUs to cover that?
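
A minimal sketch of what that advice implies for the poster's 12 workers;
the 6-CPU count is a hypothetical that would satisfy the 2-workers-per-CPU
rule:

worker_processes 12;             # at most 2 workers per CPU, so this assumes 6 CPUs

events {
    worker_connections 16384;    # "16k is more than enough"
}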
When I said "my bandwidth output looks like it's very jumpy": on a
1-gigabit-per-second connection my output jumps up and down, 10%
(100 Mbit/s) used, then it will jump to something like 40% (400 Mbit/s), and
it changes constantly. Before, when I had less traffic, it used to be a very
steady and stable 400-500 Mbit/s output and hardly ever …
Now I am clueless, because I dropped keepalive requests and I also dropped
any send_timeout values. And this is what my bandwidth output looks like;
it's very jumpy when it should not be, and my page loads are very slow even
on static files like HTML, MP4, FLV, etc., and considering it's nginx that
delivers …
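
For reference, these are the two directives being dropped; the values shown
are the documented defaults of that era, for illustration only:

keepalive_requests 100;    # requests that may be served over one keep-alive connection
send_timeout 60s;          # timeout between two successive writes of a response to the client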
c0nw0nk Wrote:
---
> Could it be possible my server slows down because all connections are
> in use?

No, it's a recycling and auto-tuning issue as far as I can see. Have you
determined at which value you noticed the difference, or is this value …
Could it be possible my server slows down because all connections are in
use?
Hello!

On Thu, Jun 26, 2014 at 09:41:15AM -0400, c0nw0nk wrote:

> So I spent a while on this one, and it turns out the problem is a little
> directive in nginx's core called "worker_rlimit_nofile".
>
> But for me on Windows (I don't know if it does this for Linux users too)
> it grinds my site to a halt unless you increase its value.
I don't know how you would try to replicate this issue, because I have
thousands upon thousands of files being accessed simultaneously. Without me
setting that value insanely high, pages and access to things took 10 seconds
or more, and even timeouts were occurring, but as soon as I set that value
it all stopped.
c0nw0nk Wrote:
---
> Well, without a value everything is very, very slow. With a value it's
> nice and fast.

Interesting to know. The Windows design and other portions scale
automatically between 4 APIs to deal with high performance while offloading …
I reckon it's because I have media sites with lots of files, pictures,
videos, and other content, so I need the limit to be large.
Well, without a value everything is very, very slow. With a value it's nice
and fast.
The way things have been redesigned, worker_rlimit_nofile no longer serves
any purpose; it's best not to set any value.
So I spent a while on this one, and it turns out the problem is a little
directive in nginx's core called "worker_rlimit_nofile".

But for me on Windows (I don't know if it does this for Linux users too) it
grinds my site to a halt unless you increase its value.

Why does it do this?
http://nginx.org
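
For context, the directive in question lives in the main (top-level) context
of nginx.conf; a minimal sketch, with an illustrative value only:

worker_rlimit_nofile 65535;    # raise the maximum number of open files (RLIMIT_NOFILE) for worker processes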