At 00:58 on 09/11/2024, Val wrote:
Hi All,
I am reading a data file (> 1B rows) and doing some date formatting like
dat <- fread(mydatafile)
dat$date1 <- as.Date(ymd(dat$date1))
However, I am getting an error message saying that
Error: cons memory exhausted (limit reached?)
The script was working when the number of rows was smaller.
Thank you, I will take a look.
On Fri, Nov 8, 2024 at 8:09 PM Ben Bolker wrote:
>
> Check the "high performance task view" on CRAN ...
> https://cran.r-project.org/web/views/HighPerformanceComputing.html
>
> On Fri, Nov 8, 2024, 7:58 PM Val wrote:
>>
>> Hi All,
>>
>> I am reading data file ( >
Thank you Jeff for the tip! I don't think I have 4 times as much
free memory to process the data...
I allocated the maximum memory the system has.
On Fri, Nov 8, 2024 at 8:30 PM Jeff Newmiller wrote:
>
> There is always an implied "and do computations on it before writing the
> processed data out" when reading chunks of a file.
There is always an implied "and do computations on it before writing the
processed data out" when reading chunks of a file.
And you would almost certainly not be getting that error if you were not out of
memory. A good rule of thumb is that you need 4 times as much free memory to
process data.
Is the problem reading the file in or processing it after it has been read
in?
Bert
On Fri, Nov 8, 2024 at 5:13 PM Jeff Newmiller via R-help <
r-help@r-project.org> wrote:
> Can you tell us what is wrong with the "chunked" package which comes up
> when you Google "r read large file in chunks"?
>
Check the "high performance task view" on CRAN ...
https://cran.r-project.org/web/views/HighPerformanceComputing.html
On Fri, Nov 8, 2024, 7:58 PM Val wrote:
> Hi All,
>
> I am reading data file ( > 1B rows) and do some date formatting like
> dat=fread(mydatafile)
> dat$date1 <- as.Date(ymd(dat$date1))
Hi Jeff,
Memory was not an issue. The system only used 75% of the memory
allocated for the job.
I am trying to understand what "r read large file in chunks" is doing.
On Fri, Nov 8, 2024 at 7:50 PM Jeff Newmiller wrote:
>
> Then you don't have enough memory to process the whole thing at once.
Then you don't have enough memory to process the whole thing at once. Not
unlike stuffing your mouth with cookies and not being able to chew for lack of
space to move the food around in your mouth.
Now, can you answer my question?
On November 8, 2024 5:38:37 PM PST, Val wrote:
>The data was read. The problem is with processing.
The data was read. The problem is with processing.
On Fri, Nov 8, 2024 at 7:30 PM Bert Gunter wrote:
>
> Is the problem reading the file in or processing it after it has been read in?
>
> Bert
>
> On Fri, Nov 8, 2024 at 5:13 PM Jeff Newmiller via R-help
> wrote:
>>
>> Can you tell us what is wrong with the "chunked" package which comes up when
>> you Google "r read large file in chunks"?
Can you tell us what is wrong with the "chunked" package which comes up when
you Google "r read large file in chunks"?
On November 8, 2024 4:58:18 PM PST, Val wrote:
>Hi All,
>
>I am reading data file ( > 1B rows) and do some date formatting like
> dat=fread(mydatafile)
> dat$date1 <- as.Date(ymd(dat$date1))
Hi All,
I am reading a data file (> 1B rows) and doing some date formatting like
dat <- fread(mydatafile)
dat$date1 <- as.Date(ymd(dat$date1))
However, I am getting an error message saying that
Error: cons memory exhausted (limit reached?)
The script was working when the number of rows was smaller.
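Following Jeff's pointer to chunked processing, one way to keep memory flat is to convert the date column chunk by chunk and stream the result to disk. A minimal sketch using readr's read_csv_chunked; the file names and chunk size here are made up (the original code used data.table::fread, which can also read ranges via skip/nrows):

```r
library(readr)      # read_csv_chunked, write_csv
library(lubridate)  # ymd

## Convert date1 in each chunk as it is read, append the processed rows
## to an output file, and never hold more than one chunk in memory.
process_chunk <- function(chunk, pos) {
  chunk$date1 <- as.Date(ymd(chunk$date1))
  write_csv(chunk, "mydata_processed.csv", append = pos > 1)
}

read_csv_chunked("mydata.csv",
                 SideEffectChunkCallback$new(process_chunk),
                 chunk_size = 1e6)
```

The `append = pos > 1` makes the first chunk write the header and every later chunk append rows only.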
From: Bert Gunter
Sent: Sunday, October 14, 2018 10:51 PM
To: reichm...@sbcglobal.net
Cc: R-help
Subject: Re: [R] limit bar graph output
If I understand correctly, just subset your sorted data.
e.g. :
x <- runif(50)
## 50 unsorted values
sort(x, dec = TRUE)[1:10]
A reproducible example would help here (you cannot assume we know what type
"miRNA" is) but guessing from the use of "reorder" I suspect it is a factor. In
which case after you subset you will need to use the droplevels function to
remove the unused levels, and then plot that prepared data.
On
If I understand correctly, just subset your sorted data.
e.g. :
x <- runif(50)
## 50 unsorted values
sort(x, dec = TRUE)[1:10]
## the 10 biggest
-- Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed)
R-Help Forum
I'm using the following code to reorder (from highest to lowest) my miRNA
counts. But there are 500-plus and I only need the first (say) 15-20. How
do I limit ggplot to only the first 20 miRNA counts?
ggplot(data = corr.m, aes(x = reorder(miRNA, -value), y = value, fill =
variable))
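Putting Bert's subsetting advice together with the droplevels step, a sketch on simulated stand-in data (the real corr.m comes from the poster's miRNA counts; the column names are taken from the post):

```r
library(ggplot2)

## Stand-in for corr.m: 500 miRNAs, as in the post
set.seed(1)
corr.m <- data.frame(miRNA    = factor(paste0("miR-", 1:500)),
                     value    = runif(500),
                     variable = "count")

## Keep the 20 largest values and drop the now-unused factor levels
top20 <- droplevels(head(corr.m[order(-corr.m$value), ], 20))

ggplot(top20, aes(x = reorder(miRNA, -value), y = value, fill = variable)) +
  geom_col()
```

Without droplevels() the x axis would still show all 500 levels, most of them empty.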
I am developing an application that opens an image in a new window using, at
times, windows(). I don't want the user to be able to resize the window (and
distort the image). The new window contains a menu item called "resize" that
contains three options - "R mode", "Fit to window", and "Fixed size".
This is the wrong place to ask what RStudio can or cannot do. However, if your
question is about R you should try invoking your reproducible example in RGui
or the command line R.exe before posting here.
R has no directory depth limit. There is an operating system limit on returning
paths more
A quick question - is there a limit to the number of levels one can go down
when setting the directory in R studio?
I ask because I have been trying to set the directory to a folder 8 levels down
which R studio won't allow, and when I try to set the directory through
Session/Set Working Directory
> Hello everybody,
>
> Using the ggplot2 package, is there a way to force the y-axis line to stop
> at a specified point? (Not using ylim, because I want some text written
> using annotate() at the top of the graph to still be shown.)
>
> Below is a simple example to show
Hi
Here are some alternatives that involve messing about with the grobs and
viewports after the plot has been drawn. The percentage optimality of
these solutions is up for debate ...
###
# Edit the y-axis line after
After many tries, here is a solution using grob.
I post here in case it could help someone.
Note that this solution is not 100% optimal, as it uses a trick (limits =
c(-0.05, 1.02)) to show the points fully.
Marc
library("ggplot2"); require("gtable"); require("grid")
p <- ggplot()+
geom_poin
Hello everybody,
Using the ggplot2 package, is there a way to force the y-axis line to stop
at a specified point? (Not using ylim, because I want some text written
using annotate() at the top of the graph to still be shown.)
Below is a simple example to show what I would like to do:
Thanks a lot
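For completeness: ggplot2 versions from 3.0.0 on offer a simpler alternative to editing grobs, though it controls the panel range rather than the axis-line endpoint. coord_cartesian() zooms without discarding data, and clip = "off" lets an annotation sit above the panel. A sketch on toy data (all values invented):

```r
library(ggplot2)

## Toy data
d <- data.frame(x = 1:10, y = (1:10) / 10)

p <- ggplot(d, aes(x, y)) +
  geom_point() +
  ## text placed above the y range; clip = "off" keeps it visible
  annotate("text", x = 5, y = 1.1, label = "still shown") +
  coord_cartesian(ylim = c(0, 1), clip = "off")
p
```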
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Kawashima, Masayuki
Sent: Wednesday, November 5, 2014 10:51 PM
To: r-help@r-project.org
Subject: [R] limit of cmdscale function
Hi
We have a few questions regarding the use of the "isoMDS" function.
When we run the "isoMDS" function using a 60,000 x 60,000 data matrix,
we get the following error message:
cmdscale(d, k) : invalid value of 'n'
Calls: isoMDS -> cmdscale
--
On Aug 29, 2012, at 10:23 AM, Siddeek, Shareef (DFG) wrote:
Hi,
Can someone help me on the following problem?
I have nearly 103,000 records with 22 variables (some are factors).
When I ran a stepwise glm on the data set, the R program stops with
an error message
Stepwise methods are st
Hi,
Can someone help me on the following problem?
I have nearly 103,000 records with 22 variables (some are factors). When I ran
a stepwise glm on the data set, the R program stops with an error message
"Error: cannot allocate vector of size 171.3Mb."
I have R 2.12.0 on my PC (Windows XP)
Dear all,
I am trying to apply the logistic regression to determine the limit of
detection (LOD) of a molecular biology assay, the polymerase chain reaction
(PCR). The aim of the procedure is to identify the value (variable
"dilution") that determines a 95% probability of success, that is
"posit
Dear Ellison,
You are right, now the figure is good! Question solved.
Thank you very much!
Best wishes,
Luigi
-Original Message-
From: S Ellison [mailto:s.elli...@lgcgroup.com]
Sent: 24 July 2012 10:20
To: Luigi; r-help@r-project.org
Subject: RE: [R] limit of detection (LOD) by logistic
> set 1; however the figure obtained from the sample set 2
> shows that interpolation is not correct.
I don't think the interpolation is incorrect; what makes it look incorrect
is using a straight line to represent a logistic regression.
Try adding the predicted values for the line to your plot.
Dear all,
I am trying to apply the logistic regression to determine the limit of
detection (LOD) of a molecular biology assay, the polymerase chain reaction
(PCR). The aim of the procedure is to identify the value (variable
"dilution") that determine a 95% probability of success, that is
"positive
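A sketch of the LOD calculation the poster describes, on simulated data (all numbers invented): fit the binomial GLM, then invert it at p = 0.95 with MASS::dose.p. Note that dose.p returns the answer on the scale of the predictor, here log10(dilution):

```r
library(MASS)  # dose.p

## Simulated PCR runs: probability of a positive result rises with dilution
set.seed(1)
dilution <- rep(c(0.01, 0.1, 1, 10, 100), each = 20)
positive <- rbinom(length(dilution), 1, plogis(1.5 * log10(dilution) + 1))

fit <- glm(positive ~ log10(dilution), family = binomial)

## log10(dilution) giving a 95% probability of a positive, with its SE
lod <- dose.p(fit, p = 0.95)
lod
10^as.numeric(lod)[1]   # back on the dilution scale
```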
Hello everybody,
I hope you can give me some help with limiting the ranges in x, y, and z for a
hexbin plot. All I have found on the net is an unanswered message to this list
from last year, so I hope my problem is not too stupid.
I would like to plot some data using hexagonal binning. Currently
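A minimal sketch of restricting the binning region (data and bounds invented): hexbin()'s xbnds/ybnds arguments fix the binning limits, and subsetting first keeps stray points from falling outside them:

```r
library(hexbin)

set.seed(1)
x <- rnorm(10000)
y <- rnorm(10000)

## Keep only the region of interest, then bin within fixed bounds so the
## plot's x and y ranges are exactly c(-2, 2)
keep <- abs(x) <= 2 & abs(y) <= 2
hb <- hexbin(x[keep], y[keep], xbnds = c(-2, 2), ybnds = c(-2, 2))
plot(hb)
```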
On Aug 16, 2011, at 1:33 PM, Sarah Goslee wrote:
Hi Noah,
On Tue, Aug 16, 2011 at 1:25 PM, Noah Silverman wrote:
Hello,
I'm trying to read in a fairly large file into R, and am getting an
odd error (65000 rows, 37 columns)
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines,
Hi Noah,
On Tue, Aug 16, 2011 at 1:25 PM, Noah Silverman wrote:
> Hello,
>
> I'm trying to read in a fairly large file into R, and am getting an odd error
> (65000 rows, 37 columns)
>
> Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
> line 25628 did not have 37 elements
Hello,
I'm trying to read in a fairly large file into R, and am getting an odd error
(65000 rows, 37 columns)
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
line 25628 did not have 37 elements
That line DOES have 37 elements. As a test, I tried deleting it, a
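A way to see why scan() disagrees about a line that "looks" complete: count.fields() reports how many fields each line parses to under the same quoting rules, and an unmatched quote is the usual culprit. A self-contained demo with a tiny made-up file (the real file and field count differ):

```r
## A tiny demo file with an unmatched quote on line 3
f <- tempfile()
writeLines(c("a\tb\tc", "1\t2\t3", "4\t\"5\t6", "7\t8\t9"), f)

## Under default quoting, the unmatched quote makes line 3 run into the
## next line, so the field counts come out wrong
count.fields(f, sep = "\t")

## With quoting disabled, every line parses to 3 fields again
n <- count.fields(f, sep = "\t", quote = "")
which(n != 3)   # none

dat <- read.table(f, sep = "\t", header = TRUE, quote = "", comment.char = "")
nrow(dat)   # 3 data rows
```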
Dear All,
I am using the code below to calculate error bars. I note that the length of
the error bars can be varied by changing the constant (0.975). It appears
that any number can be substituted for 0.975, making it confusing for me to
know how to quantify the error bars.
I wish to quantify
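The 0.975 is not arbitrary: for a two-sided 95% interval you take the 1 - 0.05/2 = 0.975 quantile of the reference distribution, so the multiplier is qnorm(0.975) ≈ 1.96, or qt(0.975, df) for small samples. A small sketch with made-up data:

```r
## 95% confidence half-width for a mean, using the t quantile at 0.975
x    <- c(5.1, 4.8, 5.4, 5.0, 4.9)
se   <- sd(x) / sqrt(length(x))
half <- qt(0.975, df = length(x) - 1) * se

## Error bars drawn at mean(x) +/- half form a 95% confidence interval
mean(x) + c(-1, 1) * half
```

Substituting a different probability simply changes the confidence level (e.g. 0.995 gives a 99% interval).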
On Sat, 4 Sep 2010, raje...@cse.iitm.ac.in wrote:
Hi,
I have the following piece of code,
repeat{
ss<-read.socket(sockfd);
if(ss=="") break
output<-paste(output,ss)
}
but somehow, output is not receiving all the data that is coming through the
socket. My suspicion is on the if statement. wh
Hi,
I have the following piece of code,
repeat {
  ss <- read.socket(sockfd)
  if (ss == "") break
  output <- paste(output, ss)
}
but somehow, output is not receiving all the data that is coming through the
socket. My suspicion is on the if statement. What happens if a white space
occurs in between the st
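On the whitespace worry: read.socket() returns "" only when nothing more can be read, not for blank content, but paste() inserts a space between its arguments, which can corrupt a message that arrives split across reads. A sketch, wrapped as a function since it assumes an open socket from make.socket():

```r
## Accumulate everything the socket sends; parse only after the loop.
read_all <- function(sockfd) {
  output <- ""
  repeat {
    ss <- read.socket(sockfd)     # "" once nothing more can be read
    if (ss == "") break
    output <- paste0(output, ss)  # paste0 adds no separator between reads
  }
  output
}
```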
On Sat, Feb 6, 2010 at 8:53 AM, Pete Shepard wrote:
> I am using t-test to check if the difference between two populations is
> significant. I have a large N=20,000, 10,000 in each population. I compare a
> few different populations with each other and though I get different t-scores,
> I get the
Hello,
I am using a t-test to check if the difference between two populations is
significant. I have a large N=20,000, 10,000 in each population. I compare a
few different populations with each other and though I get different t-scores,
I get the same p-value of 10^-16, which seems like the limit for t
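The 10^-16 "limit" is display behaviour, not a property of the t-test: R's p-value formatting reports anything below .Machine$double.eps (about 2.2e-16) as "< 2.2e-16", but the unrounded value is stored in the test object. A sketch with simulated populations:

```r
set.seed(1)
a <- rnorm(10000, mean = 0)
b <- rnorm(10000, mean = 0.5)

tt <- t.test(a, b)
tt$p.value          # far below 2.2e-16 for a difference this large
.Machine$double.eps # the threshold the print method uses
```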
Adam Waldemar Kowalewski wrote:
Hello,
I've been writing a program in C that will be called by R. I seem to have
stumbled upon an odd error that seems to suggest there is a limit on the
number of times "Realloc" (the R version as defined in the manual
"R-extensions", not the C version "realloc")
Hello,
I've been writing a program in C that will be called by R. I seem to have
stumbled upon an odd error that seems to suggest there is a limit on the
number of times "Realloc" (the R version as defined in the manual
"R-extensions", not the C version "realloc") can be called when I try to use
the following p
Thank you Uwe and Prof. Ripley.
The problem was solved. The row in question did indeed have garbage data,
which probably truncated the number of lines read. I apologise
for the oversight.
Thank you once again.
Regards
Harsh Singhal
Bangalore, India
On Tue, Dec 2, 2008 at 2:50 PM, Prof Brian Ripley wrote:
Take a look at your dataset at around that row. Perhaps you have an
unmatched quote?
The limit on the number of rows of a data frame is far larger than 100,000
(2^31-1, but you will run out of address space on a 32-bit platform before
that - see ?"Memory-limits").
On Tue, 2 Dec 2008, Harsh
Harsh wrote:
Hello,
I am trying to read a dataset with 100,000 rows and around 365 columns
into R, using read.table/read.csv.
In Windows XP, with R 32 bit, I am able to read only 15266 rows and
not more than that.
I tried the same in R running in Ubuntu and it does the same and reads
only 15266 rows.
Hello,
I am trying to read a dataset with 100,000 rows and around 365 columns
into R, using read.table/read.csv.
In Windows XP, with R 32 bit, I am able to read only 15266 rows and
not more than that.
I tried the same in R running in Ubuntu and it does the same and reads
only 15266 rows.
Using the
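A way to act on Prof Ripley's hint about an unmatched quote: pull the lines around the row where reading stopped and look for an odd number of quote characters, then re-read with quoting disabled. A self-contained demo with a tiny made-up file (for the real data, read the lines around row 15266 instead):

```r
## Demo file where line 3 contains an unmatched quote
f <- tempfile()
writeLines(c("x,y", "1,2", "3,\"4", "5,6", "7,8"), f)

lines <- readLines(f)
## Lines with an odd number of double quotes hold an unmatched quote
quote_count <- vapply(lines, function(l)
  lengths(regmatches(l, gregexpr('"', l, fixed = TRUE))), integer(1))
which(quote_count %% 2 == 1)   # line 3

## Default quoting silently merges rows from line 3 onward;
## disabling it recovers every line as its own row
nrow(read.csv(f))                                # fewer rows than expected
nrow(read.csv(f, quote = "", comment.char = "")) # all 4 data rows
```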
On Thu, 5 Jun 2008, [EMAIL PROTECTED] wrote:
I have (below) an attempt at an R script to find the limit distribution
of
a continuous-time Markov process, using the formulae outlined at
http://www.uwm.edu/~ziyu/ctc.pdf, page 5.
First, is there a better exposition of a practical algorithm for do
I have (below) an attempt at an R script to find the limit distribution
of
a continuous-time Markov process, using the formulae outlined at
http://www.uwm.edu/~ziyu/ctc.pdf, page 5.
First, is there a better exposition of a practical algorithm for doing
this? I have not found an R package that
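On the practical-algorithm question: the limit distribution pi of an irreducible continuous-time chain solves pi Q = 0 with sum(pi) = 1, which base R can do directly by replacing one column of the generator with ones, so that column enforces the normalisation. A made-up 3-state generator for illustration:

```r
## Generator matrix Q: rows sum to zero, off-diagonals are rates
Q <- matrix(c(-0.5,  0.3,  0.2,
               0.2, -0.6,  0.4,
               0.1,  0.5, -0.6),
            nrow = 3, byrow = TRUE)

## Solve pi %*% Q = 0 subject to sum(pi) = 1: overwrite one column of Q
## with ones, and put a 1 in the matching slot of the right-hand side
A <- Q
A[, 3] <- 1
pi_hat <- solve(t(A), c(0, 0, 1))

pi_hat         # the stationary distribution
pi_hat %*% Q   # ~ 0, as required
```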