Hi Jan;
Thanks so much. It is much appreciated. The problem has been solved.
Regards,
Greg
On Mon, Sep 24, 2018 at 3:05 PM Jan T Kim wrote:
> hmm... I don't see the quote="" parameter in your read.csv call
>
>
> Best regards, Jan
> --
> Sent from my mobile. Apologies for typos and terseness
>
hmm... I don't see the quote="" parameter in your read.csv call
Best regards, Jan
--
Sent from my mobile. Apologies for typos and terseness
On Mon, Sep 24, 2018, 20:40 greg holly wrote:
> Hi Jan;
>
> Thanks so much for this. Yes, I did. Here is my code to read
> data: a<-read.csv("for_R_graphs.
Hi Jan;
Thanks so much for this. Yes, I did. Here is my code to read
data: a<-read.csv("for_R_graphs.csv", header=T, sep=",")
On Mon, Sep 24, 2018 at 2:07 PM Jan T Kim via R-help
wrote:
> Yet one more: have you tried adding quote="" to your read.table
> parameters? Quote characters have a 50% ch
Hi Bert;
Thanks for writing. Here are my answers to your questions:
Regards,
Greg
1. What is your OS? What is your R version? *The version is 3.5.0*
2. How do you know that your data has 151 rows? *Because I looked in Excel;
also, I work on the same data in SAS.*
3. Are there stray chara
Yet one more: have you tried adding quote="" to your read.table
parameters? Quote characters have a 50% chance of being balanced,
and they can encompass multiple lines...
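For anyone following along, a minimal sketch of this suggestion, reusing the file name from Greg's read.csv call (the expected row count is the 151 asked about elsewhere in the thread):

# Hedged sketch: disable quote handling so a stray apostrophe or double
# quote cannot swallow several lines into one field.
a <- read.csv("for_R_graphs.csv", header = TRUE, sep = ",", quote = "")
nrow(a)   # should now match the expected number of rows (151 in this thread)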
On Mon, Sep 24, 2018 at 11:40:47AM -0700, Bert Gunter wrote:
> One more question:
>
> 5. Have you tried shutting down, restart
One more question:
5. Have you tried shutting down, restarting R, and rereading?
-- Bert
On Mon, Sep 24, 2018 at 11:36 AM Bert Gunter wrote:
> *Perhaps* useful questions (perhaps *not*, though):
>
> 1. What is your OS? What is your R version?
> 2. How do you know that your data has 151 rows?
>
*Perhaps* useful questions (perhaps *not*, though):
1. What is your OS? What is your R version?
2. How do you know that your data has 151 rows?
3. Are there stray characters -- perhaps a stray eof -- in your data? Have
you checked around row 96 to see what's there?
4. Are the data you did get in R
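A hedged sketch of how one might check question 3 (stray characters around row 96); the file name is taken from Greg's earlier message:

# Inspect the raw lines near row 96 and compare field counts per line;
# an unbalanced quote or stray character shows up as an uneven count.
raw <- readLines("for_R_graphs.csv")
raw[94:98]
table(count.fields("for_R_graphs.csv", sep = ",", quote = ""))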
Hi Jim,
With a little digging on my side, I have found the issue as to why the
script is skipping that file. The file is "ISO-8859 text, with CRLF
line terminators"
The file should be ASCII, so I converted it using dos2unix and the CRLF line
terminators were eliminated, but I still cannot read it. How can
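The question is cut off here; a minimal sketch of one way to handle the encoding directly, assuming the file really is Latin-1 (ISO-8859-1); the file name is a placeholder:

# Declare the encoding when reading instead of rewriting the file on disk.
dat <- read.table("input.txt", header = TRUE, fileEncoding = "latin1",
                  stringsAsFactors = FALSE)
str(dat)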
You need to provide reproducible data. What does the file contain? Why
are you using 'sep=' when reading a fixed format? You might be able to
attach the '.txt' to your email to help with the problem. Also, you did not
state what differences you are seeing. So help us out here.
Jim Holt
Try asking on R-sig-geo mailing list
Also, state what package(s) you are using, and include what you have already
tried.
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062
On 1/19/17, 10:53 AM, "R-help on behalf of lily li"
w
Hi Ed,
I'm not sure I understand, but can't you read the files one by one and
create one data.frame using rbind? It's easy to do in a loop too (a sketch
follows after this message).
Best wishes,
Ulrik
On Thu, 2 Jun 2016, 20:23 Ed Siefker, wrote:
> I have many data files named like this:
>
> E11.5-021415-dko-1-1-masked-bottom-a
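A minimal sketch of the loop-and-rbind approach Ulrik describes; the file pattern comes from Ed's example name, and read.delim is an assumption about the format:

files <- list.files(pattern = "^E11\\.5-.*masked.*", full.names = TRUE)
dfs <- lapply(files, read.delim)      # read each file into a data.frame
combined <- do.call(rbind, dfs)       # stack them into one data.frame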
Thanks, Dan.
Your code works fine. But I have tens of countries (UK, JP, BR, US, ...),
each of which has ten columns a1, a2, ..., a10 of data. So a little more
automation is needed.
I have been trying to make a list of each country's data and use something
like sapply to get
UK JP
2009 Q2
Not a guru, but this isn't that hard. The following works with your sample
data. It shouldn't be too difficult to modify for your full file.
library(zoo)
df <- read.table('path_to_your_data', sep=';', skip=2, as.is=TRUE)
str(df)
substr(df$V1,5,5) <- '-'
df$V1 <- as.yearqtr(substr(df$V1,1,6))
df
On Tue, Nov 18, 2014 at 9:42 PM, Upananda Pani wrote:
> Dear All,
>
> I want to read the my time series data using XTS package and then to
> calculate returns using the PerformanceAnalytics package but I am getting the
> following error. Please help me to solve the problem. The error follows:
>
> # Req
You did not read the data with the commands you provided since c1 is not
defined so read.fwf() fails immediately. Here is a solution that works for the
link you provided, but would need to be modified for months that do not have 30
days:
> lnk <-
> "http://www.data.jma.go.jp/gmd/env/data/radia
On Tue, 22 Apr 2014, William Dunlap wrote:
For me that other software would probably be Octave. I'm interested if
anyone here has read in these files using Octave, or a C program or
anything else.
I typed 'octave read binary file' into google.com and the first hit was
the Octave help file f
> For me that other software would probably be Octave. I'm interested if
> anyone here has read in these files using Octave, or a C program or
> anything else.
I typed 'octave read binary file' into google.com and the first hit was
the Octave help file for its fread function. In C fread is also
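For R readers, the closest analogue is readBin(); a minimal sketch reading doubles from a hypothetical raw binary file:

con <- file("data.bin", "rb")
vals <- readBin(con, what = "double", n = 1e6, size = 8, endian = "little")
close(con)
head(vals)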
I got it:
library(rjson)
library(plyr)
test<-fromJSON(file=url("http://api.census.gov/data/2010/sf1?key=mykey&get=P0030001,NAME&for=county:*&in=state:48"))
test2<-ldply(test)[-1,]
names(test2)<-ldply(test)[1,]
head(test2)
P0030001 NAME state county
258458 Anderson County48
Wireless 4G LTE Smartphone
>
>
>
> Original message
> From: Baro
> Date: 11/04/2013 09:26 (GMT-05:00)
> To: "Adams, Jean"
> Cc: R help
> Subject: Re: [R] Reading data from Excel file in r
>
>
> thanks a lot, but now I have another problem:
Cc: R help
Subject: Re: [R] Reading data from Excel file in r
thanks a lot, but now I have another problem: my Excel file is very big and
I get this error, which says:
Error: OutOfMemoryError (Java): Java heap space
Is there any way to read each value one by one and save them in an array
thanks, I changed my code, but still have the same problem :/
On Mon, Nov 4, 2013 at 6:49 AM, Adams, Jean wrote:
> Perhaps the discussion at this link will help ... (see especially the
> second answer).
>
>
> http://stackoverflow.com/questions/7963393/out-of-memory-error-java-when-using-r-and-x
Perhaps the discussion at this link will help ... (see especially the
second answer).
http://stackoverflow.com/questions/7963393/out-of-memory-error-java-when-using-r-and-xlconnect-package
Jean
On Mon, Nov 4, 2013 at 8:26 AM, Baro wrote:
>
> thanks a lot, but now I have another problem: my Exc
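One commonly suggested remedy, and likely what the linked Stack Overflow answer describes, is to enlarge the Java heap before rJava initialises; a hedged sketch (the file path and heap size are placeholders):

options(java.parameters = "-Xmx4g")   # must run before XLConnect/rJava is loaded
library(XLConnect)
wb  <- loadWorkbook("C:/temp/MyData.xls")
dat <- readWorksheet(wb, sheet = 1)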
thanks a lot, but now I have another problem: my Excel file is very big and
I get this error, which says:
Error: OutOfMemoryError (Java): Java heap space
Is there any way to read each value one by one and save them in an array?
On Mon, Nov 4, 2013 at 6:13 AM, Adams, Jean wrote:
> You can use t
You can use the XLConnect package to read in a range of rows and columns,
then define a function to subset the odd rows. For example,
library(XLConnect)
wb <- loadWorkbook("C:/temp/MyData.xls")
dat <- readWorksheet(wb, sheet=getSheets(wb)[1], startRow=1, endRow=139,
startCol=5, endCol=5)
dat <- r
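Jean's last line is cut off in the archive; a hedged sketch of the odd-row subsetting she describes (not her exact code):

odd_rows <- function(d) d[seq(1, nrow(d), by = 2), , drop = FALSE]
dat_odd  <- odd_rows(dat)   # keep rows 1, 3, 5, ...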
Take a look at the "XLConnect" package. I use it for all the
reading/writing for Excel files.
Jim Holtman
Data Munger Guru
What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.
On Mon, Nov 4, 2013 at 8:47 AM, Baro wrote:
> Hi experts,
>
> I
Hi,
It would be better to give an example.
If your dataset is like the one attached:
con<-file("Trial1.txt")
Lines1<- readLines(con)
close(con)
#If the data you wanted to extract is numeric and the header and footer are
characters,
dat1<-read.table(text=Lines1[-grep("[A-Za-z]",Lines1)],sep="\t",
Hi,
I tried to read your data from the image:
OPENCUT<- read.table("OpenCut.dat",header=TRUE,sep="\t")
OPENCUT
FC LC SR DM
1 400030.34 1323.5 0 400
2 12680.13 2.5 0 180
3 472272.75 2004.7 3 300
4 332978.03 1301.3 106 180
5 98654.20 295.0 0 180
6 68142.05 259.9
Hi,
Try this:
files<-paste("MSMS_",23,"PepInfo.txt",sep="")
read.data<-function(x) {names(x)<-gsub("^(.*)\\/.*","\\1",x);
lapply(x,function(y) read.table(y,header=TRUE,sep =
"\t",stringsAsFactors=FALSE,fill=TRUE))}
lista<-do.call("c",lapply(list.files(recursive=T)[grep(files,list.files(recursiv
Hi,
I am not able to open your graph. I am using Linux.
Also, the code in the function is not reproducible:
directT <- direct[grepl("^t", direct)]
directC <- direct[grepl("^c", direct)]
It takes twice as long to work out what is going on.
dir()
#[1] "a1" "a2" "a3" "b1" "b2" "c1"
direct<- list
Hi Vera,
Not sure I understand your question.
Your statement
"In my lista I can´t merge rows to have the group, because the idea is
for each file count frequencies of mm, when b<0.01. after that I
want a graph like the graph in attach."
files<-paste("MSMS_",23,"PepInfo.txt",sep="")
read.dat
Hi Vera,
No problem. I am cc:ing to r-help.
A.K.
From: Vera Costa
To: arun
Sent: Sunday, February 17, 2013 5:44 AM
Subject: Re: reading data
Hi. Thank you. It works now:-)
And yes, I use windows.
Thank you very much.
No dia 17 de Fev de 2013 00:44, "ar
Hi,
Try putting quotes, i.e.
res<- do.call("c",...)
A.K.
From: Vera Costa
To: arun
Sent: Saturday, February 16, 2013 7:10 PM
Subject: Re: reading data
Thank you.
In mine, I have an error " 'what' must be a character string or a function".
I need to do
Hi,
No problem.
See ?c() for concatenating to a vector or a list().
If I use do.call(cbind,..) or do.call(rbind,...)
do.call(cbind,lapply(list.files(recursive=T)[grep("m11kk",list.files(recursive=T))],function(x)
{names(x)<-gsub("^(.*)\\/.*","\\1",x); lapply(x,function(y)
read.table(y,header=TRUE,stri
Hi,
Just to add:
res<-do.call(c,lapply(list.files(recursive=T)[grep("m11kk",list.files(recursive=T))],function(x)
{names(x)<-gsub("^(.*)\\/.*","\\1",x); lapply(x,function(y)
read.table(y,header=TRUE,stringsAsFactors=FALSE,fill=TRUE))})) #it seems like
one of the rows of your file doesn't
Hi,
#working directory data1 #changed name data to data1. Added some files in each
of sub directories a1, a2, etc.
indx1<- indx[indx!=""]
lapply(indx1,function(x) list.files(x))
#[[1]]
#[1] "a1.txt" "m11kk.txt"
#[[2]]
#[1] "a2.txt" "m11kk.txt"
#[[3]]
#[1] "a3.txt"
Hi !
You need to assign the output of read.table() into an object; this is
how R works:
mydata <- read.table ("mydata1.csv", sep=",", header=T)
mymean <- mean(mydata$var)
You should read some introductory material.
I found this useful:
http://www.burns-stat.com/pages/Tutor/hints_R_begin.html
What was the exact syntax?
read.table> ("mydata1.csv", sep=",", header=T)
will read the data but not save anything.
mydat <-read.table ("mydata1.csv", sep=",", header=T)
gives you a data.frame called mydat.
mean(mydat$X) should give you the mean of X
John Kane
Kingston ON Canada
> -O
You need to assign your data set to something -- right now you're just
reading it in and then throwing it away:
dats <- read.csv("mydata1.csv")
mean(dats$X) # Dollar sign, not ampersand
Best,
Michael
On Tue, May 15, 2012 at 8:57 AM, jacaranda tree wrote:
> Hi I am really new using R, so this i
hello,
The error message is right: you have read the file but have NOT assigned it to
an object, i.e. to a variable.
mydata1 <- read.table ("mydata1.csv", sep=",", header=T)
Now you can use the variable 'mydata1'. It's a data.frame, and you can see
what it looks like with the following instructions.
st
Well, if your problem is that a workspace is being loaded automatically
and you don't want that workspace, you have several options:
1. Use a different directory for each project so that the file loaded
by default is the correct one.
2. Don't save your workspace, but regenerate it each time.
3.
Thanks Sarah. I have read about the problems with attach(), and I
will try to avoid it.
I have now found that the line causing the problem is:
>setwd("z:/homework")
With that line in place, either in a program or in Rprofile.site (?),
then the moment I run R and simply enter (before reading a
Hi,
The obvious answer is don't use attach() and you'll never have
that problem. And see further comments inline.
On Tue, Nov 15, 2011 at 6:05 PM, Steven Yen wrote:
> Can someone help me with this variable/data reading issue?
> I read a csv file and transform/create an additional variable (calle
A follow-up on the data/variable issue I posted earlier:
Here is what I did, which was obviously causing the problem:
I inserted the following line in my file "Rprofile.site":
setwd("z:/R")
Then, as soon as I run R (before I read any data), I issue
summary(mydata)
and I get summary statis
Got it. Thanks!
On Mon, Oct 17, 2011 at 9:40 AM, Prof Brian Ripley wrote:
> On Mon, 17 Oct 2011, Brian Smith wrote:
>
> Hi,
>>
>> I had a large file for which I require a subset of rows. Instead of
>> reading
>> it all into memory, I use the awk command to get the relevant rows.
>> However,
>> I
On Mon, 17 Oct 2011, Brian Smith wrote:
Hi,
I had a large file for which I require a subset of rows. Instead of reading
it all into memory, I use the awk command to get the relevant rows. However,
I'm doing it pretty inefficiently as I write the subset to disk, before
reading it into R. Is ther
On Mon, Oct 17, 2011 at 9:23 AM, Brian Smith wrote:
> Hi,
>
> I had a large file for which I require a subset of rows. Instead of reading
> it all into memory, I use the awk command to get the relevant rows. However,
> I'm doing it pretty inefficiently as I write the subset to disk, before
> readi
I'm quite sure that somebody there should be able to help.
Rainer
>
> Regards,
>
> Esteban
>
>
>
>
> From: David Winsemius [mailto:dwinsem...@comcast.net] Sent:
> Wed 21/09/2011 17:08 To: ESTEBAN ALFARO CORTES CC:
> r-he
Thanks Cesar,
Any idea for this contents of the file?
;; positive examples represent people that were granted credit
(def-pred credit_screening :type (:person)
:pos
((s1) (s2) (s4) (s5) (s6) (s7) (s8) (s9) (s14) (s15) (s17) (s18) (s19)
(s21) (s22) (s24) (s28) (s29) (s31) (s32) (s3
: ESTEBAN ALFARO CORTES
CC: r-help@r-project.org
Subject: Re: [R] Reading data in lisp format
If you think that R is loosely typed, then examining LiSP code will
change your mind, or at least give you a new data point further out on
the "Loose-Tight" axis. I think you will need to do the
If you think that R is loosely typed, then examining LiSP code will
change your mind, or at least give you a new data point further out on
the "Loose-Tight" axis. I think you will need to do the processing "by
hand".
The organization of the data is fairly clear. There are logical
colum
On 21/9/2011 07:39, ESTEBAN ALFARO CORTES wrote:
Hi,
I am trying to read the "credit.lisp" file of the Japanese credit database in
the UCI repository, but it is in Lisp format, which I do not know how to read. I
have not found how to do that in the foreign library
http://archive.ics.uci.edu
If you know how many lines to skip, you can set skip=xx in read.table.
The question is what you can do if you have variable lines to skip in
various files but you have characters indicating the beginning of the
data, like ~A. What you can do is get the file in using readLines,
use grep to find the
use readLines to read in the entire file, find your pattern of where your data
starts and then write the data starting there using writeLines to a temporary
file and now you can just read in that file using read.table; you will have
'skipped' the extra header data.
Sent from my iPad
On Aug 30,
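A minimal sketch of the readLines/writeLines approach described above; the "~A" marker comes from the earlier message, while the file names are placeholders:

lns   <- readLines("raw_input.txt")
start <- grep("^~A", lns)[1]               # first line of the real data
tmp   <- tempfile(fileext = ".txt")
writeLines(lns[start:length(lns)], tmp)    # drop the variable-length header
dat   <- read.table(tmp, header = FALSE)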
Hi Duncan
Your method works well for my situation when I make only one call to the
database/URL with the login info. Our database is configured like the
first situation (cookies) that you described below. Now, I will need to
make multiple successive calls to get data for different sites in the
Hi Steve
RCurl can help you when you need to have more control over Web requests.
The details vary from Web site to Web site and the different ways to specify
passwords, etc.
If the JSESSIONID and NCES_JSESSIONID are regular cookies and returned in the
first
request as cookies, then you can ju
Greg that's it!
Thank you thank you thank you
So simple in the end?
> From: greg.s...@imail.org
> To: h_a_patie...@hotmail.com; r-help@r-project.org
> Date: Tue, 31 May 2011 10:27:13 -0600
> Subject: RE: [R] Reading Data from mle into excel?
>
y 31, 2011 9:40 AM
> To: r-help@r-project.org
> Subject: Re: [R] Reading Data from mle into excel?
>
>
> Hi Greg,
>
> I have about 40 time series each of which I have to run a separate MLE
> on. I will be experimenting with different starting values for the
> parameter
cal Data Center
Intermountain Healthcare
[hidden email]
801.408.8111
> -Original Message-
> From: [hidden email] [mailto:r-help-bounces@r-
> project.org] On Behalf Of Bazman76
> Sent: Tuesday, May 31, 2011 9:04 AM
> To: [hidden email]
> Subject: Re: [R] Reading Data
Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Bazman76
> Sent: Tuesday, May 31, 2011 9:04 AM
> To: r-help@r-project.o
Can I use sink() to transfer the MLE results, which are an S4-type object, to a
text file?
Can someone show me how to do this?
--
View this message in context:
http://r.789695.n4.nabble.com/Reading-Data-from-mle-into-excel-tp3545569p3563385.html
Sent from the R help mailing list archive at Nabble.
thanks for all your help
I have taken a slightly different route but I think I am getting there
library(plyr)
#setwd("C:/Documents and Settings/Hugh/My Documents/PhD")
#files<-list.files("C:/Documents and Settings/Hugh/My
Documents/PhD/",pattern="Swaption Vols.csv")
#vols <- lapply(files, read
I would read the datasets into a list first, something like this which will
make a list of dataframes:
filenames <- dir() # where only filenames you want to read in are in this
directory
dataframelist <- lapply(filenames, read.csv, header = TRUE, sep = ",")
You should be able to put the whol
Hi Scott,
Thanks for this.
Got some questions below:
Thanks
Hugh
Date: Mon, 23 May 2011 17:32:52 -0500
From: scttchamberla...@gmail.com
To: h_a_patie...@hotmail.com
CC: r-help@r-project.org
Subject: Re: [R] Reading Data from mle into excel?
I would read the datasets into a list
I think cognizance should be taken of fortune("very uneasy").
cheers,
Rolf Turner
Hi:
This isn't too hard to do. The strategy is basically this:
(1) Create a list of file names. (See ?list.files for some ideas)
(2) Read the data files from (1) into a list.
(3) Create a function to apply to each data frame in the list.
(4) Apply the function to each data frame.
(5) Extract the
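A minimal sketch of the five-step strategy above; the file pattern, reader, and summary function are all assumptions:

files <- list.files(pattern = "\\.csv$", full.names = TRUE)  # (1) file names
dats  <- lapply(files, read.csv)                             # (2) read into a list
f     <- function(d) colMeans(d, na.rm = TRUE)               # (3) per-data.frame function
res   <- lapply(dats, f)                                     # (4) apply it
out   <- do.call(rbind, res)                                 # (5) extract/combine results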
try this:
input <- readLines(textConnection("a1 89 2 79 392
b 3 45 4 65"))
closeAllConnections()
# now parse each line to create a dataframe with each row being the score
result <- NULL
for
Hi
r-help-boun...@r-project.org wrote on 18.06.2010 14:00:47:
> Surely you could also save the excel spreadsheet with the relevant data
as a
> text file, and then read it into R as normal?
> Select "save as" in Excel and then change "save as type" to "Text (Tab
> delimited)(*.txt)".
>
> Sa
If you're on Windows and you never installed Perl, then you don't have
it. Another easy way to find out is to type "perl" in the search
window under the start menu. If there's no perl.exe on your computer,
you don't have it.
Take a look at: http://www.perl.org/
If you download Perl, it doesn't r
Surely you could also save the excel spreadsheet with the relevant data as a
text file, and then read it into R as normal?
Select "save as" in Excel and then change "save as type" to "Text (Tab
delimited)(*.txt)".
Save it in the directory you are using in R, (or change the directory in R to
w
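A minimal sketch of reading the exported sheet back in, assuming it was saved as "mydata.txt" in the working directory:

dat <- read.delim("mydata.txt", header = TRUE)  # read.delim defaults to sep = "\t"
str(dat)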
Hi
r-help-boun...@r-project.org wrote on 16.06.2010 22:14:33:
> Thanks for your reply. Possibly I do not have Perl. I am not sure,
> though.
> How can I find out whether I have it? If I don't have it, where can I
> download it from?
Do you have Excel? If yes you can
Open Excel
Select data you w
Thanks for your reply. Possibly I do not have Perl. I am not sure, though.
How can I find out whether I have it? If I don't have it, where can I
download it from?
On Thu, Jun 17, 2010 at 12:57 AM, Barry Rowlingson <
b.rowling...@lancaster.ac.uk> wrote:
> On Wed, Jun 16, 2010 at 7:29 PM, Christofe
On Wed, Jun 16, 2010 at 7:29 PM, Christofer Bogaso
wrote:
> Can anyone help me with reading an xls file into R? I have tried the following
>
> library(gdata)
> xlsfile <- file.path(.path.package('gdata'),'xls','iris.xls')
> read.xls(xlsfile)
>
> I got following error:
> Converting xls file to csv file...
On Wed, Jun 16, 2010 at 2:29 PM, Christofer Bogaso
wrote:
> Can anyone help me with reading an xls file into R? I have tried the following
>
> library(gdata)
> xlsfile <- file.path(.path.package('gdata'),'xls','iris.xls')
> read.xls(xlsfile)
>
> I got following error:
> Converting xls file to csv file...
Ah, I should have mentioned this. Personally I work on Macs (Leopard)
and PCs (XP Pro and XP Pro x64). Even though the PCs do have Cygwin,
I'm trying to make this code portable. So I want to avoid such things as
sed, perl, etc.
I want to do this in R, even if processing is a bit slower. Event
I tried to shoehorn the read.* functions into matching both the fixed-width
and the variable-width fields
in the data, but it doesn't seem evident to me. (read.fwf reads fixed-width
data properly but the rest
of the fields must be processed separately -- maybe insert NULL stubs in the
remaining fields a
Here is a continuation to turn DF into a zoo series: It depends on
the fact that all NAs are structural, i.e. they indicate dates which
cannot exist such as Feb 31 as opposed to missing data. dd is the
data as one long series with component names being the dates in the
indicated format. That is
On Feb 27, 2010, at 6:17 PM, Phil Spector wrote:
Tim -
I don't understand what you mean about interleaving rows. I'm
guessing
that you want a single large data frame with all the data, and not a
list with each year separately. If that's the case:
x = read.table('http://climate.arm.ac.
Tim -
I don't understand what you mean about interleaving rows. I'm guessing
that you want a single large data frame with all the data, and not a
list with each year separately. If that's the case:
x =
read.table('http://climate.arm.ac.uk/calibrated/soil/dsoil100_cal_1910-1919.dat',
On Feb 27, 2010, at 4:33 PM, Gabor Grothendieck wrote:
No one else posted so the other post you are referring to must have
been an email to you, not a post. We did not see it.
By one off I think you are referring to the row names, which are
meaningless, rather than the day numbers. The data
Sorry, I forgot to cc the group:
Tim -
Here's a way to read the data into a list, with one entry per year:
x =
read.table('http://climate.arm.ac.uk/calibrated/soil/dsoil100_cal_1910-1919.dat',
header=FALSE,fill=TRUE,skip=13)
cts = apply(x,1,function(x)sum(is.na(x)))
wh = whic
No one else posted so the other post you are referring to must have
been an email to you, not a post. We did not see it.
By one off I think you are referring to the row names, which are
meaningless, rather than the day numbers. The data for day 1 is
present, not missing. The example code did re
Thanks, Gabor. My take away from this and Phil's post is that I'm
going to have to construct some code to do the parsing, rather than
use a standard function. I'm afraid that neither approach works, yet:
Gabor's has an off-by-one error (days start on the 2nd, not the
first), and the ye
Mark Leeds pointed out to me that the code wrapped around in the post
so it may not be obvious that the regular expression in the grep is
(i.e. it contains a space):
"[^ 0-9.]"
On Sat, Feb 27, 2010 at 7:15 AM, Gabor Grothendieck
wrote:
> Try this. First we read the raw lines into R using grep t
Try this. First we read the raw lines into R using grep to remove any
lines containing a character that is not a number or space. Then we
look for the year lines and repeat them down V1 using cumsum. Finally
we omit the year lines.
myURL <- "http://climate.arm.ac.uk/calibrated/soil/dsoil100_cal
On Wed, Oct 28, 2009 at 1:08 PM, David Winsemius wrote:
>
> On Oct 28, 2009, at 12:21 PM, Val wrote:
>
>
>
> On Wed, Oct 28, 2009 at 11:59 AM, David Winsemius
> wrote:
>
>>
>> On Oct 28, 2009, at 11:46 AM, Val wrote:
>>
>> Val, please take it slow, you are missing basic stuff here.
>>>
On Oct 28, 2009, at 12:21 PM, Val wrote:
>
>
> On Wed, Oct 28, 2009 at 11:59 AM, David Winsemius > wrote:
>
> On Oct 28, 2009, at 11:46 AM, Val wrote:
>
> Val, please take it slow, you are missing basic stuff here.
>
> (1) Windows Explorer may hide extensions; the 'Type' column should
> read 'R
On Wed, Oct 28, 2009 at 11:59 AM, David Winsemius wrote:
>
> On Oct 28, 2009, at 11:46 AM, Val wrote:
>
> Val, please take it slow, you are missing basic stuff here.
>>
>>>
>>> (1) Windows Explorer may hide extensions; the 'Type' column should
>>> read 'R file' anyway.
>>>
>>>
>> * Yes I looked
On Oct 28, 2009, at 11:46 AM, Val wrote:
Val, please take it slow, you are missing basic stuff here.
(1) Windows Explorer may hide extensions; the 'Type' column should
read 'R file' anyway.
* Yes I looked at it and it only shows type. To check I downloaded
another script with R extens
David Winsemius wrote:
On Oct 28, 2009, at 10:55 AM, Val wrote:
The working directory is
getwd()
[1] "C:/Documents and Settings/Val/My Documents"
The data file(Rossi.dat) and the script(Rossi.R) are in
"C:/Documents and Settings/Val/My Documents/R_data/prd"
So you are not giving a prope
Val, please take it slow, you are missing basic stuff here.
>
> (1) Windows Explorer may hide extensions; the 'Type' column should
> read 'R file' anyway.
>
* Yes I looked at it and it only shows type. To check I downloaded
another script with R extension "test.R" and the type column shows th
On Oct 28, 2009, at 10:55 AM, Val wrote:
The working directory is
getwd()
[1] "C:/Documents and Settings/Val/My Documents"
The data file(Rossi.dat) and the script(Rossi.R) are in
"C:/Documents and Settings/Val/My Documents/R_data/prd"
So you are not giving a proper path when you issue the
Val, please take it slow, you are missing basic stuff here.
(1) Windows Explorer may hide extensions; the 'Type' column should
read 'R file' anyway.
(2) Script files are included in your workspace with the command source().
Please type ?source for details.
(3) You should call files with their pat
The working directory is
> getwd()
[1] "C:/Documents and Settings/Val/My Documents"
The data file(Rossi.dat) and the script(Rossi.R) are in
"C:/Documents and Settings/Val/My Documents/R_data/prd"
How should I write to read the file?
source(???) # what should be included here?
Rossi <- re
On Oct 28, 2009, at 10:04 AM, Val wrote:
Hi Users,
This might be a simple question but it is giving me a hard time as I am a
new user.
I installed R version 2.9.2 (2009-08-24)
1. I just copied a short script from Fox (2002) as practice and wanted
to save it as Rossi.R. How?
The
Hi Val,
Windows does not display extensions by default. Check the 'Type'
column; it should read 'R file'.
Keep in mind what you are dealing with; Rossi.R is a script, so you
cannot open it with read.table. You have to use source() for that.
Moreover, use the extension, as well (Rossi.R, not Rossi
Hi Val,
I am not sure what it is that you are trying to do.
"read.table"
Is not used to open an R script, but to open a data file.
You will also need to give the extension of the file when using the command
(someone please correct me if I am wrong).
If you wish to open an R script, I would just u
On Fri, Sep 25, 2009 at 10:18 AM, Henrik Bengtsson
wrote:
> You can use R.utils (on CRAN) to help you figure out why the file is
> not found or not readable.
>
> library("R.utils");
> pathname <- "C:/Documents and Settings/ashta/My Documents/R_data/rel.dat";
> pathname <- Arguments$getReadablePath
You can use R.utils (on CRAN) to help you figure out why the file is
not found or not readable.
library("R.utils");
pathname <- "C:/Documents and Settings/ashta/My Documents/R_data/rel.dat";
pathname <- Arguments$getReadablePathname(pathname);
rel <- read.table(pathname, quote="", header=FALSE, sep
Sometimes it is easiest to open a file using a file selection
widget. I keep this in my .Rprofile:
getOpenFile <- function(...){
require(tcltk)
return(tclvalue(tkgetOpenFile()))
}
With this you can find your file and open it with
rel <- read.table(getOpenFile(), quote="", header=FALSE, s
On 09/23/2009 10:42 PM, Ashta wrote:
Dear R-users,
I am a new user of R. I am eager to learn about it.
I wanted to read and summarize a simple data file
I used the following,
rel<- read.table("C:/Documents and Settings/ashta/My
Documents/R_data/rel.dat", quote="",header=FALSE