Hi,
Maybe this helps:
lines1 <- readLines(textConnection('text to be ignored...
CDS 687..3158
/gene="AXL2"
/note="plasma membrane glycoprotein"
other text to be ignored...
CDS complement(3300..4037)'))
Hi Map,
I am not sure what you really wanted. Perhaps this helps:
dat <- structure(list(Date = c("2014-01-01 00:00:00", "2014-01-02 11:00:00",
"2014-01-02 22:00:00", "2014-01-03 03:00:00", "2014-01-01 00:00:00",
"2014-02-02 11:00:00", "2014-02-02 22:00:00", "2014-02-03 03:00:00",
"2014-02-01
I have the 5.5 server, but the opt directory contains only two files,
libmysql.dll and libmysql.lib.
Christian
On 06.02.2014 16:29, Gabor Grothendieck wrote:
> On Thu, Feb 6, 2014 at 3:52 PM, Christian Oswald
> wrote:
>> I understand, but in my case the R-Library is in my user-folder.
>> But I have
You must deal with identifying the time zone. I have found that setting the
TZ environment variable appropriately for the data before converting
character values to POSIXct gives me the best results. This is actually
easier for standard-time-only data than for data with daylight saving time
transitions.
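A minimal sketch of that approach (the zone name "EST" is only an example; substitute whatever standard-time-only zone actually matches your data):

```r
## Set TZ *before* converting, so as.POSIXct interprets the character
## values in the data's own zone ("EST" here is just an example).
Sys.setenv(TZ = "EST")
x <- as.POSIXct("2014-01-02 11:00:00")
format(x, "%Y-%m-%d %H:%M:%S %Z")   # printed in EST, no DST shifts
```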
No, you are perfectly fine using WLS. The constant of proportionality is the
estimated error variance, i.e., the square of the residual standard error
(as I think I said earlier).
John
You're right. That was a little hard for me to grasp. Thanks for the
patience.
Hi
I have a multivariate normal distribution in five variables. The
distribution is specified by a vector of means ('means') and a
variance-covariance matrix ('varcov'), both set up as global variables.
I'm trying to figure out the probabilities of each random variable
being the smallest.
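A Monte Carlo sketch of what the poster describes, using made-up values to stand in for the global 'means' and 'varcov' (base R only, simulating via the Cholesky factor; mvtnorm could also compute these probabilities by integrating over the differences X_i - X_j, but that is not shown here):

```r
## Made-up stand-ins for the poster's global variables
means  <- c(0, 0.5, 1, 1.5, 2)
varcov <- diag(5)                      # placeholder covariance matrix

set.seed(42)
n <- 1e5
z <- matrix(rnorm(n * 5), n, 5)        # iid standard normals
draws <- sweep(z %*% chol(varcov), 2, means, `+`)

## Proportion of draws in which each variable is the minimum
p_smallest <- table(factor(max.col(-draws), levels = 1:5)) / n
p_smallest
```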
Thanks. as.POSIXct works for the most part. The only problem is that part of
the data I'm working with has its own time zone. Is there a way to not have
a time zone displayed? My times do not change with daylight saving time.
This function returns date/times without a time zone:
strptime(dates,format="%d/%m/%Y %H:%M")
Hi,
You could use:
strptime(dates, format="%d/%m/%Y %H:%M")[1:2]
#[1] "2013-12-31 00:00:00" "2013-12-31 01:00:00"
as.POSIXlt(dates, format="%d/%m/%Y %H:%M")[1:2]
#[1] "2013-12-31 00:00:00" "2013-12-31 01:00:00"
A.K.
Based on the following code, how can I add a column to this pivot-table
output that will count CaseIDs for each variable sum? CaseID is a factor.
library(reshape)
FLSA_Violation_Reason_melt <- melt(FLSA_ViolRsnfixed,
    id=c("CaseID", "ViolationDesc",
"Reas
On Thu, Feb 6, 2014 at 3:52 PM, Christian Oswald
wrote:
> I understand, but in my case the R-Library is in my user-folder.
> But I have investigated the files needed by RMySQL and found that gcc
> searches for libmysql.lib in MYSQL_HOME\lib\opt, but it is only in lib.
> Copying it to lib/opt solv
I understand, but in my case the R-Library is in my user-folder.
But I have investigated the files needed by RMySQL and found that gcc
searches for libmysql.lib in MYSQL_HOME\lib\opt, but it is only in lib.
Copying it to lib/opt solved the problem.
Thanks,
Christian
On 06.02.2014 14:52, Duncan Murdoch wrote:
On Thu, Feb 6, 2014 at 2:52 PM, Duncan Murdoch wrote:
> On 06/02/2014 1:39 PM, Christian Oswald wrote:
>>
>> Hello,
>>
>> how can I install it in the wrong place?
>>
>> install.packages("RMySQL", type="source") doesn't work correctly?
>
>
> Many Windows users have R installed in "c:\Program Files", and
Dear List
I am trying to use the gam function in mgcv 1.7-26. I have a big data set of
about 40,000 data points. Every time I run it, it results in a GCV score
of 0:
Family: gaussian
Link function: identity
Formula:
D ~ s(Ghazl_res) + s(Depth)
Estimated degrees of freedom:
3.77 2.9
Dear Marco,
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Marco Inacio
> Sent: Thursday, February 06, 2014 12:41 PM
> To: R help
> Subject: Re: [R] proportional weights
>
>
> > I think we can blame Tim Hesterberg for the c
On 06/02/2014 1:39 PM, Christian Oswald wrote:
Hello,
how can I install it in the wrong place?
install.packages("RMySQL", type="source") doesn't work correctly?
Many Windows users have R installed in "c:\Program Files", and normal
users are not allowed to write there. The .libPaths() function will
I am looking for ways to reduce my process size for calls to mclapply. I
have determined that the size of my process is creating too much overhead
for the forking to be faster than a serial run. I am looking for some
techniques to experiment with. I have been getting huge efficiency gains
usi
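One common technique, sketched here with toy data (the object names and the pre-split input are invented for illustration): trim the parent workspace before forking and split the input up front, so each forked worker only touches its own chunk of the copy-on-write memory.

```r
library(parallel)

## Toy input, pre-split so each forked worker only needs its own chunk
big_input <- split(1:1e4, rep(1:4, each = 2500))

## Drop everything else from the parent before forking: forked children
## inherit the parent's memory, so a lean parent keeps the fork cheap
rm(list = setdiff(ls(), "big_input")); invisible(gc())

res <- mclapply(big_input, function(chunk) sum(chunk), mc.cores = 2)
sum(unlist(res))   # same answer as the serial sum(1:1e4)
```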
Because a Date object represents calendar dates, and calendar dates don't
have hours.
Use as.POSIXct() instead of as.Date()
(and spend a little more time with the documentation for as.Date)
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
I would guess the problem is that the file is not where you think it is
or that you spelled the file name and/or path incorrectly.
An easy thing to try is:
read.table(file=file.choose(), header=TRUE, sep="\t")
The function file.choose() will open a window where you'll have to
choose the file f
You could also try:
library(gsubfn)
strapply(gsub("\\d+<|>\\d+","",vec1),"([0-9]+)",as.numeric,simplify=c)
A.K.
On Thursday, February 6, 2014 1:55 PM, arun wrote:
Hi,
One way would be:
vec1 <- c("CDS 3300..4037", "CDS
complement(3300..4037)", "CDS 3300
When starting out I sometimes find it easier to do the following:
Ceosalary<-read.table(file.choose(),sep="\t")
This will give you a dialog box to find the file you want and you won't have to
worry about getting the full path exactly right.
Hth,
Mike
W. Michael Conklin
Executive Vice President
as.Date produces Dates only, with no time information, even if you try
to supply it with hours + minutes.
For dates+times, use as.POSIXct() or as.POSIXlt() in place of
as.Date(). POSIXct produces a numeric value for the number of seconds
since your specified origin time (usually 1970-01-01 00:00),
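The difference in one small example (the format string matches the poster's day/month/year data):

```r
d <- "31/12/2013 13:00"
as.Date(d, format = "%d/%m/%Y %H:%M")      # "2013-12-31": the time is dropped
as.POSIXct(d, format = "%d/%m/%Y %H:%M")   # keeps the 13:00
```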
I got it:
library(rjson)
library(plyr)
test<-fromJSON(file="http://api.census.gov/data/2010/sf1?key=mykey&get=P0030001,NAME&for=county:*&in=state:48")
test2<-ldply(test)[-1,]
names(test2)<-ldply(test)[1,]
head(test2)
P0030001 NAME state county
258458 Anderson County48
Hi everyone, this is my first time using R and I think I'm overlooking
something small and I just need some help seeing it. I have a file in
Notepad that I need to read into R.
> ceosalary<-read.table(file="C:/Users/mz106_010/Desktop/ceosalary.csv",header
> = TRUE,sep="\t")
Error in file(file, "rt"
Hello,
how can I install it in the wrong place?
install.packages("RMySQL", type="source") doesn't work correctly?
Christian
On 06.02.2014 09:28, Duncan Murdoch wrote:
> On 06/02/2014 8:43 AM, Christian Oswald wrote:
>> Hello,
>>
>> I also haven't found a solution for this problem. RMySQL works very
Why am I now getting hours after I convert the date?
dates <- c('31/12/2013 0:00', '31/12/2013 1:00', '31/12/2013 2:00',
'31/12/2013 3:00', '31/12/2013 4:00', '31/12/2013 5:00', '31/12/2013
6:00', '31/12/2013 7:00', '31/12/2013 8:00', '31/12/2013 9:00',
'31/12/2013 10:00', '31/12/2013 1
Hi,
One way would be:
vec1 <- c("CDS 3300..4037", "CDS
complement(3300..4037)", "CDS 3300<..4037", "CDS
join(21467..26641,27577..28890)", "CDS
complement(join(30708..31700,31931..31984))", "CDS 3300<..>4037")
library(s
Hi,
The question is not clear. If it is to get the hours:
strptime(dates, format="%d/%m/%Y %H:%M")$hour
# [1] 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 0
#or
as.numeric(format(as.POSIXct(dates, format="%d/%m/%Y %H:%M"),"%H"))
# [1] 0 1 2 3 4 5 6 7 8 9 10
Dear list,
I've gotten access to the US Census Bureau's developer API for accessing
various datasets they maintain. Here is the link:
http://www.census.gov/developers/
They say that:
"Data are accessible to software developers through a stateless HTTP GET
request. Up to 50 variables can be requested
The problem, as you mention, is that once you create the second plot,
the information from the 1st plot is lost. One option is to create
the first plot, then convert all the points used to create the first
plot into device coordinates rather than user coordinates (using
grconvertX and grconvertY).
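A sketch of that idea with throwaway data (both point sets are invented): save the first plot's points in device coordinates, which remain valid after the second plot resets the user coordinate system.

```r
par(mfrow = c(1, 2))

x1 <- 1:10; y1 <- (1:10)^2
plot(x1, y1)                                  # first plot
dx <- grconvertX(x1, from = "user", to = "device")
dy <- grconvertY(y1, from = "user", to = "device")

plot(1:5, 1:5)                                # second plot: new user coords
## Map the saved device coordinates into the *current* user system,
## e.g. to redraw or highlight the first plot's points later
ux <- grconvertX(dx, from = "device", to = "user")
uy <- grconvertY(dy, from = "device", to = "user")
```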
> Various questions about missing values in the R ncdf package, and how
> they are handled if the file lacks the standard "_FillValue" attribute.
Hi Andre,
It sounds like the fundamental problem is that your data files are using a
missval, but that fact is not recorded in the file's metadata as a
Hi Venkata,
That example reads into R fine for me. I copied and saved it as
tmp.csv and simply read it in with
dat <- read.csv("tmp.csv")
which gave me a data.frame with one row and 78 columns as expected.
This worked in three different environments (linux, mac, windows), and
with different vers
I think we can blame Tim Hesterberg for the confusion:
He writes
"
I'll add:
* inverse-variance weights, where var(y for observation) = 1/weight (as
opposed to just being inversely proportional to the weight) *
"
And, although I'm not a native English speaker, I think there's a spurious
c
On 06/02/2014 10:00 AM, Jeremy Clark wrote:
Dear All,
I would like to be able to associate a list of vectors (one vector of
which is to be called later) with some other character and numeric
data. I've tried two methods:
1) I can put the vector names in quotes into the dataframe, and then
extra
> I would like to be able to associate a list of vectors (one vector of which
> is to
> be called later) with some other character and numeric data.
>
> Probably I'm missing something basic ?
see ?list
Lists accept vectors of different lengths.
S Ellison
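A small sketch of that suggestion (names and values invented): keep the vectors in a named list and store only the name, as a character key, in the data.frame.

```r
## Vectors of different lengths live happily in a named list
vecs <- list(a = c(1, 2, 3), b = c(10, 20, 30, 40))

## The data.frame carries the character key plus the other data
df <- data.frame(id = c("a", "b"), label = c("first", "second"),
                 stringsAsFactors = FALSE)

vecs[[df$id[2]]]   # looks up vector "b": 10 20 30 40
```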
Hi,
Try:
dat <- read.table(text="Emails
mal...@gmail.com
mah...@gmail.com
x...@gmail.com
ravi_...@yahoo.com
lavk@rediff.com
xy@12_g.com",sep="",header=TRUE,stringsAsFactors=FALSE)
vec1 <- gsub("\\.[[:alnum:]]+$","",gsub("^([[:alpha:]]+)(\\d+.*)","\\1_\\2",dat$Emails))
indx1 <- grep("[[:punct
Hi,
I am using a plotting window, splitting it into two and using the identify
function on the plot in the first column to determine which plot in the
second column should be drawn. The first time, this works fine. However,
the second time (when I want to refresh the second plot based on the outpu
Dear All,
I would like to be able to associate a list of vectors (one vector of
which is to be called later) with some other character and numeric
data. I've tried two methods:
1) I can put the vector names in quotes into the dataframe, and then
extract the vector name - but this is just a charac
Hi all,
I'm currently having some trouble with missing values in netCDF files:
I have a netCDF file, written by an unknown program, which via ncdump shows
no _FillValue or missing_value attribute for its variables. Within the
variables, a missing value of 9.96921e+36 is obviously used.
When I read it into R via the ncdf package
Hi,
I am not sure this is what you meant.
a <- read.table(text="1 2 3 4 5 6
1 Mal 1 Layer 22 M 10
2 Mahesh 2 Actor 45 M 15000
3 Tarak 3 Actor 30 M 15000
4 Pawan 4 Actor 47
https://github.com/hadley/devtools/wiki/Reproducibility
http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example
We need some explanation of what "the expected value" is. A mean, perhaps?
John Kane
Kingston ON Canada
I think you should have a look at svyglm() from the survey package.
My two cents
On Wednesday, 5 February 2014 at 14:41 +1300, Rolf Turner wrote:
> You should direct your inquiry to R-help, not to me personally. I am
> taking the liberty of cc-ing my reply back to the list.
>
> I really have
Dear Marco,
What I said in the 2007 r-help posting to which you refer is, "The weights
used by lm() are (inverse-)'variance weights,' reflecting the variances of
the errors, with observations that have low-variance errors therefore being
accorded greater weight in the resulting WLS regression."
> Is there a way to determine which, if any, CRAN packages depend on my CRAN
> package, mondate?
> devtools::revdep("mondate")
[1] "zoo"
If you want to contact the maintainers:
> devtools::revdep_maintainers("mondate")
[1] "Achim Zeileis "
If you want all recursive dependencies:
> length(devt
CRAN lists these on the mondate page
On 7 Feb 2014 01:30, "Dan Murphy" wrote:
> Is there a way to determine which, if any, CRAN packages depend on my CRAN
> package, mondate?
> Thanks,
> Dan Murphy
On 06/02/2014 9:24 AM, Dan Murphy wrote:
Is there a way to determine which, if any, CRAN packages depend on my CRAN
package, mondate?
You want the "reverse dependencies". CRAN lists those on the page of
each package; for yours, it says that zoo suggests it. It uses the
dependsOnPkgs() function
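For what dependsOnPkgs() itself does, a minimal sketch. Note that it only inspects packages *installed locally*, so "stats" is used here as a stand-in; CRAN-wide reverse dependencies are what the package's CRAN page (and devtools::revdep, shown earlier in the thread) report.

```r
library(tools)
## Which locally installed packages would be affected if "stats" changed?
rev <- dependsOnPkgs("stats")
head(rev)
```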
On 06/02/2014 8:43 AM, Christian Oswald wrote:
Hello,
I also haven't found a solution for this problem. RMySQL works very well
under Linux but not under Windows. You can try RODBC.
RMySQL works fine for me in Windows.
The most common problem people have installing packages in Windows is
that t
Is there a way to determine which, if any, CRAN packages depend on my CRAN
package, mondate?
Thanks,
Dan Murphy
Thanks for the answers.
Dear Marco and Goran,
Perhaps the documentation could be clearer, but it is after all a brief help
page. Using weights of 2 to lm() is *not* equivalent to entering the
observation twice. The weights are variance weights, not case weights.
According to your post here:
Dear John,
thanks for the clarification! The lesson to be learned is that one
should be aware of the fact that weights may mean different things in
different functions, and sometimes different things in the same function
(glm)!
Göran
On 02/06/2014 02:17 PM, John Fox wrote:
Dear Marco and G
Hello,
I also haven't found a solution for this problem. RMySQL works very well
under Linux but not under Windows. You can try RODBC.
Christian
On 05.02.2014 14:37, Peretz, Eliran wrote:
>
> Hi ,
>
> I read your post and followed your instructions but still couldn't install
> RMySQL by getting
Hello
I would like to map the population of the European countries in 2011.
I am using the spatial shapefiles of Europe published by EUROSTAT.
I applied the script below by Markus Kainu, but I had a problem with
the map.
Any ideas ?
Thanks for your help
##
Dear Marco and Goran,
Perhaps the documentation could be clearer, but it is after all a brief help
page. Using weights of 2 to lm() is *not* equivalent to entering the
observation twice. The weights are variance weights, not case weights.
You can see this by looking at the whole summary() outpu
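The distinction can be checked directly; a small sketch with simulated data:

```r
set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10)

fit_w <- lm(y ~ x, weights = c(2, rep(1, 9)))   # variance weight 2 on obs 1
fit_d <- lm(c(y[1], y) ~ c(x[1], x))            # obs 1 entered twice

## The point estimates coincide ...
all.equal(unname(coef(fit_w)), unname(coef(fit_d)))
## ... but the residual degrees of freedom (and hence sigma and the
## standard errors) do not: 8 for the weighted fit, 9 for the duplicated one
c(df.residual(fit_w), df.residual(fit_d))
```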
Hi Pascal,
It worked!
Thanks a lot :-)
Soumyadeep
From: skalp.oet...@gmail.com [skalp.oet...@gmail.com] On Behalf Of Pascal
Oettli [kri...@ymail.com]
Sent: Thursday, February 06, 2014 3:04 AM
To: Soumyadeep Nandi
Cc: r-help@r-project.org
Subject: Re: [R]
Hello,
I might be wrong, but I think it is in degrees. Let's consider the
length of a degree of longitude at 38N in kilometers: ~88 km.
Thus, 40 degrees * 88 km = 3520 km.
Hope this helps,
Pascal
On 5 February 2014 00:43, Alicia wrote:
> Dear R-help,
>
> I used the correlog function of pgirmess package t
Dear Ista,
I copied my data below
UNIQUEID,FINDINGSID,ORGNUMRES,STNUMRES,CONVRES,VISITDY,ORGCHARRES,STCHARRES,NOMINALDAY,NOMINALDATE,MEASRMTDAY,MEASRMTDATE,INPUTDATE,NEOPLASMNAME,TUMORCLASSNAME,CATDOMAIN,CATDID,SPECIMENTYP,SPTDID,PCDOMAIN,USUBJID,PCDID,TESTDOMAIN,TSTDID,ORRESUNIT,RESDID,SUBJECTSID
On 05/02/14 22:40, Marco Inacio wrote:
Hello all, can help clarify something?
According to R's lm() doc:
Non-NULL weights can be used to indicate that different observations
have different variances (with the values in weights being inversely
*proportional* to the variances); or equivalently,