I am trying to estimate home range size using two different methods in the
adehabitat package, but I am slightly confounded by the results.
## Attached is an R object file containing animal relocations with fields
## for "id" and "x" & "y" coordinates (in metres)
load("temp")
require(adehabitat)
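Something along these lines is what I am comparing (an untested sketch, not my
original code; it assumes the old adehabitat API, and the object name "relocs"
is made up to stand for the attached relocations):

# A sketch assuming a data frame "relocs" (hypothetical name) with
# columns "id", "x" and "y" in metres
xy <- relocs[, c("x", "y")]
# Method 1: 95% fixed-kernel home range (areas in hectares by default)
ud <- kernelUD(xy, id = relocs$id)
kernel.area(ud, levels = 95)
# Method 2: 95% minimum convex polygon and its area
hr <- mcp(xy, id = relocs$id, percent = 95)
mcp.area(xy, id = relocs$id, percent = 95)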
I am trying to estimate home range size using the plug-in method with kernel
density estimation in the kernel smoothing (ks) package. Unless there is
another way I am not familiar with, in order to calculate the area under a
given density contour I need to convert my kde() object into a spatial
object somehow.
http://old.nabble.com/file/p26533942/temp temp
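One route I am considering (an untested sketch, not a confirmed solution):
extract the 95% probability contour from the kde object with contourLines(),
rebuild it as an sp polygon, and take the area from that; a shapefile could
then be written from the sp object (e.g. via maptools).

# A sketch assuming the ks and sp packages; "xy" stands in for my relocations
library(ks)
library(sp)
xy  <- data.frame(x = rnorm(200), y = rnorm(200))
H   <- Hpi(xy)                        # plug-in bandwidth matrix
kd  <- kde(xy, H = H)
lev <- contourLevels(kd, cont = 95)   # density height of the 95% contour
cl  <- contourLines(kd$eval.points[[1]], kd$eval.points[[2]],
                    kd$estimate, levels = lev)
polys <- lapply(seq_along(cl), function(i)
  Polygons(list(Polygon(cbind(cl[[i]]$x, cl[[i]]$y))), ID = i))
sp95 <- SpatialPolygons(polys)
# total area in square metres, since the coordinates are in metres
sum(sapply(slot(sp95, "polygons"), function(p) slot(p, "area")))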
Is there any way to identify or infer the inflection points in a smooth
spline object? I am doing a comparison of various methods of time-series
analysis (polynomial regression, spline smoothing, recursive partitioning)
and I am specifically interested in obtaining the Julian dates associated
with the inflection points.
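One approach that seems workable (an untested sketch with stand-in data):
evaluate the second derivative of the fitted spline on a fine grid and look
for sign changes.

# Inflection points of a smooth.spline fit via its second derivative
jday <- 1:365                                # stand-in Julian dates
y    <- sin(jday / 58) + rnorm(365, sd = 0.1)
fit  <- smooth.spline(jday, y)
grid <- seq(min(jday), max(jday), length.out = 1000)
d2   <- predict(fit, grid, deriv = 2)$y
grid[which(diff(sign(d2)) != 0)]             # dates where the sign changes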
Is there any package or operation in R designed to conduct or facilitate
Quadrat Variance analysis of spatial data? Any leads would be much
appreciated as I have found very little in my searches thus far.
Tyler
Hello there,
Is there a way of truncating in the opposite direction so as to retain only
the values to the right of the decimal?
i.e. rather than:
> trunc(39.5)
[1] 39
I would get something like:
> revtrunc(39.5)
[1] 0.5
I've been searching to no avail but I imagine there is a very simple
solution.
But note that a plain x - floor(x) version loses the sign for negatives:

>> revtrunc(-39.5)
> [1] 0.5
>
> I'm not sure what you'd want for negative numbers. One possibility:
>
> revtrunc <- function(x) { sign(x) * (x - floor(x)) }
>> revtrunc(39.5)
> [1] 0.5
>> revtrunc(-39.5)
> [1] -0.5
>
> Sarah
>
This definitely does the trick.
I knew there was an easier way!
Petr Pikal wrote:
>
> [EMAIL PROTECTED] wrote on 27.05.2008 09:31:16:
>
>> Hi Petr
>>
>> My mistake in omitting "replace=T" in the first part.
>> Unfortunately I oversimplified my problem as I'm not actually dealing
>> with
I have a matrix of frequency counts from 0-160.
x <- as.matrix(c(0,1,0,0,1,0,0,0,1,0,0,0,0,1))
I would like to apply a function creating a new column (x[,2]) containing
values equal to:
a) log(x[m,1]) if x[m,1] > 0; and
b) for all x[m,1] = 0, log(next x[m,1] > 0 / count of preceding zero values
+ 1)
In fact x[4,2] should = log(x[5,1]/2)
whereas x[3,2] = log(x[5,1]/3)
i.e. the denominator in the log function equals the number of rows between
m==0 and m>0 (inclusive, hence the "+1")
Hope this helps!...
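A direct, if unvectorised, sketch of what I mean (my own illustration, not
the solution posted in the thread):

# For a zero at row m, divide the next positive value by the number of rows
# from m up to and including that next positive value, then take the log
x <- c(0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1)
out <- numeric(length(x))
for (m in seq_along(x)) {
  if (x[m] > 0) {
    out[m] <- log(x[m])
  } else {
    nxt    <- m + which(x[(m + 1):length(x)] > 0)[1]  # next positive row
    out[m] <- log(x[nxt] / (nxt - m + 1))
  }
}
round(out, 7)

This reproduces, e.g., x[4,2] = log(x[5,1]/2) and x[3,2] = log(x[5,1]/3).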
In the following example:
x <- rnorm(100)
y <- seq(from=-2.5, to=3.5, by=0.5)
z <- as.matrix(table(cut(x, c(-Inf, y, +Inf))))
## I wish to transform the values in z
j <- log(z)
## Yet retain the row names
row.names(j) <- row.names(z)
Now, how can I go about creating a scatterplot with row.names(j) along one
axis?
I did have the problem of not having two continuous variables; this approach
circumvents it, allowing me in fact to plot the rownames.
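One way to get there (an untested sketch of my own, not the reply from the
thread), continuing the example above:

# Plot against the bin index and write the row names on the axis
ok <- is.finite(j)                  # drop the log(0) = -Inf bins
plot(which(ok), j[ok], xaxt = "n", xlab = "Bin", ylab = "log frequency")
axis(1, at = seq_along(j), labels = row.names(j), las = 2, cex.axis = 0.7)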
> /seq(y$lengths[.indx]+1, by=-1,
>          length=y$lengths[.indx]))
> + else rep(log(y$values[.indx]), y$lengths[.indx])
> + })
>> unlist(result)
> [1] -0.6931472  0.0000000 -1.0986123 -0.6931472  0.0000000 -1.3862944
>     -1.0986123 -0.6931472  0.0000000
> [10] -1.6094379 -1.3862944 -1.0986123 -0.6931472  0.0000000
else rep(log(y$values[.indx]), y$lengths[.indx])
})
# but I am clearly missing something!
Does it not work because I haven't addressed what to do with the zeros and
log(0) = -Inf?
I've tried adding another "ifelse" but I still get the same result.
Can someone find the problem?
the 15th element, 1, becomes log(1) = 0

T.D.Rudolph wrote:
>
> I'm trying to build on Jim's approach to change the parameters in the
> function, with new rules:
>
> 1. if (x[i]==0) NA
> 2. if (x[i]>0) log(x[i]/(number of consecutive zeros immediately preceding
>    it +1))
I am trying to set up a function which processes my data according to the
following rules:
1. if (x[i]==0) NA
2. if (x[i]>0) log(x[i]/(number of consecutive zeros immediately preceding
it +1))
The data this will apply to include a variety of whole numbers not limited
to 1 & 0, a number of which
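A minimal looped sketch of these two rules (my own illustration, not the
rle()-based function discussed below):

# NA for zeros; otherwise log of the value divided by (preceding zeros + 1)
f <- function(x) {
  out   <- rep(NA_real_, length(x))
  zeros <- 0
  for (i in seq_along(x)) {
    if (x[i] == 0) {
      zeros <- zeros + 1
    } else {
      out[i] <- log(x[i] / (zeros + 1))
      zeros  <- 0
    }
  }
  out
}
f(c(0, 1, 0, 0, 1, 0, 0, 0, 2, 5))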
You are absolutely correct Marc; thank you for making this assumption!
While it will take some time for me to fully understand the logical
ordering of your proposed function, it certainly seems to produce the result
I was looking for, and economically at that. rle() is indeed highly useful.
I have a dataframe, x, with over 60,000 rows that contains one factor, "id",
with 27 levels.
The dataframe contains numerous continuous values (along column "diff") per
day (column "date") for every level of id. I would like to select only one
row per animal per day, i.e. that containing the minimum value of "diff",
yet keep the remaining column information and retain it in the final output.
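A sketch of one way to keep the whole row (my own illustration with made-up
data, not the solution that was posted):

# Keep the full row with the smallest "diff" per id per date
x <- data.frame(
  id   = factor(rep(c("a", "b"), each = 4)),
  date = rep(as.Date("2008-06-01") + 0:1, 4),
  diff = runif(8)
)
do.call(rbind,
        lapply(split(x, list(x$id, x$date), drop = TRUE),
               function(d) d[which.min(d$diff), ]))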
but I have no way of verifying without the id membership in the output.
Charilaos Skiadas wrote:
>
> On Jun 14, 2008, at 1:25 AM, T.D.Rudolph wrote:
>
>> aggregate() is indeed a useful function in this case, but it only
>> returns the columns by which
http://www.nabble.com/file/p18018170/subdata.csv subdata.csv
I've attached 100 rows of a data frame I am working with.
I have one factor, id, with 27 levels. There are two columns of reference
data, x and y (UTM coordinates), one column "date" in POSIXct format, and
one column "diff" in times
I have numerous objects, each containing continuous data representing the
same variable, movement rate, yet each having a different number of rows.
e.g.
d1<-as.matrix(rnorm(5))
d2<-as.matrix(rnorm(3))
d3<-as.matrix(rnorm(6))
How can I merge these three columns side-by-side in order to create a
single matrix?
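One possibility (an untested sketch): pad the shorter columns with NA up to
the longest, then bind them.

d1 <- as.matrix(rnorm(5))
d2 <- as.matrix(rnorm(3))
d3 <- as.matrix(rnorm(6))
cols <- list(d1 = d1, d2 = d2, d3 = d3)
n    <- max(sapply(cols, nrow))
sapply(cols, function(d) c(d, rep(NA, n - nrow(d))))   # 6 x 3 matrix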
Hi there,
I'm dealing with a pretty big dataset (~22,000 entries) with numerous
entries for every day over a period of several years. I have a column
"judy" (for Julian Day) with 0 beginning on Jan. 1st of every new year (I
want to compare tendencies between years). However, in order to control
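For reference, a column like "judy" can be derived directly from a Date
vector (a sketch; the names are my own):

dates <- as.Date(c("2006-01-01", "2006-03-15", "2007-01-01", "2007-07-04"))
judy  <- as.POSIXlt(dates)$yday   # 0 on Jan 1st of each year
judy                              # 0 73 0 184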
I am trying to fit a very simple broken stick model using the package
"segmented" but I have hit a roadblock.
> str(data)
'data.frame': 18 obs. of 2 variables:
 $ Bin   : num 0.25 0.75 1.25 1.75 2.25 2.75 3.25 3.75 4.25 4.75 ...
 $ LnFREQ: num 5.06 4.23 3.50 3.47 2.83 ...
I fit the lm easily enough
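The basic call sequence I have in mind (an untested sketch; the starting
breakpoint psi = 2 is a guess, not a value from the thread):

library(segmented)
fit.lm  <- lm(LnFREQ ~ Bin, data = data)
fit.seg <- segmented(fit.lm, seg.Z = ~ Bin, psi = 2)  # one breakpoint near Bin = 2
summary(fit.seg)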
I thought this problem would be resolved when I switched to R version 2.7.0
(for Windows), but no - anytime I plot something that produces more than one
page of graphics, the graphics window starts by showing the first page,
until such time as I hit enter to show me the next page, at which time it

... to generate a new series of figures for review in the single window (as
opposed to numerous separate ones). At this point scrolling using PgUp and
PgDn seems to work fine.
Tyler
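(For reference: the PgUp/PgDn paging depends on the Windows device recording
its pages; a sketch, not from the thread:)

windows(record = TRUE)            # turn on page recording (see ?windows)
options(graphics.record = TRUE)   # or set it globally for new devices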
Can anyone shed any light on this topic for us?
I would like to attempt agglomerative clustering with a contiguity
constraint (a.k.a. intermediate linkage clustering), as described by
Legendre & Legendre (1998, page 697)
Is there any code kicking around for this type of analysis specifically?
Does anyone know an alternate way of calculating R2n with spatial data other
than converting values into ltraj format (adehabitat package)?
I have a series of geographic xy coordinates (in metres). I imagine I can
subtract x1y1 from x2y2 to get the spatial difference (dxdy), but what
function would give me the squared net displacement from the first point?
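As adehabitat defines it, R2n is the squared net displacement from the first
relocation, so it can also be computed directly (a sketch with a stand-in
trajectory):

x <- cumsum(rnorm(10)); y <- cumsum(rnorm(10))   # stand-in coordinates (m)
R2n <- (x - x[1])^2 + (y - y[1])^2
R2n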
I conducted a frequency averaging procedure which left me with the data frame
below (Bin is an artifact of a cut() procedure and can be either
as.character or as.factor):
          Bin     Freq
1 (-180,-160] 7.904032
2 (-160,-140] 5.547901
3 (-140,-120] 4.522542
4 (-120,-100] 4.784184
5 (-