> is true.
>
> This approach makes better use of your data, because when you correlate
> the observations, you're effectively "losing" variability (because
> correlations are doubly standardized) as well as degrees of freedom
> (you have 9 df within each i
Dear R-Users,
I am currently looking for a way to test the equality of two correlations
that are related in a very special way. Let me describe the situation with
an example.
- There are 100 respondents, and there are 2 points in time, t=1 and t=2.
- For each of the respondents and at each of
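Since both correlations would be computed on the same 100 respondents at t=1 and t=2, they are dependent, so a plain independent-samples comparison does not apply. A minimal base-R sketch of one way to compare them, bootstrapping over whole respondents so the dependence is preserved; all object names (x1, y1, x2, y2, n.boot) and the toy data are invented for illustration:

## toy data: the same n = 100 respondents measured at two time points
set.seed(1)
n  <- 100
x1 <- rnorm(n); y1 <- 0.5 * x1 + rnorm(n)   # variables at t = 1
x2 <- rnorm(n); y2 <- 0.3 * x2 + rnorm(n)   # variables at t = 2

## bootstrap the difference of the two correlations, resampling respondents
n.boot    <- 2000
diff.boot <- replicate(n.boot, {
  idx <- sample(n, replace = TRUE)
  cor(x1[idx], y1[idx]) - cor(x2[idx], y2[idx])
})

## percentile confidence interval for the difference; an interval that
## excludes 0 speaks against equal correlations
quantile(diff.boot, c(0.025, 0.975))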
Dear R-Users,
I am working on a Hierarchical Bayes model and tried to replace the inner
for-loop (which loops over a list with n.observations elements) with truly
vectorized code (where I calculate everything based on ONE dataset covering
all respondents).
However, when comparing the performance
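A minimal sketch of the kind of comparison being described: one matrix product per respondent in an inner loop versus a single computation on the stacked data. The dimensions, object names (X.list, beta, k, p) and data below are invented for illustration.

n.observations <- 500
k <- 10                                   # rows per respondent
p <- 5                                    # number of parameters
X.list <- replicate(n.observations,
                    matrix(rnorm(k * p), k, p), simplify = FALSE)
beta <- matrix(rnorm(p * n.observations), p, n.observations)

## inner for-loop: one matrix product per respondent
t.loop <- system.time(
  res.loop <- lapply(seq_len(n.observations),
                     function(i) X.list[[i]] %*% beta[, i])
)

## vectorized: stack all respondents, expand the coefficients, one rowSums()
X.all <- do.call(rbind, X.list)
idx   <- rep(seq_len(n.observations), each = k)
t.vec <- system.time(
  res.vec <- rowSums(X.all * t(beta)[idx, ])
)

all.equal(unlist(res.loop), as.vector(res.vec))   # same results both ways
t.loop["elapsed"]
t.vec["elapsed"]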
Dear R-Users,
As I will start a huge simulation in a few weeks, I am about to buy a new
and fast PC. I have noticed that RAM has been the limiting factor in many of
my calculations up to now (I had 2 GB in my "old" system, but Windows still
used quite a lot of virtual memory), hence my new c
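A quick base-R sketch for checking how much memory the simulation objects themselves take; the example matrix is invented, only object.size() and gc() come from base R:

x <- matrix(rnorm(1e6), 1000, 1000)    # example object of roughly 8 MB
format(object.size(x), units = "Mb")   # memory taken by one object
gc()                                   # memory currently used by R, after garbage collection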
rowSums(Designmat * Betamat2[rep(1:n.obs, rep(n.rowsperobs, n.obs)), ])
If somebody can think of an even faster way: any comments are greatly
welcome!
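A small self-contained check of that one-liner, under the assumption that Betamat2 is simply t(Betamat) (one row of coefficients per observation) and that the rows of Designmat are grouped by observation; the toy dimensions are invented:

n.obs <- 4; n.rowsperobs <- 3; n.param <- 2      # toy dimensions
Designmat <- matrix(rnorm(n.obs * n.rowsperobs * n.param),
                    n.obs * n.rowsperobs, n.param)
Betamat  <- matrix(rnorm(n.param * n.obs), n.param, n.obs)
Betamat2 <- t(Betamat)

## vectorized version: expand each observation's coefficient row to its
## n.rowsperobs design rows, multiply element-wise, sum within rows
fast <- rowSums(Designmat * Betamat2[rep(1:n.obs, rep(n.rowsperobs, n.obs)), ])

## per-observation matrix products for comparison
slow <- sapply(1:n.obs, function(x)
  Designmat[rep(1:n.obs, each = n.rowsperobs) == x, ] %*% Betamat[, x])

all.equal(fast, as.vector(slow))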
Ralph79 wrote:
>
> Dear R-users,
>
> I am working on a problem that I am currently not able to solve
> efficiently. It is about
.param)
>
> Design.spl <- split(as.data.frame(Designmat),
>                     rep(1:n.obs, each = n.rowsperobs))
> res <- sapply(1:ncol(Betamat),
>               function(x) as.matrix(Design.spl[[x]]) %*% Betamat[, x])
>
> On 20/01/2008, Ralph79 <[EMAIL PROTECTED]> wrote:
Dear R-users,
I am working on a problem that I am currently not able to solve efficiently.
It is about multiplying each column of one matrix with only a certain block
of rows of another matrix.
Let me illustrate my problem with an example:
n.obs = 800
n.rowsperobs = 300
n.param = 23
Designmat =
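The example is cut off at this point; a minimal reconstruction under the stated dimensions might look like the sketch below, ending in the straightforward per-observation loop. The contents of Designmat and the coefficient matrix Betamat are invented for illustration.

n.obs        <- 800
n.rowsperobs <- 300
n.param      <- 23

Designmat <- matrix(rnorm(n.obs * n.rowsperobs * n.param),
                    nrow = n.obs * n.rowsperobs, ncol = n.param)  # rows grouped by observation
Betamat   <- matrix(rnorm(n.param * n.obs), n.param, n.obs)       # one coefficient column per observation

## straightforward per-observation loop: column i of Betamat multiplies only
## observation i's block of rows in Designmat
res <- vector("list", n.obs)
for (i in 1:n.obs) {
  rows     <- ((i - 1) * n.rowsperobs + 1):(i * n.rowsperobs)
  res[[i]] <- Designmat[rows, ] %*% Betamat[, i]
}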