TE allows multiple IterVars on one attaching path (i.e. two nested loops)
binding to one thread IterVar. This seems necessary for correct bounds to be
inferred in schedules where stages have different thread allocations. These
schedules certainly represent valid and useful use cases.
Sorry, I didn't make it clear.
The code pasted in the first issue works well after adopting your solution.
I then tested the 'gpt2' model, and it reported an error:
```
TypeError: int() argument must be a string, a bytes-like object or a number,
not 'Call'
```
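This TypeError is plain Python behavior: `int()` rejects any object that is not a string, bytes, or number, and here the frontend is handing it an unconverted `Call` expression node. A minimal stand-alone reproduction (the `Call` class below is only a stand-in for the real node, not TVM code):

```python
# Stand-in for the unconverted expression node the frontend passes to int();
# the real object is a Relay Call, but any non-numeric class triggers the
# same error, with the offending type name in the message.
class Call:
    pass

try:
    int(Call())
except TypeError as e:
    print(e)  # the message names the offending type: 'Call'
```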
The test code is shown below:
```
from tvm
I finally figured it out. It was because I didn't specify "keys" when I created
the new 'mytarget'. Once I added 'mytarget' as the key, the dense_strategy
registration works like a charm...
It would be really appreciated if there were a tutorial/docs on how to add a new
target :slight_smile:
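For anyone hitting the same thing: strategy registration is dispatched on the target's keys, so a target created without keys never matches any registered strategy. The exact `tvm.target` API varies by version, so here is only a pure-Python sketch of the dispatch idea (all names are hypothetical, not the real TVM internals):

```python
# Hypothetical sketch of key-based strategy dispatch, NOT real TVM code.
strategies = {}  # maps a target key -> its dense strategy function


def register_dense_strategy(key):
    """Register a strategy under one target key (simplified)."""
    def _register(fn):
        strategies[key] = fn
        return fn
    return _register


@register_dense_strategy("mytarget")
def dense_mytarget(attrs, inputs, out_type):
    return "mytarget dense implementation"


def lookup_strategy(target_keys):
    # The first key with a registered strategy wins; a target created
    # without keys (empty list) finds nothing -- the failure mode above.
    for key in target_keys:
        if key in strategies:
            return strategies[key]
    raise KeyError("no dense strategy for keys %r" % (target_keys,))
```

With `keys=["mytarget"]` the lookup succeeds; with an empty key list it raises, which mirrors why the registration appeared not to work.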
---
I think that will be very useful, thank you
---
[Visit
Topic](https://discuss.tvm.ai/t/example-of-a-model-using-attention/6687/3) to
respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click
here](https://discuss.tvm.ai/email/unsubsc
@hht, I double-checked; it seems the real problem might be that I declared
'mytarget' as a completely new target instead of declaring it as a device
under target 'ext_dev'. Running
```
print(tvm.target.Target.current(allow_none=False))
```
shows a different result.
With VTA:
```
relay/backe
```
I think there is an example in PyTorch:
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
Hope this helps :-)
---
[Visit Topic](https://discuss.tvm.ai/t/example-of-a-model-using-attention/6687/2) to respond.
My research collaborators are interested in porting some RNNs with attention
into Relay. Are there existing public examples of Relay models that use
attention that I could use as a guide to implement some of these RNNs? I would
appreciate any examples or assistance.
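While waiting for pointers, the attention step itself is small enough to sketch directly. Below is a minimal, dependency-free scaled dot-product attention over a single query, for orientation only; an actual Relay port would of course express this with Relay ops rather than Python lists:

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    keys/values are lists of vectors; returns the attention-weighted
    sum of the values.
    """
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query that aligns with the first key pulls the output toward the first value, which is the whole mechanism an attention RNN wraps in learned projections.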
---
> from tvm import relay
> import torch
> from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
> import logging
>
> logging.basicConfig(level=logging.INFO)
I used the code you pasted in the first issue and it's able to generate a
complete relay function. I'm not able to ge
Thanks!
I tried to fix it with:
```
def _tensortonum():
    def _impl(inputs, input_types):
        return inputs[0]
    return _impl
```
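For context on where a fix like this plugs in: the PyTorch frontend keeps a table mapping TorchScript op names to converters, and a pass-through converter like the one above simply forwards its first input. A rough sketch of that dispatch pattern (table and helper names simplified, not the actual frontend code):

```python
# Simplified sketch of op-name -> converter dispatch (not the real frontend).
def _tensortonum():
    def _impl(inputs, input_types):
        # Pass the value through unchanged instead of forcing int() on it.
        return inputs[0]
    return _impl


convert_map = {
    "prim::ImplicitTensorToNum": _tensortonum(),
}


def convert_op(op_name, inputs, input_types):
    """Look up and apply the converter registered for op_name."""
    if op_name not in convert_map:
        raise NotImplementedError(op_name)
    return convert_map[op_name](inputs, input_types)
```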
Another error occurs:
```
TypeError: int() argument must be a string, a bytes-like object or a number,
not 'Call'
```
```
'58', %58 : Scalar = prim::ImplicitTe
```
[PR 5603](https://github.com/apache/incubator-tvm/pull/5603) will help solve
the issue with `prim::ImplicitTensorToNum`.
If you come across an issue with matmul, you can merge [PR
5604](https://github.com/apache/incubator-tvm/pull/5604) as well.
---
[Visit
Topic](https://discuss.tvm.ai/t/pr
I don't think it's fixed yet; I am having the same issues. Is there any update
regarding this?
---
[Visit Topic](https://discuss.tvm.ai/t/issue-with-static-tensor-array/6333/8)
to respond.