Hi @xqdan, when you say "not in MindSpore for now", do you mean AKG is still a
standalone codegen toolkit, or has it already been integrated into your internal
TensorFlow/PyTorch versions?
---
Are there any details about the TVM + MKLDNN BERT integration work?
I would like to take a look and see its potential connection with Ansor.
---
Thanks for the nice RFC.
Happy to see folks other than us also paying attention to the MLIR-as-a-bridge
design for integrating TVM as a backend for TensorFlow (or maybe more than
TensorFlow ^-^).
Inside Alibaba, we are also working on related things.
To be more specific, for static shape JIT
Is there any plan to integrate TVM as a dialect into MLIR, so that other
components based on MLIR can leverage TVM's capabilities, such as
high-performance codegen and fusion?
---
> Hi @yangjunpro @hello-hzb,
> This project has been suspended for several months. I won't continue my work
> on the original branch.
> However, the push for an auto-scheduler is still interesting to a lot of
> people. I might work on auto-scheduler again with some Berkeley students.
> We'd lik
Unifying the export interface at this point in time is good design taste.
One small question: in the RFC, it looks like the interface of `relay.build()`
will change slightly, from returning `graph_json, lib, params` to returning
`compiled_graph_module, params`.
It may look like a breaking change. Wil
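For what it is worth, here is a minimal caller-side sketch of the change being
discussed. The executable line assumes a pre-RFC TVM build; the post-RFC form is
only shown as a comment, because its exact shape (the name
`compiled_graph_module` and the two-value return) is taken from the RFC text
above and may not match the final API:

```python
import numpy as np
import tvm
from tvm import relay

# A tiny Relay module, just so there is something to compile.
x = relay.var("x", shape=(1, 8), dtype="float32")
w = relay.var("w", shape=(8, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))
params = {"w": np.random.rand(8, 8).astype("float32")}

# Current (pre-RFC) interface: three separate artifacts.
graph_json, lib, params = relay.build(mod, target="llvm", params=params)

# Interface as quoted in the RFC: a compiled graph module plus params.
# The name "compiled_graph_module" and the two-value return are taken from the
# RFC text above; the final API may differ.
# compiled_graph_module, params = relay.build(mod, target="llvm", params=params)
```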
@merrymercy
Hi Lianmin,
Thanks for the nice proposal. May I know the latest progress of the
auto-scheduling work?
It looks like there hasn't been any status update for a long time.
Regards
Jun
---
> Awesome solution! Just curious: for shapes that are worse than cudnn/cublas,
> what kind of tuning is used?

Good point! We do have some internal discussions about whether we need to
automatically search the schedule space based on the performance of TensorCore
versus non-TensorCore kernels, sin
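As a side note, the kind of performance-based dispatch being discussed could
look roughly like the sketch below. Everything here is hypothetical (it is not
TVM API): it simply benchmarks a TensorCore and a non-TensorCore candidate for a
given shape and keeps the faster one.

```python
import time

def pick_faster_kernel(candidates, run_once, warmup=3, repeat=10):
    """Return the label and average latency of the fastest candidate kernel.

    candidates: dict mapping a label (e.g. "tensorcore", "non_tensorcore")
                to a callable kernel compiled for the shape in question.
    run_once:   callable that executes one kernel on representative inputs.
    Hypothetical helper, not part of TVM.
    """
    best_label, best_time = None, float("inf")
    for label, kernel in candidates.items():
        for _ in range(warmup):          # warm up caches / driver state
            run_once(kernel)
        start = time.perf_counter()
        for _ in range(repeat):
            run_once(kernel)
        elapsed = (time.perf_counter() - start) / repeat
        if elapsed < best_time:
            best_label, best_time = label, elapsed
    return best_label, best_time
```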
Nice to see other folks working on adding TensorCore support to TVM; we have
also been working on enhancing TVM to incorporate TensorCore schedule support.
If my understanding is correct, @Hzfengsy, your solution is based on extending
TVM's intrinsics, while our solution puts most of the complexit
@kovasb Nice to see your interest in our TVM & TF NMT article :)
We have also had some internal discussions about adding a non-TF DL compiler
backend to TF as a complement to XLA, and TVM is absolutely one of the great
choices.
There are some principles I think we might need to follow